10.5446/51305 (DOI)
We have with us our first speaker, Eliseu Pereira. He is a researcher at the Research Center for Systems and Technologies (SYSTEC), which is hosted at the Faculty of Engineering of the University of Porto. Eliseu, the floor is yours; I don't know if you want to add anything to your presentation, so please go ahead. Okay, thank you. So, hello everyone. My name is Eliseu Pereira, I'm from the University of Porto in Portugal, and my presentation is about DINASORE, which is a tool that enables reconfiguration in cyber-physical production systems. Starting with the agenda for this presentation: in the first part I want to introduce a little bit of the Industry 4.0 context, cyber-physical systems, reconfiguration in those systems, and IEC 61499, which is a standard for reconfiguring cyber-physical systems. Then I will talk about the implementation of DINASORE itself and the integration with other platforms. The third point is the test case scenarios that I have to present to you: there are three different cases, plus another one related to the performance evaluation of the tool. The last point covers the conclusions and the future work of this presentation and of this project. So, starting with the introduction. As you probably know, Industry 4.0 brings several advantages, like the digitalization of industrial equipment, which enables connectivity between different kinds of devices and forms networks of cyber-physical systems. Because of that, it is important to have good tools to reconfigure cyber-physical systems in order to allow a quick modification of requirements. For example, if you want to produce a new product, you don't need to go to the equipment itself and reprogram it; you can perfectly well reconfigure the equipment from a remote computer and easily change your production targets, change your production line, and so on. For that, there are different tools and programming languages to implement these kinds of distributed cyber-physical systems. One of them is IEC 61499, which is an industrial standard. Another one is Node-RED, which also allows the configuration of devices, and another is Eclipse Kura. Focusing on IEC 61499, the standard is composed of two different components, the development environment and the runtime environment. The development environment is basically where the user or the developer orchestrates their pipeline of tasks. Each task that you have here is a function block, which has an event interface, where the events trigger the different functionalities of that function block, and a data interface. So the user orchestrates a pipeline of function blocks, which can perform a certain operation, like moving a robotic arm and so on. Then the user maps each function block to one device that is running a runtime environment, and the IDE automatically sends the function block to that runtime environment to execute it. Okay. Focusing on the development environments that exist right now, these are the three most common ones, among them the FBDK and the 4DIAC-IDE. The most popular is the 4DIAC-IDE, which is an Eclipse-based tool and has integration with FORTE, a runtime environment implemented in C++. The other IDEs have different features, but they are not as popular in the community. Regarding the runtime environments that exist, some of them are implemented in C++, others in Java.
And typically the programming language in which the runtime environment is implemented is the same one we need to use to implement the function blocks, and that can lead to some restrictions in the functionalities we want to implement. For example, if we want to develop a classifier or a regression algorithm, it's easier to develop that in Python, so that could be a disadvantage of using those kinds of runtime environments. Another disadvantage is the third-party integration: regarding Industry 4.0 and interoperability between a large number of devices, it's a limitation, because we need to be able to share information between every device that we have on the shop floor. And some of those runtime environment tools are archived projects, which is also a big limitation, because if you run into an error, you cannot report that error to anyone since the projects are no longer maintained. So, moving on to the implementation of DINASORE itself. In the architecture that we have, we basically adopt the 4DIAC-IDE as the integrated development environment to design our pipeline, and then we have the integration with DINASORE, which is a runtime environment that executes on embedded systems or, for example, other kinds of computers. DINASORE is implemented in Python, which allows us to easily implement algorithms for machine learning and communication, and to have third-party integration with OPC UA applications, which is good because in industry most applications communicate through that standard. If you are curious to check the application, here is the link; you can go there and find tutorials and some other examples that we used to implement DINASORE, some third-party scenarios, and so on. The DINASORE implementation itself uses a producer-consumer pattern based on threads, where each thread is basically a function block that consumes events and produces events. When a function block receives an event, the event triggers its functionalities, it executes, and then it produces the output events that are forwarded to the next function block, which executes again, produces its outputs and sends the events to the next one. Okay, that is the execution model that we have implemented in DINASORE. The function blocks, which are the tasks that we execute in our runtime environment, are implemented using two different files: a Python file, where the functionalities, the code itself, are implemented, and an XML file, where we define the structure of the function block, basically the names of the events, the types of the variables that the function block has as input and output, and so on. The next point is the integration with 4DIAC. 4DIAC is a separate project; we integrate with it using TCP/IP sockets and XML messages. That communication allows 4DIAC to create, stop and delete the pipelines of function blocks in the RTE that is running those function blocks. This way you can design your pipeline in the 4DIAC-IDE and deploy your pipeline of function blocks to the RTEs existing in the network. Another feature is the ability to monitor variables and trigger events in the function blocks. For example, imagine that you have this function block executing and you want to see the state of some variable: in the 4DIAC-IDE you are able to check the value of that variable. Regarding the OPC UA integration, as I said before, it enables the connectivity with external industrial platforms or applications.
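To make the execution model described above a bit more concrete, here is a minimal Python sketch of an event-driven function block running as a producer-consumer thread. The class, method names and queue-based wiring are illustrative assumptions, not the actual DINASORE API.

```python
# Minimal sketch of an event-driven function block (FB) thread, assuming a
# queue-based producer-consumer wiring; names are illustrative, not DINASORE's API.
import threading
import queue

class TempReaderFB(threading.Thread):
    """Hypothetical FB: consumes a READ event, produces a CNF event with data."""

    def __init__(self, event_in: queue.Queue, event_out: queue.Queue):
        super().__init__(daemon=True)
        self.event_in = event_in    # events arriving from upstream FBs
        self.event_out = event_out  # events forwarded to downstream FBs

    def execute(self, event_name, data):
        # Functionality that would live in the FB's Python file.
        if event_name == "READ":
            temperature = 21.5  # placeholder for a real sensor read
            return "CNF", {"temperature": temperature}
        return None, None

    def run(self):
        while True:
            event_name, data = self.event_in.get()          # consume an input event
            out_event, out_data = self.execute(event_name, data)
            if out_event is not None:
                self.event_out.put((out_event, out_data))   # produce an output event

# Wiring FBs into a pipeline: the output queue of one FB is the input of the next.
q_in, q_out = queue.Queue(), queue.Queue()
reader = TempReaderFB(q_in, q_out)
reader.start()
q_in.put(("READ", None))
print(q_out.get())  # ('CNF', {'temperature': 21.5})
```

In DINASORE itself, the event and data interface of such a block would additionally be declared in the accompanying XML file, as described above.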
The data model that we have implemented allows us to map each function block into a different category, to better organize the information that we have: a device, a service, a start point and an end point type. The start point and end point types are more related to protocols, where the data comes from or goes to another element, for example another function block; so, for example, you would use a communication protocol such as MQTT or TCP/IP to implement a start point or an end point to bring data in or out. The device type is a function block that runs permanently, reading data, for example, from a sensor, and so on. The data model also allows us to store the current state of DINASORE, so, for example, if DINASORE crashes and we restart it, we continue in the same state. The development process, from the point of view of the developer managing the whole system, is composed of these four steps. The steps are performed in the 4DIAC-IDE and, if you want, in another platform, for example an IDE to develop the code. The first step is to develop the function blocks, the Python file including the functionalities and the XML file with the structure of the function block. Then, when the function blocks are created, you can orchestrate a pipeline, connecting the function blocks according to the dependencies between them. The next step is to map each one of the function blocks to a different device: we have here, for example, two different devices with two different IP addresses and ports, and you can map each function block to the device where you want it. Then, in the 4DIAC-IDE, you are able to deploy the pipeline and send the respective function blocks to each device that is running a DINASORE instance. Now, the test case scenarios that I want to present for this platform. The first one is collision detection using servo motors. We have here a robotic arm based on servo motors, as you can see here: this is one servo motor, this is another one, and another one. In this case, we collect data from the servo motors, namely the voltage, current, real and apparent power from the base and shoulder motors of the robotic arm. Here, this function block collects the data from the shoulder (voltage, current, real and apparent power) and this one collects the same data from the base motor. The next function block, using the data collected via serial ports, predicts whether the robotic arm is colliding with some object or not. Basically, for that we use a classification algorithm, in this particular case a random forest. If it detects that we have a collision, it sends an event to the robotic arm controller, which is performing some task, telling it to stop in order to avoid being damaged by colliding with an obstacle. The next test case scenario has as components a UR5 robotic arm and a 3D-printed gripper, which is composed of a Raspberry Pi, a servo motor and the 3D-printed parts. In this case, we have two function blocks: one to move the robotic arm to a particular position x, y, z, with a particular rotation of the joints of the robotic arm, and another function block, running on the Raspberry Pi of the 3D-printed gripper, that opens and closes the gripper using the GPIOs of the Raspberry Pi. The latter one is running on the Raspberry Pi, while the first one is running on another computer that is controlling the robotic arm.
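A rough scikit-learn sketch of the random-forest collision detection used in the first test case might look like the following; the feature layout follows the talk (voltage, current, real and apparent power for the base and shoulder motors), but the training data here are synthetic placeholders rather than the measurements from the demo.

```python
# Illustrative random-forest collision detector; the feature columns follow the
# talk (V, I, real and apparent power for base and shoulder motors), while the
# numbers below are synthetic placeholders, not the data set from the demo.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 8 features: [V, I, P_real, P_apparent] for the base motor + the same for the shoulder.
X_free = rng.normal(loc=1.0, scale=0.1, size=(200, 8))   # free movement
X_coll = rng.normal(loc=1.6, scale=0.3, size=(200, 8))   # colliding: higher power draw
X = np.vstack([X_free, X_coll])
y = np.array([0] * 200 + [1] * 200)                      # 1 = collision

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

sample = rng.normal(loc=1.6, scale=0.3, size=(1, 8))
if clf.predict(sample)[0] == 1:
    print("Collision predicted: send STOP event to the robot controller FB")
```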
So the operation performed here is: the gripper closes, the arm moves up, moves down, the gripper opens, the arm moves up, moves down, and the gripper closes again. Okay. The next scenario that I have is related to manufacturing applications. In this particular case we want to simulate a production process. Each station of the process contains three different function blocks, and the products are simulated passing through each station and being processed at each station. This example allows us to validate the scalability of DINASORE. Regarding that scalability, we used this example to do the performance evaluation when we increase the number of function blocks, to see how the performance evolves. We concluded that, basically, the complexity of more function blocks causes more resources to be consumed, and the rate of increase is roughly 18 function blocks per 1% of CPU usage (so, for example, around 180 function blocks would consume about 10%). So it is a scalable solution that we have here. If you have, for example, a very high resource usage, you can also split the function blocks between two devices to avoid that high usage of resources. Now for the conclusions and future work. We have proposed a framework that increases the flexibility of traditional industrial systems. The usage of Python is a very important feature because it brings us the advantages of the developments that the Python community is doing in some crucial fields, such as machine learning, robotics, sensing and communication protocols. We have also developed some function blocks, for example for communication using MQTT and TCP, for classification and regression algorithms, and to control GPIOs and other kinds of embedded devices. DINASORE also transforms a local heavyweight application into a distributed solution, which enables us to improve the performance of a system, because you can split the application across more devices and increase the performance this way. As future work, we want to continue developing new function blocks, obviously, and we want to implement a new execution model in DINASORE based on speculative computing, which is a very interesting field where you basically predict which is the next task, or the next function block, to execute; that improves the performance a lot because you have the output ready before actually executing the function. Another interesting point to look at in the future is to integrate the platform with an optimization algorithm to obtain the optimal placement of function blocks on the available devices. Basically, the algorithm can optimize the distribution of the function blocks: if, for example, a device has very high resource consumption, like CPU, the algorithm could decide to split things into another configuration to optimize that kind of operation. So, that was my presentation. Here are some references, and I'm open to any questions if you have them. Sorry if I went over the time. Well, we still have time for a question or two. I don't know if someone from the audience wants to ask something; there is nothing in the chat. So I have more of a curiosity: have you thought about bringing this project to be published as open source at the Eclipse Foundation?
I know that it's already available on GitHub, but I can see that you have integrated it with a lot of our projects, for example the 4DIAC-IDE. Anyway, just a thought, I think it would be really useful; I think it's really interesting what you've achieved here. Yes, yes, thank you for the question. It is very important for us; we want to make that platform available. We published it on GitHub, but putting the platform also in Eclipse is important for us too, because, for example for the IDE, there are a lot of benefits, and it would bring us a lot of advantages in that regard. That is more the field of these two guys here, because they handle the management side, but I do think it is a good thing to put the platform in Eclipse as well and to continue it. Well, let's keep in touch after the talk. I do have a follow-up question about your presentation, but I think I'll save it for the breakout, since we're already running quite a few minutes late. Okay. There is a question in the chat: is this IoT cybersecurity related? Is this about Eliseu's presentation? Would you like to elaborate? I don't know if I am interpreting his question or comment correctly, but: are you considering cybersecurity? We are considering cybersecurity more at the level of the function blocks themselves. We have a colleague who is developing algorithms to detect intrusions in our devices. So basically, imagine that, as in the image, we have a network of devices: we want to develop an algorithm that could detect whether we have an intrusion in some of the devices, and so on. We would have to talk about that in more detail, but we have developed a solution that works with OPC UA, and it can be extended to other types of protocols in the future.
The present-day industrial digital revolution demands software-driven solutions where reconfiguration is one of the key enablers to achieve smart manufacturing through easy deployment and code reuse. Despite the existence of several tools and platforms that allow for software reconfiguration at the digital twin / edge level, it is most of the time difficult to make use of state-of-the-art algorithms developed in the most popular programming languages due to software incompatibility. This paper presents a novel framework named Dynamic INtelligent Architecture for Software MOdular REconfiguration (DINASORE) that implements the industrial standard IEC 61499, based on Function Blocks (FBs), in the Python language for the implementation of Cyber-Physical Production Systems. It adopts the 4DIAC-IDE as graphical user interface (GUI) to ease the design and deployment of FBs in order to quickly reconfigure target equipment on demand. The proposed framework provides data integration with third-party platforms through the use of OPC-UA. The test scenarios demonstrate that the proposed framework 1) is flexible and reliable for different applications and 2) increases the CPU and memory workload linearly with a large number of FBs.
10.5446/51306 (DOI)
I think we'll move to our next presentation, which is "Bringing Clouds Down to Earth: Modeling Arrowhead Deployments via Eclipse Vorto". This is going to be presented by Géza Kulcsár. I apologize for the... No worries, no worries, it's hard to read the right pronunciation; it's Kulcsár. Kulcsár, okay. It's quite surprising, I know, but that's Hungarian. Yeah, the Hungarian language is full of surprises. Yeah, sorry, I always have trouble with the pronunciation of names, especially in these online events where you meet people from everywhere in the world. Yeah, I know that feeling. Okay, thank you. So Géza, the floor is yours. Just a quick introduction: he is a senior researcher at IncQuery Labs in Budapest; he is part of this Budapest-based R&D company specializing in systems engineering. If you want to add something, please go ahead. Thank you very much. So hello again, and thank you for the introduction. This paper will be about bringing those clouds down to Earth, which is, I think, a very engineering-like task to do, and how we interpret this and how we are doing it we will elaborate on in the upcoming minutes, so stay tuned and you will find out what we mean by that. Actually, it's not presented by me alone. As I already said, my name is Géza Kulcsár and I'm from Budapest; I did my PhD in Germany, but then moved back, and now I'm at a systems engineering R&D company called IncQuery Labs here. My partners in crime have been these guys from Bosch.IO, Kevin and Johannes, and Kevin will also present the part of the talk which is more tied to Eclipse Vorto, one of the core technologies we've been using here. So Kevin is also with us, I already see his face, and I will give the mic over to Kevin when it's time. So let me start. This is a problem which I've been struggling with quite a lot in different ways in the past year or so, not only in the frame of this work. I think it is in general one of the grand challenges of IoT engineering, or at least of well-founded, model-based IoT engineering, that's for sure: how can I maintain an overview, how can I maintain an architecture and design perspective on something which is as dynamic as a modern IoT installation or infrastructure? So this famous guy here, in my interpretation, is now the modeler, and what he aims at, I guess, is that he wants to look at those clouds from above; by standing on this rock he wants to keep his perspective, keeping constantly in mind that this IoT cloud architecture will be forever fluctuating and changing. But still, I am strongly convinced that modeling as a general paradigm absolutely has merits when it comes to dealing with this kind of problem, or this kind of discrepancy. But in order to deal with the discrepancy, we first have to, and this is the very high-level motivation of what we're doing, we feel the need to bring those clouds down to earth. What do we mean by that? If I am just starting to design my system, let's say it's going to be an IoT system, let's say it's going to be a manufacturing system, because that's what we are dealing with mostly in the frame of the Arrowhead Tools project where this whole collaboration is rooted, then nothing is there yet; your factory is only going to be there in a few years, and you start to design it, you start thinking about it.
So, the architectural plans which you're creating are very valuable artifacts, and they should remain so throughout the whole lifecycle of the project, but they are up in the clouds, not only in the visual sense of the picture, but also in the sense that they are very high-level abstractions of some future system. Their future realization, of course, should be real and physical, and it should be here, down on earth. So I think one of the main challenges of IoT design in general is: what is an adequate representation of the so-called System of Systems, to use the Arrowhead vocabulary here, which should be realized at some point in time; I mean bridging this gap from high-level design to actual IoT installations in the world of industrial IoT. Another thing we are dealing with is that such a design does not really allow you to capture the real communication in these industrial IoT installations, in the sense that whatever you design, even if you include abstractions of communication channels or links between the systems in your system-of-systems architecture, it is not yet real. And this is because you don't have enough details on what the systems will actually look like, how those systems will be represented in your real setup, and what those devices are going to be which we should integrate. Therefore, this is where we really think that, to mention another very famous emerging buzzword, device digital twins should be integrated with this IoT architecture, in order to bring these IoT designs closer to reality. So this is the big picture. In our paper, we are presenting this with a toy example. The Arrowhead Tools project also has some real-life use cases, which should be realized in a few years in this very manner, but of course for the presentation we just came up with something very simple and easily graspable. Let us just imagine that there is some machine knowing about some task scheduling information, and then there is also another machine, so this is a very simple manufacturing scenario, let's say; then there is some distance measurement unit here. It is all connected up, which is what these lines actually stand for, mostly. This is all tied together with the so-called Arrowhead mandatory core services, a set of services provided by the so-called Arrowhead framework, which is there for you to orchestrate your IoT setup. And then there is an offsetting machine, which is an actual robotic arm or something like that, which will set a given offset between different manufacturing units. What this thing should really do is not that interesting; it is really just an example to convey our ideas: how can we bring such drawing-like representations a bit closer to the world of engineering? The basic ingredient which we are building on is the so-called Systems Modeling Language, SysML, which is a widespread language and standard for systems modeling. Currently, most people use version 1.6; this is also what we are using at the moment. Just as a remark, some people might know that version 2 is in development, so it's coming up next year; it's going to be very interesting, I think it's great, and we are keeping an eye on it. But for now let's stick with version 1.6. SysML is, of course, now undergoing a major refurbishment.
That is actually exactly because it has been conceived for designing more traditional, monolithic systems, and this is not very fashionable nowadays. But for us, SysML is easily extendable, and this is what we're exploiting here. In the rest of the talk, I will refer to the SysML profile which we have created to reason about such IoT designs; I'm exploiting the Systems Modeling Language abbreviation here, and what I want to say is that this is now a system-of-systems modeling language. Of course, it's not a big deal, it's just a profile over SysML, but still it proves to be very useful. The Arrowhead framework which I just mentioned has recently been renamed and put into an Eclipse incubation project, which is now called Eclipse Arrowhead. This is also another very important, central ingredient of what we've been doing. It is a software platform which already hosts that dynamic industrial IoT installation we are talking about, but the way we can reason about architectures and designs here, as has already been mentioned, does not really contain the details yet. We have already proposed this SysML profile and we also use it in different contexts, so it is not only meaningful within the frame of this work. And the key observation behind the contribution which we're presenting now is that we were looking for a conceptually and technologically close candidate for a language which can model device digital twins, because we want to add those real devices, or rather the representations of those devices, into the mix. And this is what the Eclipse Vorto project is good for; that's why we started this collaboration. And this is now the very point in the presentation where I'm handing over to Kevin to say a few more words, because he is much more into Vorto. So, I am a senior software engineer at Bosch.IO and I'm the project lead for Eclipse Vorto. I'm now going to give you a short introduction to the project. Eclipse Vorto is an open source project for the semantic modeling of digital twins and their capabilities. The goal of the project is to deliver the tool set for everyone to create semantic models of digital twins, but also to integrate those models into existing or new IoT solutions. To achieve that, the Vorto project consists of four different components. The first one, the basis for everything, is Vortolang; this is the domain-specific language that describes the digital twin and its capabilities. Then we have the core of the project, which is the repository, where the models that you create in Vortolang can be created, managed and distributed. This is a web application that has a web editor and different APIs to interact with the models, so you can also use it as a point of reference to publish and share models. Then we have plugins that are integrated with the repository, but can also be used standalone, which can transform Vorto models into different other representations, for instance source code templates, request templates, or other semantic modeling representations. And the last component that we have is the telemetry payload mapping: this can map data that is sent by a device during runtime, according to the Vorto specification, into a normalized format. For instance, the Eclipse Ditto payload format could be the output format of the telemetry payload mapping engine.
And with that you can create normalized APIs in your IoT solution by mapping the data according to the Vorto model. So if we go to the next slide, I'm going to give you a quick overview of the elements of Vortolang. Vortolang has a top element, which is the information model; that is the element that describes the digital twin and its capabilities. Then we have the function block: one information model can have one or multiple function blocks, and the function blocks describe the individual capabilities that are implemented by the digital twin. The function block has different properties, and they are grouped according to their semantic meaning. So we have status properties, which are read-only, so sensors, for instance, contain only status properties in most cases, and we have configuration properties, which are read-write accessible. We have events that can be emitted by the device or digital twin. And finally we have operations, which are functions that you can call on your device. Those are the most important elements, and then we have sub-elements that can be useful while you're creating your models: we have data types that you can use to describe complex data types or enumerations; for instance, for measurement units this can be quite useful. And then we have the mapping type, which is used to provide platform- or implementation-specific information, since you want your model to be as generic as possible to make it reusable. At some point, especially when you want to integrate your models into your solution, you will probably need platform- or implementation-specific information, so you can create mapping elements that can be applied to information models or function blocks to combine the generic information with the specific information. That is the concept of Vortolang, and with that I can hand back over to Géza. Thank you very much, Kevin, for giving us those insights, which I could not have given. So, we have now actually reached the apex of this presentation, and I'm going to try to give you a quick glimpse into a prototype which is working, and which you can also try live if you're interested. So we do have an integration prototype. Both ends of this prototype architecture are within the realm of Eclipse, so we can safely say it's Eclipse-based, because the underlying, long-standing Eclipse modeling project VIATRA, a model transformation engine, is the basis, and our company has quite some expertise there. And although it might seem a bit curious, it's actually a very good idea to combine this Eclipse core modeling technology with, I would say, the most popular systems modeling and integration tool, called MagicDraw, which is of course a commercial product, but it has been around for some decades now and it's pretty strong. So what we actually created, and this is why this arrow is highlighted, is a way of bringing it back into the Eclipse IoT world using Vorto; that is what our prototype mainly consists of right now. Actually, it would rather be the other way around, because I think that would be more intuitive, sorry, but that is how it is for now.
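To picture the Vortolang structure Kevin just described (information model, function blocks, status and configuration properties, events, operations), the following small Python sketch mirrors those elements as plain data classes; it is only an illustration of the containment hierarchy, not Vorto syntax or tooling.

```python
# Illustrative mirror of the Vortolang element hierarchy as Python dataclasses;
# this is not Vorto syntax, just a way to visualize the containment structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Property:
    name: str
    type: str
    read_only: bool = True      # status properties are read-only, configuration ones are not

@dataclass
class FunctionBlock:
    name: str
    status: List[Property] = field(default_factory=list)
    configuration: List[Property] = field(default_factory=list)
    events: List[str] = field(default_factory=list)
    operations: List[str] = field(default_factory=list)

@dataclass
class InformationModel:
    name: str
    function_blocks: List[FunctionBlock] = field(default_factory=list)

# A distance sensor in the spirit of the toy scenario from earlier in the talk.
distance_fb = FunctionBlock(
    name="Distance",
    status=[Property("distance", "float"), Property("sensor_units", "string")],
    configuration=[Property("sampling_rate", "int", read_only=False)],
    events=["objectDetected"],
    operations=["resetSensor"],
)
model = InformationModel(name="DistanceSensor", function_blocks=[distance_fb])
print(model)
```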
So the point here is that we are able to, and let me now show you something else, which is by the way a MagicDraw installation. Are you seeing my screen, and it is now not the slides, right? Perfect. Okay, so this is MagicDraw, and this is where we actually model using our system-of-systems profile. I have already prefabricated some Vorto information models here for you. Based on the Vorto meta-model, we could actually replicate that kind of device modeling paradigm which is represented by Vorto. And then what we have added as an extra, and I think pretty cool, integration feature is that now, using our plugin, you can go here, among the import options, and you find a Vorto repository entry; it then really browses the Vorto repository live. For example, if you remember that toy scenario which I showed in the beginning, it was about measuring a distance. Of course, I knew where to go, but even if you didn't, you could use the search button and browse around here, looking for what you need. So the distance sensor is imported here pretty quickly, and it appears in this modeling tree; you then find a representation of the distance sensor information model converted into SysML directly from the Vorto repository, and you can just start using it in your Arrowhead-based system-of-systems models here in MagicDraw. So that's the essence of it. With that, I will just switch back here quickly and show you one last slide on where we are going with this, because of course we are far from done, even though I think it's already pretty cool what we could demonstrate and what we already achieved. There are many possible things to do which we can think of. The summary here is that we have integrated system modeling with device modeling, a very convenient way to combine these two different worlds for the sake of Arrowhead. A next potential goal, now that Arrowhead has recently become Eclipse Arrowhead, is of course a nice Eclipse-based integration, bringing it all back to a common basis; that is an immediate goal for future work. But also, on the tooling side, one could extend it in both directions. For system modeling, even if MagicDraw is the most popular tool, there are alternatives like Papyrus, so of course we are also looking at Papyrus as a candidate for model authoring and model transformation. Interestingly, the same holds on the repository side: MagicDraw also has a kind of model repository solution built in, so this extension work can also go in that direction, looking at what MagicDraw's native repository solution can give us in practice. And also, maybe most importantly, this communication is now only in the import direction, if you look at it from the MagicDraw perspective. Only if you make it bidirectional can you say that you have created a real, tight, bilateral integration between these two modeling worlds, but I think this can become a real benefit, not only for Arrowhead Tools, but also for the whole world of well-founded, dynamic IoT modeling. So thank you very much for your attention.
And I'm here for questions if we still have some time for that, or otherwise in the breakout session if that's not the case. Thank you. I think we are running a bit late, so if someone from the audience has a question, I'll keep it for the breakout after the session is finished, and we can take a quick break of five minutes and start again at :51, so we keep our schedule. Alright, we'll continue then with the questions from the audience later. Okay, let's reconvene in five minutes.
The design and development of interconnected industrial production facilities, which integrate aspects of the Internet of Things (IoT) or, more specifically, the industrial IoT (IIoT), often deal with complex scenarios involving dynamic Systems of Systems (SoS), resulting in large development and deployment efforts. The Arrowhead community aims at delivering mechanisms and technologies to cope with such complex scenarios, in particular the concept of local clouds, providing a service-oriented architecture (SOA) framework for IIoT. Here, a central challenge is the conceptual modeling of such use cases. SysML is widely established as a standardized modeling language and framework for large-scale systems engineering and, thus, for Arrowhead local cloud designs. However, SysML and its Arrowhead profile lack a canonical way to support actual platform modeling and device involvement in heavily distributed IIoT scenarios. The Eclipse Vorto project is ideal for filling this gap: it provides a modeling language for IoT devices and comes with a set of modeling tools and already existing reusable templates of device models. In this paper, we propose an approach to integrating Eclipse Vorto models into Arrowhead SysML models. We illustrate the concept with a realistic, yet comprehensible industrial scenario and also present a prototype to emphasize the benefits of our novel integration platform.
10.5446/51308 (DOI)
It is time to start with the second keynote of today's conference. I have the pleasure to introduce to you Mr. Paul-Emmanuel Brun from Airbus CyberSecurity France. He is going to give a talk on securing low-power device communication in critical infrastructure management. Paul-Emmanuel is passionate about cybersecurity, IoT and identity management. He is an expert in IoT system security and he leads the innovation activities within Airbus CyberSecurity France. As a former security and identity management engineer, Paul-Emmanuel was involved in several European initiatives and contributed to several projects for the French Ministry of Defence, from secure architecture definition to the integration of cybersecurity solutions. After filing several patents linked to the cybersecurity of IoT and OT systems, he is now focused on innovation and IoT system security. Within Airbus, Paul-Emmanuel is currently directly involved in the EU H2020 BRAIN-IoT project, where he leads the activities related to security, and he is also part of the European security and privacy cluster. Thank you, Enrico, for this very nice introduction. We will have a quite short talk about securing low-power device communication in critical infrastructure. I will just start by giving all of you the view, at least from Airbus, regarding the threats to IoT systems. Just to start quickly: how do we see the threats in IoT today? We see three usages of IoT to perform cyber attacks. The first one is to use the IoT as a target itself. For that, I will take the example of the Dallas emergency siren cyber attack in 2017, when a cyber attack was performed on the Dallas emergency sirens and set them off early in the morning. This doesn't seem very critical, but the fact is that this overloaded the emergency call lines and created a kind of panic in the city. There are a lot of other examples of the IoT being the target of cyber attacks, such as, for example, in the car industry with the Tesla and the Jeep that were breached, but this is, I think, a good example. The second type of attack is using the IoT as a vector. Indeed, most of the time hackers are looking for the weakest point in the system they want to attack, and what we see is that, most of the time, the IoT is one of the weakest points. In 2017, there was a quite interesting attack on a casino where hackers used a simple temperature sensor that was connected to some cloud application in order to go through this temperature sensor, then through the Wi-Fi of the system, and then to the database of the casino. It is very interesting to see that here they used a quite simple device in order to try to retrieve all the data from the casino. So here we can really see that the IoT is used as a vector in order to target a much bigger system than the IoT itself. And the last one is that the IoT is also often used as a weapon. Typically, the Mirai botnet, which took down a major DNS provider a few years ago, is a very good example of such a cyber attack. Why can it be used as a weapon? Simply because there is so much IoT out there that is quite similar from one device to the other, that it is quite easy, once you have found a security breach, to take control over lots of devices, and therefore you can use them as a real weapon. So now, if we go to the critical infrastructure and industrial IoT considerations regarding cybersecurity, we can see that there are several kinds of impacts of a cyber attack on IoT systems. The first one is the business impact.
Indeed, if a hacker succeeds in breaking an IoT system in an industrial and critical infrastructure context, this could have a huge business impact in terms of production and service downtime, and in terms of the quality of the production or of the service that is delivered. Of course, it could also cause reputational damage, because when a company related to critical infrastructure is known to have suffered a cyber attack, the consequences for its reputation can be quite terrible, and therefore it could also mean a loss of opportunities and business for this company. The second point is physical damage. In critical infrastructure, IoT is often used in the context of what we call cyber-physical systems. It means that attacking an IoT device could cause equipment damage and, in some contexts, it could lead to human safety issues. The production line is a good example: if you attack some robots within the production line, then you could have human safety issues for the operators that are working, for example, in the factory. So this is also a key point in critical infrastructure. And the last one is related to damage to intangible assets. Critical infrastructure can also mean that you have a high level of intellectual property to protect, and this intellectual property could, for example, be linked to the production means you have, which are often related to the devices and the IoT you have in the company. Just as an example: when you produce food, temperature is a key value. Why? Because it is a secret you have, part of your intellectual property, in order to make the food as you want it to be at the end. So if a competitor succeeds in stealing the temperatures of your processes, then it could be a big loss for you in terms of intellectual property and uniqueness in your own market. The example I gave is quite simple, but we can also imagine the same in production environments that are more industrial or more critical. And of course, we all know that there can be threats related to privacy issues and private data leakage. For example, and I will talk a little bit later about the water management use case, in all the energy management use cases you have to deal with the management of some kind of personal data that is privacy sensitive. So this is also a real risk that we have to take into account in critical infrastructure, especially in the energy domain. Now let's have a short overview of a, let's say, quite standard IoT system architecture. I have divided the architecture here into five blocks. The first one, on the left, is the device itself. The second one is what I would call the local network, the network which is close to the device. The third one is the network provider; this could typically be the internet infrastructure. Then you have the IoT platform itself, or the IoT platforms, that will collect all the data. And finally, you have the applications and services that will deliver the service for which the IoT and the devices are deployed. Along all of this value chain, we can identify threats. There can be hardware and physical threats on the device itself. There can be threats on the network, by spoofing untrusted data, especially when you are using non-secured protocols. But there are also threats on, let's say, the third parties that are providing services for the IoT systems, such as the network provider.
There was an interesting example of the hacking of a telecom provider that led to a leak of a lot of personal and professional data through the 4G infrastructure. We can also identify threats on the IoT platforms, quite similar to the ones on the network provider. And finally, this could lead to fake data being sent to the application, and therefore to an application that will not be able to deliver the service for which it is designed. Just in order to show you an example of this kind of threat, I did a little search on Shodan. Shodan is a search engine that was made quite famous because it was the one used to reveal that there were a lot of IP cameras fully open on the internet. I did a small search regarding an IoT protocol which is quite widely used, MQTT, a quite well-known message queuing protocol. Doing this search, I found that there were 74,000 fully open back-ends on the internet. And by going through some of them, I saw that there were lots of back-ends related to IoT data, for example GPS positions, temperature, humidity, and some kinds of actuators, and there was no protection at all for accessing the data, but there was also no encryption of those data on the server. So it is really clear that the threats exist and that we have to take this into account when we are designing our systems. Just to conclude on these specificities of IoT: on top of those threats, we can also highlight some complexities that are really related to the use of IoT. The first one is the fact that there are a lot of heterogeneous protocols, with very different requirements, that make it very hard to ensure real end-to-end security as we can have today, for example, when we browse HTTPS websites. The second point is that in critical infrastructure we are using protocols and hardware that come from the mass market and that are not, let's say, built specifically for critical infrastructure systems. This is necessary, because if we want to build IoT services we need to reduce the cost, since we want to deploy lots of devices and cost is one of the drivers in building those IoT systems. But on the other hand, this has a huge impact on the overall security of the system, because you introduce some weaknesses into your IoT and OT systems. The third challenge that we have to face is that we see very different use cases, even within what we call critical infrastructure. We see Industry 4.0, connected transportation, use cases related to smart cities and to energy, and all of them have very specific requirements that are not the same. For example, in energy, if you want to deploy a gas meter you will have energy constraints on it, but if you want to deploy an electricity meter, then you will not have this low-power requirement. So this leads to a lot of heterogeneity constraints that are very complex to take into account when you have the system view of the full IoT architecture. Now a short word about what we call end-to-end security. When we are talking about end-to-end security, the idea is that you are able to ensure the security, the authentication and the privacy of the data from the device up to the final application. We can, let's say, compare this to a kind of VPN that ensures the security of the data through the whole value chain.
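In contrast with the fully open MQTT back-ends found via Shodan, even a minimal deployment can require transport encryption and credentials on the broker. The sketch below uses the paho-mqtt client (1.x-style API); the broker address, credentials and topic are placeholders, not a real deployment.

```python
# Minimal example of *not* leaving an MQTT backend open: TLS plus credentials,
# using the paho-mqtt 1.x client API. Broker address, user, password and topic
# are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="secure-sensor-1")
client.username_pw_set("sensor-user", "sensor-password")         # broker-side authentication
client.tls_set(ca_certs="/etc/ssl/certs/ca-certificates.crt")    # encrypt the transport

client.connect("broker.example.com", 8883)                        # 8883 = MQTT over TLS
client.loop_start()
client.publish("plant/line1/temperature", json.dumps({"celsius": 21.5}), qos=1)
client.loop_stop()
client.disconnect()
```

Of course, authentication and TLS on the broker only protect individual hops, which is why the talk then argues for end-to-end protection on top of them.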
So typically, if you implement trustable end-to-end security, you can rely on third parties for your networks, for your platforms and for your systems; this is really the added value of the end-to-end security approach, and you will therefore be protected against all the threats on the third-party components. If we have a short look at the state of the art of this end-to-end security, with a specific focus on low power, we identified four main protocols. The first one is TLS. TLS is quite well known; it's the protocol that is used, for example, when you go to the website of your bank. But the issue with this security layer is that it is quite heavy in terms of consumption, and it is also not well adapted to situations where you are using different protocols at different points of your system. If I go to EDHOC plus TLS, it is, let's say, an enhanced use of TLS, allowing more constrained hardware to perform the cryptographic operations. This is a very interesting approach, but what we can see is that it does not cover the topic of end-to-end security when you have heterogeneous protocols in the data flow. If I talk a little bit about SCHC plus TLS: SCHC is a compression mechanism that provides the capability to use IP over many kinds of protocols, such as, for example, LoRaWAN. This one is very interesting for ensuring end-to-end security, but it does not support the case where your data flow really uses different protocols. For example, if you are using LoRaWAN, then the MQTT protocol, then HTTP, it won't be able to secure the whole chain without what we call hop-by-hop security, that is, decrypting and re-encrypting the data at each third-party component. And finally, there is the OSCORE protocol, which is also very interesting, but quite limited to CoAP, and therefore it cannot survive a protocol change along the path, so it has quite the same issue as the previous one. So within the BRAIN-IoT project, what we have seen is that there was no real solution to ensure true end-to-end security, especially targeting critical infrastructure, where we want to have high trust in the data even when they are going through third parties that are providing services, such as a network provider or even some platform aggregation provider. So we have worked on developing such an end-to-end security layer, and we have done that in the water management use case. Why water management? Because it is a very critical use case where you have to trust the data you receive, and therefore this kind of security is quite well suited, let's say, to this kind of use case. I give here an example where we have two IoT networks and where we are sending data to the BRAIN-IoT fabric. The idea is that we are able to encrypt the data at the applicative layer when they are sent, and we are also able to check this encryption and the authenticity of the data when we receive them in the system. Through the security management, we are also able to distribute the keys, therefore allowing compliance with regulations that ask for regular key renewal and that kind of requirement. This shows that we are able to protect against any kind of attack on the third party: it means that if the network provider's back-end is hacked, all the data will still be encrypted and authenticated, so if anyone tries to spoof or steal data in this back-end, this will have no effect on the final system.
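As a conceptual illustration of application-layer end-to-end protection (not the BRAIN-IoT implementation itself), the payload can be encrypted and authenticated on the device and only verified at the final application, so that intermediate networks and back-ends merely relay opaque bytes. The sketch below assumes a pre-shared AES-GCM key provisioned out of band.

```python
# Conceptual end-to-end payload protection with authenticated encryption (AES-GCM),
# independent of the transport protocols in between; key distribution is assumed
# to happen out of band (e.g. via a security management service).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)    # in practice provisioned to device + application
aead = AESGCM(key)

# --- on the constrained device ---
nonce = os.urandom(12)                        # must never repeat for the same key
reading = b'{"flow_l_per_min": 12.4}'
ciphertext = aead.encrypt(nonce, reading, b"water-meter-42")   # AAD binds the device identity

# --- anything in between (LoRaWAN backend, MQTT broker, HTTP gateway) only relays bytes ---

# --- at the final application ---
plaintext = AESGCM(key).decrypt(nonce, ciphertext, b"water-meter-42")
assert plaintext == reading                   # tampering or spoofing would raise InvalidTag
print(plaintext)
```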
This layer has a very low impact on the energy consumption, because the impact is about 0.25% of the whole device lifetime, and the impact on the bandwidth is also very low, because we are talking about 5 bytes of security overhead. So, just to summarize very quickly what has been done: let's say this is a quite legacy infrastructure, where you have the BRAIN-IoT node, which could be compared to the final application, and between the two you have your network infrastructure, either LoRaWAN or any other kind of network. Those networks offer a first layer of security between the device and the network itself, and then they offer a security layer in order to connect to their back-end. But inside their back-end the data are either not authenticated or, even worse, not encrypted. The security layer we have developed makes it possible to ensure this end-to-end security, and this fully transparently, because the keys and the key management are also exchanged through this network, and therefore you don't need any other kind of connection or any other kind of requirement. To conclude, this was a short presentation related to what I would call cyber protection for low-power devices. But we need to understand that cybersecurity relies on two main pillars. The first one is the cyber protection of the system, but the second one is cyber detection. And here, in IoT systems, we have new challenges, especially with respect to, let's say, legacy cyber monitoring systems, because legacy cyber monitoring systems rely on infrastructures that don't have restrictions on the bandwidth, don't have restrictions on the power, and that rely on a quite well-known set of assets. Here in the IoT we are talking about environments where we can see devices coming into the system and leaving the system regularly, and we are talking about network protocols that have very different bandwidths. So these are really new challenges that we have to face in order to monitor those kinds of systems. And the last point is that it is a very decentralized architecture. That is why we think that artificial intelligence is a key technology to enable reliable cyber monitoring in those IoT contexts; this is really the key point if we want to offer new technologies that allow the cyber monitoring of those kinds of decentralized, complex IoT systems. Thank you. I will keep around 10 minutes for questions, so I will be happy to answer if there are any. Thank you. Thank you very much, Paul-Emmanuel, for your very interesting presentation. If there are any questions from the audience, please raise your hand or write your question in the chat. Maybe I can start with a question from my side. It is a more general one with respect to the topics that you addressed in your presentation. In this conference, we will talk also about infrastructures that can reconfigure themselves autonomously to adapt to the surrounding environment, and these kinds of infrastructures could be very important for managing critical infrastructures easily. The question is: what do you think are the major challenges in terms of security, and also privacy, for modern applications where artificial intelligence takes an important role, especially in autonomous systems? That's a very good question. From the security perspective, regarding the two pillars I mentioned in the conclusion, I see several challenges.
The first one is how I can trust any new asset that is coming into the system. This is really not as easy as it seems, because, if I take the industry example, it could depend on the provider you are using to bring in new assets, and it could depend on the system itself: how can you ensure that a new component you start or bring into your system is secure? So this is really a key challenge. Then, perhaps a threat really specific to the IoT is that when you bring a new asset or a new system into the system, you don't know its environment. Can we trust the environment where it is deployed? This is also a key point where I think research has its part to play in order to bring solutions. Also, the environment could be controlled quite easily from outside, by triggering some external events, right? So someone with malicious intentions could operate not on the infrastructure itself, but on the environment, in order to take control. Exactly, exactly. That is the point with IoT and cyber-physical systems: it is not only the system itself, the environment matters as well for cybersecurity. Thank you. We have another question from Marco Jahn: how do you see the role of open source in such security work? Is your implementation going open source? That's a very good question. I think that open source is key, because you get feedback from the community, and in this context it is very important, since we are talking about cybersecurity solutions. Regarding the open source release of this activity, it is indeed a path that we are investigating, and I think it could really be valuable for us and for the community. Thank you. Marco, did that answer your question? Okay. If there are no other questions... are there other questions for Paul-Emmanuel? If not, I can say again thank you very much to Paul-Emmanuel; thank you for your presentation and for joining us for our conference. Thank you very much, Enrico, for this opportunity to present this activity. Thank you.
In this keynote, we will present an overview of a secure IoT data transmission ecosystem. This overview will be completed with a concrete example of a water management use case from the Brain-IoT project. In this use case, to ensure high trust in the device’s data, we integrated an end-to-end security layer that is compatible with battery-less devices with high constraints in terms of energy, computation power and bandwidth.
10.5446/51309 (DOI)
Good morning, I am Francesca Marcello and today I will present a research work focused on smart building energy and comfort management, based on sensor data, activity recognition and prediction. After a brief introduction on smart buildings and some of the most important points that characterize them, there will be an explanation of the problem under investigation and the presentation of the system proposed as a solution. In particular, there are three main elements of this system: a module for recognizing the activities that users perform inside the rooms, a second module for predicting these activities once they are known to the system, and a last one for scheduling the appliances in the rooms. Then there will be the presentation of the simulations that were carried out and of the results obtained, and finally the conclusions and the future work planned. Smart buildings are characterized by the presence of sensors, actuators and smart devices that offer the opportunity to realize energy and comfort management, providing support to the users of various intelligent environments. In particular, these systems are designed to monitor and control the equipment in the rooms and to understand user behavior, with the aim of providing users with tools that support cost-effective solutions in appliance management. To develop this type of system, it is necessary to monitor user habits, learn their preferences and predict the sequences of activities performed and the appliances used during the day. Most of the literature considers user comfort as a set of constraints on appliance usage, set a priori based on general statistics. Moreover, most studies that focus on activity recognition and prediction do not consider a complete system for smart building solutions. Our idea is instead to realize a system that continuously monitors user preferences and appliance usage habits. In this way, the system can manage the appliances with cost savings based on user preferences and energy consumption. In this slide, an overview of the proposed system is presented and we can see the various modules that are involved in the process. First of all, sensors are used to make observations about the users and their interaction with the surrounding environment. The activity recognition module combines the observed events with meaningful information about the activities performed by the users: it recognizes the activities thanks to the correlation between sensor events and activities. The profiling module then learns all the habits from the recognized activities. The information from the profiling module is used by the activity prediction module to predict the activities that the users are expected to perform. Finally, the appliance scheduling module uses information from all the other modules to find a schedule for the controllable appliances that can guarantee the best trade-off between energy cost reduction and user comfort. In more detail, the recognition approach is composed of two phases, a training phase and a testing phase. During the training phase, the system learns the associations and sequences of sensor events.
For example, as we can see in the slide, suppose we have a sensor space consisting of only three sensors and that we consider two different activities, activity 1 and activity 2. Every time activity 1 is performed we obtain another instance of that activity, which depends on the sequence of sensor events observed during the activity. We obtain a vector for each instance, where each element counts how often that specific sensor event is observed with respect to the total number of events observed within the instance. Finally, we define the model vector for each activity, so that each element corresponding to a specific sensor event is the average over all the observed instances associated with that same activity. Then, during the testing phase, the observed events are divided into subsequences of fixed length equal to W, starting at time T. Each subsequence is converted into a vector, compared with the model vectors obtained during the training phase, and is then classified based on the probability of belonging to an activity. The main task of the prediction module is to provide a possible scenario at a time T ahead in the future. The module evaluates the probabilities of all the activities to be performed in the future thanks to the statistical information about all the activities that have been recognized before time T. The probability of each activity is then translated into the probability of use of one of the appliances, and this output is then used by the appliance scheduling module to build a coherent schedule of the appliances and to evaluate the energy consumption. The next module is the appliance scheduling module: this algorithm shifts the starting times of the controllable appliances so that they run when it is more convenient, for example in off-peak hours, and it evaluates the schedule in order to find the best trade-off between the overall energy cost and the annoyance experienced by the user because of this scheduling. In the table, the different activities that can be performed by the user, taken from a real-world dataset, are shown together with the corresponding appliances used during these activities. Only three of the appliances present in the house are controllable by the scheduler, among them the washing machine. Different activities are linked to these appliances, and the scheduling algorithm performs the scheduling only when one or more of the activities have a probability of being performed that is above a certain threshold. All the simulations use the same dataset, considering six weeks of training and one week of testing. The activity recognition algorithm has an accuracy of 82.3%, while the prediction algorithm has an accuracy of 67%. For the evaluation of the scheduling algorithm, three different scenarios are considered. The first is the classic situation in which the appliances are used by the resident and no scheduling is performed. The second is based on perfect knowledge of when the user wants to use certain appliances: this is the case in which the user instructs the system about when they want the appliances to start, so there is an agreement between the user and the system. The last one, instead, bases the scheduling evaluation on the probability of using certain appliances at time t, computed as I said before by the prediction module. We call this the probability-based scenario.
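To make the occurrence-vector approach described above a bit more concrete, here is a minimal sketch of how it could be implemented. The sensor names, the toy training data and the use of cosine similarity as a stand-in for the probability-based classification are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of the occurrence-vector idea: per-instance event frequencies,
# averaged into per-activity model vectors, then nearest-model classification.
from collections import Counter
import math

SENSORS = ["s1", "s2", "s3"]  # toy sensor space with only three sensors

def occurrence_vector(events):
    """Relative frequency of each sensor event within one activity instance."""
    counts = Counter(events)
    total = sum(counts.values()) or 1
    return [counts[s] / total for s in SENSORS]

def model_vectors(training_instances):
    """Average the occurrence vectors of all instances of the same activity."""
    models = {}
    for activity, instances in training_instances.items():
        vecs = [occurrence_vector(ev) for ev in instances]
        models[activity] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return models

def classify(window, models):
    """Assign a sliding window of sensor events to the closest activity model."""
    v = occurrence_vector(window)

    def cosine(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)) or 1
        return num / den

    return max(models, key=lambda act: cosine(v, models[act]))

training = {
    "activity_1": [["s1", "s1", "s2"], ["s1", "s2", "s2", "s1"]],
    "activity_2": [["s3", "s3", "s2"], ["s3", "s2"]],
}
models = model_vectors(training)
print(classify(["s1", "s2", "s1"], models))  # expected: activity_1
```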
In the table on the right, two different energy tariffs are listed, one of which is more convenient during off-peak hours. It is then possible to see that, with the two scheduling scenarios, the energy cost can be decreased by at least 49.2% compared to the case without scheduling. The graph in this slide shows the cost of energy use over one week, broken down by the three appliances we took into account, considering both the scheduling based on probability and the one with knowledge of the user's preferred times. It is evident that most of the savings come from one of the appliances thanks to the scheduling, while there is a slight increase in the cost of using the other two. This can be explained by the fact that the preferred starting times for these appliances had already been evaluated as the best compromise between energy consumption and user comfort, especially because in many cases they are used during non-peak periods. We also evaluated the annoyance caused to the user by the scheduling of the appliances. The annoyance values go from 1 to 5, where 1 indicates that there is no discomfort for the user in changing the time at which the appliances are switched on, while 5 indicates the highest level of annoyance for the user. We found that in most cases the annoyance level is always very low. There is only one exception, in the case where the scheduling is based on probability, where there is a higher annoyance level in the use of one appliance, due to the fact that it is linked to two different activities, one of which is the hand-washing activity, which is recognized and predicted with some problems because it is often confused with another activity that the user performs. Therefore, there are some errors in the prediction of this activity that lead to a less accurate schedule and to a higher annoyance level. In conclusion, the results obtained with the proposed system show that the scheduling can guarantee energy savings and reduce consumption costs in comparison with the classic use of energy and appliances. The prediction module allows a fairly accurate schedule even several hours ahead, based only on probability evaluation. Moreover, it is possible to guarantee that the annoyance is not too high, so that user comfort is respected. Future work will include testing the robustness of the system in different scenarios and improving the prediction algorithm. It will also be interesting to evaluate the presence of renewable energy sources and energy storage; finally, the system could be expanded by considering health-related information, finding a correlation between the users' habits and their psychophysical condition. Thank you for your attention.
Thanks to Building Energy and Comfort Management (BECM) systems, it is possible to monitor and control buildings with the aim of easing appliance management while at the same time ensuring efficient use of the appliances from the energetic point of view. To develop such systems, it is necessary to monitor users' habits, learning their preferences and predicting their sequences of performed activities and appliance usage during the day. To this aim, in this paper a system capable of controlling home appliances according to user preferences while trying to reduce energy consumption is proposed. The main objective of the system is to learn users' daily behaviour and to be able to predict their future activities based on statistical data about the activities they usually perform. The system can then execute a scheduling algorithm for the appliances based on the expected energy consumption and the user annoyance related to shifting the appliance starting time from the preferred one. Experimental results demonstrate that, thanks to the scheduling algorithm, energy cost can be reduced by 50.43% or 49.2% depending on the tariff, just by shifting the use of the appliances to non-peak hours. Scheduling based on probability evaluation of the preferred time of usage of the appliances still obtains evident energy savings even considering the errors on predicted activities.
10.5446/51310 (DOI)
Coming to our next presenter, Katarina Baltaza from the University of Porto. Katarina is a researcher at the Faculty of Engineering at the University of Porto, and her research interests are in decision support systems and prognostics and health management techniques, especially in the area of Industry 4.0. Her talk today is about a prescriptive system for reconfigurable manufacturing systems considering variable demand and production rates. So hello everyone, my name is Katarina Baltaza and today I'll be giving a presentation regarding our paper titled Prescriptive System for Reconfigurable Manufacturing Systems Considering Variable Demand and Production Rates. Firstly, I'll briefly introduce our topics, followed by a concise literature review, and then I'll proceed by explaining our implementation. Once finished, the system validation results will be discussed and, lastly, some conclusions. Markets nowadays increasingly require customized products with shorter life cycles. In response, manufacturing systems evolved from mass production to mass customization, and they achieve that through flexibility in their production lines. In order to achieve that, reconfigurable manufacturing systems arise to deal with uncertainty and individualized demand by combining the advantages of both dedicated manufacturing lines and flexible manufacturing systems: on one hand we have the versatility of flexible manufacturing systems, and on the other the high throughputs of dedicated manufacturing lines. Also, during the current industrial revolution there is an interest in prognostics and health management, as it allows improvements in reliability and the reduction of costs associated with maintenance actions. A good prognostics and health management tool should be able to detect and predict failures in a timely manner, which corresponds to the first two blocks, diagnosis and prognosis, and this information should also support decision making, which is the last block - and it is here where we insert our work. So our main goal is to develop a prescriptive system that recommends sequences of throughputs that will compensate production losses due to maintenance actions, while also taking into consideration the weekly production targets and the degradation of the equipment. Since we are in a reconfigurable manufacturing system, the compensation for production losses can be achieved not only by tuning the different throughputs but also by routing pieces to healthy assets. When we refer to degradation of equipment, we need to take into consideration that different throughputs will impact the equipment differently: if we increase the throughput, the degradation will be faster, and if we decrease it, the degradation will be slower. Prescriptive systems can be understood as systems that recommend one or more courses of action, and there are prescriptive systems to help in the ordering of spare parts, scheduling and life cycle optimization; regarding their implementation, evolutionary and swarm algorithms are the most common ones. Before going into our implementation, regarding the papers already published we would like to highlight two, which both optimize machine throughputs using genetic algorithms, which will also be our approach. However, neither of them takes into consideration the possibility of routing pieces in order to compensate for production losses.
So taking into consideration our goal, we decided to implement a simulation-based optimization module, and this simulation will allow us to evaluate the solutions, test different scenarios, and also simulate failure predictions. Our general approach is that the manufacturing system is running in the manufacturing environment simulation and, once a failure is detected, the optimization module is triggered and the generated solutions are evaluated through the simulation module, which mirrors the first block. The optimization will run until the termination criterion is met and, when it stops, the suggestions are presented to the operator. Now that I've given an overview of the proposed solution, I will discuss each module individually, starting with the simulation one. In our paper, we simulated systems like the one in the figure, where a piece processed in any machine of column one, for example, can go to any machine in column two. This means that all machines in each column have the same available types of operations and the pieces are processed sequentially: they enter at the input and get out at the output. To model these kinds of systems we used graphs, as they allow quick and easy changes in layouts, offer good readability and are easy to implement. Since we used graphs, a node corresponds to a machine and the edges to the connections between the machines. These edges are weighted - the lower the weight, the higher the priority - and this is how the machine knows where to send the piece. Also, each node has an associated object of the class Machine, which stores several pieces of information regarding the equipment. That information can be divided into three types of parameters: identifying parameters, such as age and machine ID (however not limited to these two); operation parameters, which are the types of operations available and the current throughput; and reliability-related parameters, such as mean time to repair and mean time between failures. This last type of parameter is very important because, since we did not propose a predictive module, we need to somehow simulate the failures in order to validate our system. So we consider that the probabilities of failure are known and failure detection is done based on time intervals. The mean time between failures associated with each machine decreases at each tick of the simulation and, once it falls below a certain threshold, a failure is detected. That threshold can be seen here as the time to failure: in the current tick, the mean time between failures fell below the threshold and a failure is detected. The failure detection can result in two different types of maintenance, emergency maintenance or scheduled maintenance, and this depends on whether or not there is a maintenance shift scheduled inside that time window. In the first case, since there is no maintenance shift associated, the failure will translate into an emergency maintenance, and this kind of maintenance always triggers the optimization module. However, in the second case, since we have a maintenance shift, the maintenance related to the failure F2 will start as soon as the maintenance shift starts, and in this case the optimization module is only triggered if the maintenance time is not enough to repair the equipment.
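As an illustration of the graph-based simulation just described, here is a minimal sketch of how the machine nodes, weighted routing edges and MTBF-based failure detection could be modelled. The class and parameter names, the networkx dependency and the threshold value are assumptions made for this example, not the authors' actual implementation.

```python
# Minimal sketch of the graph-based simulation idea described above.
import networkx as nx

FAILURE_THRESHOLD = 8  # ticks; a failure is "detected" once MTBF falls below this

class Machine:
    def __init__(self, machine_id, operations, throughput, mtbf, mttr):
        self.machine_id = machine_id  # identifying parameter
        self.operations = operations  # available operation types
        self.throughput = throughput  # current pieces per tick
        self.mtbf = mtbf              # mean time between failures (ticks)
        self.mttr = mttr              # mean time to repair (ticks)

    def tick(self):
        """Advance one simulation tick and report whether a failure is detected."""
        self.mtbf -= 1
        return self.mtbf < FAILURE_THRESHOLD

def build_line(columns):
    """Build a layout where any machine in column i feeds any machine in column i+1."""
    g = nx.DiGraph()
    for col, machines in enumerate(columns):
        for m in machines:
            g.add_node(m.machine_id, machine=m, column=col)
    for col in range(len(columns) - 1):
        for src in columns[col]:
            for prio, dst in enumerate(columns[col + 1]):
                # lower weight = higher routing priority
                g.add_edge(src.machine_id, dst.machine_id, weight=prio)
    return g

def next_machine(g, current_id):
    """Route a piece to the healthy successor with the lowest edge weight."""
    candidates = [
        (g.edges[current_id, n]["weight"], n)
        for n in g.successors(current_id)
        if g.nodes[n]["machine"].mtbf >= FAILURE_THRESHOLD  # skip failing machines
    ]
    return min(candidates)[1] if candidates else None
```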
Now that I went through all the main points of the simulation, I'll start explaining the optimization module, which is the key to the prescriptive system, as it is what allows for the compensation of production losses; the simulation module exists in order to support the optimization one. As I said initially, we chose to use genetic algorithms. Each solution is an individual, which is a chromosome, and each gene represents the throughput of a certain machine on a certain day: if you want the throughput of machine one at day one, it is the first position. Here we have the fitness function, where the first term is the difference between the weekly production targets and the number of pieces produced - in essence, it evaluates how far the system production is from the targets, and the square ensures the non-negativity of the values. The next three terms are all related to maintenance actions: the first one is the scheduled maintenances, the second the emergency maintenances that I already explained previously, and the next one regards new maintenances in the following three days of the next week, in order to prevent causing new failures due to the decisions that we are making in the current week. Also, since changing throughputs all the time is not practical, we introduced the last two terms so that the algorithm prioritizes more homogeneous solutions: the first of these is the number of changes in throughputs, and the last one the standard deviation of throughput per machine. So, to wrap up, we have the manufacturing system running in the manufacturing environment simulation, and a failure that meets the requirements to trigger the optimization module occurs. Once it is called, it will generate solutions that will be evaluated through the simulation module until the termination criterion is met; once it is, the suggestions are presented to the operator. In our case, the termination criterion is the maximum number of generations, and in our results we always chose the suggestion with the best fitness value. To validate the system, first we did some initial tests to assess the GA behavior, and then we went to more complex tests, for which I will show the results in the next slides. After several tests, the parameters that showed the best performance were the ones presented in the slide. We chose to use elitism, which means that a small portion of the best individuals are carried over to the next generation without changes, to prevent the replacement of these good solutions during the next generation. The metrics that we used to evaluate our results were the differential, which is the variation of pieces in relation to the production targets, and the availability of the system, which is defined by the ratio of the total real operation time to the total theoretical operation time of all the machines in the system. We simulated one week, which we consider to be five working days. We tested several configurations, and each configuration under two different scenarios: one where the target is the same as the normal capacity of the system, and the other where the target is 20% higher, in order to understand if the system can comply if the market demand increases. When we say that we have a three-by-three configuration, we have three lines and three columns, so nine machines in total. Here are the results. Each test was executed three times to understand if it was just luck or not.
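Before looking at the results, here is a small sketch of a fitness function following the terms just described: the squared gap to the weekly target, the maintenance-related penalties, the number of throughput changes and the per-machine standard deviation. The weights and helper counters are assumptions made for the example, not the authors' actual coefficients.

```python
# Illustrative fitness sketch following the terms described in the talk.
import statistics

def fitness(schedule, target, produced, n_sched_maint, n_emerg_maint,
            n_new_maint_next_week, w_sched=1.0, w_emerg=5.0, w_next=2.0,
            w_changes=0.1, w_std=0.1):
    """schedule: dict machine_id -> list of daily throughputs for the week."""
    # 1) squared gap between weekly target and pieces produced (non-negative)
    gap_term = (target - produced) ** 2

    # 2-4) maintenance-related penalties (scheduled, emergency, next-week)
    maint_term = (w_sched * n_sched_maint
                  + w_emerg * n_emerg_maint
                  + w_next * n_new_maint_next_week)

    # 5) number of day-to-day throughput changes (prefer homogeneous plans)
    changes = sum(
        sum(1 for a, b in zip(days, days[1:]) if a != b)
        for days in schedule.values()
    )

    # 6) standard deviation of throughput per machine
    spread = sum(statistics.pstdev(days) for days in schedule.values())

    return gap_term + maint_term + w_changes * changes + w_std * spread
```

In a GA loop, lower fitness values would then be preferred when selecting the suggestion presented to the operator.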
The results are the averages of those three runs, together with the standard deviation of the differential. The first table refers to the results of the first scenario, where the target is the same as the normal capacity, and the second one to the scenario where it is 20% higher. In all cases the differential decreased, and in some cases the availability slightly increased - those are the ones marked in green. This example shows how the availability increased; it is one specific test run for the ten-by-ten configuration, where we produced an excess of five pieces. If we notice, the maintenances regarding the machines J5 and G7 disappear from the current week and the throughputs of those machines are in general lower than the baseline, so the algorithm decided to push these maintenances to the next week and, like this, the availability slightly increased. This always happens in the configurations with more machines: due to the higher redundancy of the system, the algorithm was able to achieve the targets and push the maintenance to the next week. To evaluate how the results vary from configuration to configuration, we plotted the averages of the differential. When comparing the averages, we wanted them to gravitate towards zero with low deviations, and the solution seems to follow this behavior. However, as you can see, there is a slight increase in the deviation from configuration three to configuration four; this increase didn't surpass the margin that we consider acceptable, which was one percent, so it doesn't seem enough to jeopardize the results regarding the tested configurations and can be attributed to the search strategy and convergence of the GA. Still, we should do more tests, as we just did three tests per configuration, and that's one of the points that I will discuss in future work. So we presented a prescriptive system with an optimization module based on genetic algorithms, allied with a graph-based simulation module, and the results were consistent among all tested configurations. Further research should be conducted to safely generalize the results, and it is also very important to decrease processing times; once we can decrease processing times, it will also be easier to check whether the approach is generalizable, although the indicators show that it is. Also, this prescriptive system was developed with the future integration of a predictive module in mind, and that would be one of the first steps after taking care of processing times, in order to offer a more complete framework. So thank you for your attention. Thank you very much for your presentation. Questions from the audience, please. Nobody? Then I would have one. How do you see your approach related to what I read about Industry 4.0 and smart factories, where very often it's about predictive maintenance with classic, standard machine learning algorithms - training and predicting? How is this comparable with your approach? Do you think one can use the other, or are they contradictory - what is the relation between them? I think they can be allied, and that is the point when I say that we want to integrate this optimization module with predictive ones. There are several works that use machine learning and deep learning to do the prediction of the failures, and then the question is what we do with those predictions. In order to generate recommendations, we decided to use genetic algorithms.
And now that we have kind of validated our results, we can integrate those approaches that you were talking about. Okay, thank you. Thanks again. Thank you.
The current market is dynamic and, consequently, industries need to be able to meet unpredictable market changes in order to remain competitive. To address the change in paradigm, from mass production to mass customization, manufacturing flexibility is key. Moreover, the current digitalization of the industry opens opportunities regarding real-time decision support systems, allowing companies to make strategic decisions and gain competitive advantage and business value. The main focus of this paper is to demonstrate a proof-of-concept Prescriptive System applied to Reconfigurable Manufacturing Systems. This system is capable of suggesting sequences of machine throughputs that best balance productivity and the impact of the proposed throughput on the degradation of the equipment. The proposed solution is mainly composed of two modules, namely the manufacturing environment simulation and the optimizer. The simulation module is modeled based on Directed Acyclic Graphs and the second one on Genetic Algorithms. The results were evaluated against two metrics: variation of pieces, referred to as differential, and availability of the system. Analysis of the results shows that productivity improves in all testing scenarios and, in some instances, availability slightly increases, showing promising indicators. However, further research should be conducted to be able to generalize the obtained results.
10.5446/51314 (DOI)
So now I'm really very pleased to introduce our first keynote speaker, Henrik Plate from SAP. Henrik is a senior researcher at SAP and his current research is focused on security in the software supply chain, especially the use of open source components. Henrik is leading the Eclipse Steady project, which supports detection, assessment and mitigation of vulnerable open source dependencies. Dear Henrik, the floor is yours. So yeah, good afternoon again. My name is Henrik Plate. I work for the security research team of SAP, and this talk will be about cobbles and potholes, which is admittedly a rather lame metaphor for the kind of problems that we face these days when it comes to securing our software supply chain. Before starting the presentation, I'd like to use the opportunity to sincerely thank the organizers for having me; I really feel honored by the opportunity to open this conference. All right. So this is the agenda, how we will run this session. I will start with a very brief introduction of myself and a few words about my employer and the security research team. Then we will spend most of the time on what I believe are the two main problems that we see in the space of open source supply chains: the one known for quite some time, especially since Heartbleed in 2014, which is the use of components with known vulnerabilities; and the second problem, which has theoretically been known for a long time as well, but which gained quite some attention in the last couple of months and years and which will gain much more attention in the future - that is my prediction. Two quick disclaimers. First, regarding the level of detail: as always when preparing presentations, I have been carried away by my passion and enthusiasm for the topic, so I put in a lot of content, examples and references. My excuse is that I wanted to produce a self-contained slide deck that is useful even after the presentation, so that people can refer to it and find the pointers, references and information. But of course it requires quite some discipline from me not to explain every little detail, and some discipline from the audience not to read everything on those slides but rather listen to me. In any case, you can always reach out to me after the presentation in case you have any questions regarding whatever I presented. The second disclaimer is regarding fear mongering. This is kind of a problem inherent to the whole security domain, right - it's always as if we find pleasure in making people afraid of security risks and vulnerabilities and so forth. I tried to stay away from that by quoting and presenting information from research papers and studies that have hopefully been produced by independent researchers; let's see whether this worked out - I leave that assessment to you. So, very quickly: I'm German, as you hear from my accent, 46 years old, and I have spent a good share of my life in France. I have developed software for many years already, and rather accidentally I became a member of the security research team at SAP Labs in southern France. That was actually a great accident, and I have been happily working in this domain for more than 10 years, where I did architecture and code reviews, developed secure developer training, acted as coordinator of European projects and so forth. Sadly, I contributed far too little to the open source community - so shame on me. I try my best to catch up, particularly with Eclipse Steady.
And I'm a cycling enthusiast, which explains this metaphor and the occasional appearance of some cycling-related pictures in my slides. Those pictures come from the infamous one-day race Paris-Roubaix, which is actually known for its cobbled roads. So, open source - how does this fit together? I would say not very much if you think of the software that we developed 20 years ago. Here is a screenshot of the infamous travel expense management solution; maybe many of you have seen this blueish, intuitive UI. That was really all 100% closed source: communication protocols, programming languages, IDE and so forth. The situation changed drastically, as for the entire software industry. By now we are a heavy consumer of open source, but we also try to give back, so we contribute to existing projects and we release and start new open source projects on our side. A particularly noteworthy example in the year 2020 is the Corona-Warn-App. This is basically tracking contacts between people using decentralized data storage; it has been developed with a couple of other companies in a rather short timeframe of 50 days and is now available on GitHub as well. For what concerns SAP Security Research, we consider ourselves an applied research team bridging academia on the one hand and SAP product development on the other, so we basically try to transfer and communicate new approaches, concepts and tools from academia to product development and see what is applicable and useful, and we try to communicate real-world security problems to academia and see whether there is a fit. We have been quite successful in terms of peer-reviewed publications. We have eight strategic research areas; I'm not going into the majority of those. One area is open source security analysis, led by myself, and you will hear a lot about our work and current trends in this domain later on in my presentation. The other topic of relevance is secure Internet of Things, which is led by a colleague named Laurent Gomez, and for that topic I spent one dedicated slide to exemplify the work we are doing in the area of secure IoT. The first interesting observation when talking with Laurent a couple of days ago was that he would actually not call it secure Internet of Things anymore, but distributed enterprise systems. This change in name reflects a trend where those things are no longer dumb devices that collect data and send it to some central backend; they become smarter and smarter, such that programming logic and data are moving from the central enterprise systems to those devices, which brings its own new challenges. This shift or trend is also exemplified by two of the areas of work of Laurent and his colleagues. In the past they were working more on secure end-to-end communication channels, where the problem was really getting data in a secure way to the backend. A good paper in this domain is the one I cited here, where he was basically developing a cryptographic scheme to protect the confidentiality of the data being transmitted end to end - end to end meaning the data is encrypted on the device, like sensor data, stays encrypted in the database, and decryption only happens shortly before any processing or display.
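As an illustration of that end-to-end pattern, here is a minimal sketch in which a sensor reading is encrypted on the device with a device-specific key, stored by the backend only as ciphertext, and decrypted just before display. This is an assumption-laden illustration only, not the cryptographic scheme of the cited paper, which additionally covers authentication and frequent key changes.

```python
# Minimal sketch of end-to-end protection of sensor readings.
import json
from cryptography.fernet import Fernet

# Assumption: each device holds its own symmetric key; the analytics backend
# stores only ciphertext and the dashboard holds the matching key.
device_key = Fernet.generate_key()
device_cipher = Fernet(device_key)

def on_device_encrypt(reading: dict) -> bytes:
    """Runs on the sensor device: serialize and encrypt a reading."""
    return device_cipher.encrypt(json.dumps(reading).encode())

def backend_store(ciphertext: bytes, database: list) -> None:
    """Runs on the backend: persist the reading without ever decrypting it."""
    database.append(ciphertext)

def dashboard_decrypt(ciphertext: bytes, key: bytes) -> dict:
    """Runs only at display time, where the key is available."""
    return json.loads(Fernet(key).decrypt(ciphertext).decode())

db: list = []
backend_store(on_device_encrypt({"water_pressure_bar": 3.2, "temp_c": 18.5}), db)
print(dashboard_decrypt(db[0], device_key))
```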
There were a couple of requirements, like device-specific keys to support authentication, frequent key changes, and a couple more. This solution is now used in the water distribution system of the city of Antibes in southern France, where sensor data like temperature, water pressure, water levels and so forth are sent to some analytics backend and then presented and analyzed in some dashboard. The second example, a paper I wanted to cite, relates to the problems appearing when you move application logic and data to those devices. Here he was recently working on protecting the intellectual property of machine learning models that are deployed on the device, and at the same time protecting the input and output data. What he did, basically, is use homomorphic encryption in order to protect the weights and the biases of the layers of the neural network - in order to protect the model - while working over the encrypted input data, which in the evaluation phase was encrypted video data. The use case of this technology is basically to use the data of video cameras deployed in Antibes, again, in order to find out whether there are any suspicious activities that could relate to terrorist attacks - for example, if there are lorries or trucks parked for a long time in front of some critical buildings and so forth. Right, now we come to the two main sections, the first one being dependencies with known vulnerabilities. I will start assuming that we all agree on the fact that open source consumption is steadily increasing, and so is the number of disclosed vulnerabilities in such components. One number in this context is that typically an application contains around about 100 dependencies, upstream projects, and a good percentage of those have known vulnerabilities or are, let's say, subject to vulnerabilities. What happened is that, I would say at the latest in 2014 with Heartbleed, and even more so in 2017 with Equifax, developers entered kind of a hamster wheel, which is an endless cycle of checking whether there are new vulnerabilities for the components that you depend on. For every finding, try to figure out whether it is a false positive; if not, assess whether the vulnerability really matters in the specific context of your application, because maybe it is part of some code that is not invocable, not reachable in the context of your application. And of course you need to keep your fingers crossed that your checks didn't have any false negatives. The next part of the hamster wheel is the mitigation, which can be very easy if your upstream projects respect SemVer, and which can be very difficult in case you have vulnerabilities in projects which are long dead and that you need to fix maybe in your own fork. And then you release a patch and you're happy - congratulations to all people running software in the cloud, that is easy, that is really an advantage of cloud computing; and sorry for all those that need to patch software that is running in devices. There was this Ripple20 vulnerability a couple of months back, and this really exemplified the scale of the problem in the IoT domain, where you had this vulnerable TCP stack that existed in hundreds of thousands of devices. Many of them are actually unknown, and for some you cannot even fix and release a patch to the device even if you find and identify it.
So in this context of the hamster wheel, of this endless cycle, we will talk in the next 10 minutes about the following topics: quality, timeliness and content of public vulnerability databases; problems for developers to assess such vulnerabilities; the shortened response windows, which are shortened in particular because exploits become available on a very quick basis; and, last, a few notes and comments on the possibility to auto-upgrade - to auto-self-heal, if you wish - vulnerable dependencies. As a foundation, what is important for the next few slides is to understand the CVE and NVD concepts; whenever you talk about known vulnerabilities, you hear those terms in the first 30 seconds, I assume. This is by now the largest publicly available database with information about software vulnerabilities, both in open source software and in commercial software and operating systems. Basically everybody can submit a CVE: whenever a security researcher learns about a vulnerability, he would request a CVE from MITRE, an organization in the US; they would reserve a number, then start the discussions with the vendor and the researcher and so forth, until it is eventually published in the NVD, which adds a severity rating and an enumeration of the affected products. This whole process can take from a few days up to several years, which already indicates one of the problems we will be talking about later on. By now there are 140,000-something CVE entries, as of yesterday. This is an example of a vulnerability reported for Eclipse Mojarra, one of the Java Enterprise components, if I'm not mistaken handed over to the Eclipse Foundation some years back, and this is really almost the full entry, right? You have a very short description, a severity rating saying this is a high-severity problem with a base score of 7.5, one reference to the fix commit and one to the issue, and you see the affected product is Eclipse Mojarra. This looks neat, but it is by far not sufficient to let developers, downstream users of Mojarra, decide whether this vulnerability really matters. There are many problems, and I don't have the time to go through all of them; I just want to point to two. First of all, since this is a manual process, there are errors - no way to avoid those. One thing is that they actually identified the wrong versions in the first place: in fact, versions 2.3.5 and 2.3.6 were also affected, and they corrected this after we reported the problem, which we actually detected using Eclipse Steady. The second big problem is that entire ecosystems are not covered, like npm: you will find very, very few vulnerabilities about Node.js or npm packages. And there are many more problems, mostly coming from the fact that you have humans involved in the process who give arbitrary names or labels to things that need to be mapped to other things - but I won't go into this here; this was just an example. There are very interesting empirical studies of the inconsistencies in the NVD. The researchers referenced here basically compared almost 80,000 CVEs with 70,000 vulnerability reports produced over the last 20 years and tried to find inconsistencies in the names of the products and the versions of the products being referenced. They say there is a strict match if the CVE and the report mention the same product names and the same product versions.
A loose match is if they talk about the same product, but different ranges of versions. And if you talk about different version ranges, basically one is overclaiming and the other one is underclaiming the number of affected products, right - or product versions. When they did this, they figured out, just by comparing CVE and NVD, that only 70% of those really have a strict match, talking about the same affected product versions; 90% of those CVE and NVD entries talk about the same product, but about different versions. The consequence of that is that in 10% of the cases they cannot even agree on the products being affected. And this gets worse if you compare the NVD to other vulnerability information sources like the Exploit Database and so forth. An important problem is related to response windows and the availability of exploits. Here the main problem is that the time between a researcher reporting a problem and the description becoming available to the general public can span weeks and weeks. They analyzed that difference, finding that quite a number of CVE entries lag behind - one week behind the first official public report. The second problem, or why this matters so much, is because of the study regarding the availability of automated exploits. Here Palo Alto Networks looked at 11,000 exploits - readily downloadable exploits from the Exploit Database - and checked when those exploits were available compared to when the patches from the product vendors were available. They figured out that 80% of the exploits that people can find there were available before the CVEs were published, which is quite alarming, I must say. Equifax is a particular case: there were three days between patch availability and the actual data breach happening on March 10th, which is an interesting timeframe. I think the consequences here are twofold. One is the severe problem of the data quality and timeliness of public vulnerability databases. And the second is that, due to those small response windows, automation is really a must. There are two approaches to detect vulnerabilities. One is based on metadata, where you basically compare labels - labels given to software components and labels given to vulnerabilities - and you try to see whether there is a match; but that is fragile, because those are human-provided names. One example is OWASP Dependency-Check. It does surprisingly well considering this fuzzy mapping; it is very lightweight and maps against CVE and NVD. The second approach is code-based, where you ignore all metadata and assume that the real truth is in the code. Here we have a method that has been identified from the fix commit - this is the method fixed by the developers of, I think, some Apache project - and you find vulnerable code only by searching for this method and checking whether it is in the vulnerable or in the fixed state. This code-based approach outperforms metadata-based approaches in terms of precision and recall and allows for nice features such as impact assessments and update metrics. And then you can see the call graph from application methods to the vulnerable method I was showing before. One other important topic, I find - and this is a shameless plug for a session we will be giving at EclipseCon in one or two weeks - is that there is no high-quality code-level information publicly available.
What happened is that commercial providers stepped in and started to build proprietary databases about vulnerabilities in open source software. That mining of information is labor-intensive, and the consequence is that this information about open source components itself is not open. And because that data is not open, the open source community cannot really develop proper tooling to solve the security problem by themselves. They rely basically on the proprietary tool vendors to share this data, which they do, admittedly, to be fair, but they do this drop by drop; you don't have access to the whole data set, and as all the machine learning and AI people can tell you, that is what you would need in order to really work and progress in this area. Our approach to this is what we call, rather clumsily, Project KB, which is meant to overcome this and which is basically a tool and a database to support distributed, collaborative management of vulnerability information for open source components. The next topic relates to, you could say, getting out of this hamster wheel: automation is key, I think this is rather obvious. What you should really do is scan early, scan often, and automate it with the tools I was mentioning before - OWASP Dependency-Check, Eclipse Steady, and there is also npm audit; every ecosystem has its kind of tooling. On top of that you have the commercial vendors, of course. But they only go as far as the detection of vulnerable components; the fixing is really left mostly to the developers. There are some tools that create automated pull requests for issues that they find in Git repositories, and the study I am mentioning here showed very well that projects using these automated pull requests indeed patch more often and are more secure than the baseline. But still, a lot of vulnerable dependencies remain and a lot of pull requests are not merged, because the developers are afraid of breaking changes. The root cause of this problem is the wrong use of SemVer. Theoretically, SemVer is great: if it is properly used, you can rely on minor and patch versions not introducing any backward-incompatible or breaking changes. But there are several studies that show that SemVer is not properly used, and even minor and patch releases contain a whole bunch of backward-incompatible changes. SemVer adherence really has to improve before applications can be fixed automatically. The takeaways of this first part on known vulnerabilities: CVE and the NVD have problems with quality, timeliness and coverage, so you should really not use them as your only source of information - you or your tools will miss something and you will be late. This is not meant as blame on the NVD or CVE, because they do their best, but they are heavily underfunded; the blame goes to the lack of appropriate support and funding to build such a public, high-quality database. It was important that commercial vendors stepped in, but I strongly believe that the open source community should solve this problem by itself, and that requires a public database. And these are the two other takeaways of that part.
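To make the advice to scan early, often and automatically a bit more concrete, here is a minimal sketch of a check that could run in a build pipeline. It queries the public OSV.dev vulnerability API as an example data source - not one of the databases or tools discussed in the talk - and the endpoint and response shape are assumptions based on that API's public documentation.

```python
# Minimal sketch of automating a dependency vulnerability check in CI.
import sys
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the list of known advisories for one package version via OSV.dev."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

if __name__ == "__main__":
    # Example: fail the build if any listed dependency has a known advisory.
    dependencies = [("requests", "2.19.0"), ("urllib3", "1.24.1")]
    findings = {f"{n}=={v}": [vuln["id"] for vuln in known_vulns(n, v)]
                for n, v in dependencies}
    for dep, ids in findings.items():
        print(dep, "->", ids or "no known advisories")
    sys.exit(1 if any(findings.values()) else 0)
```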
Automated detection and fixing is really needed to address this shortened response window, and code-based approaches improve significantly over approaches that rely on metadata. That is concluding my first part. I'm running a little bit out of time; let's see if I make it. The second part is on supply chain attacks, and I would like to start with a nice quote from a security researcher saying that installing code from a package manager has the same level of security as piping curl output straight into bash. What I like about the quote is that it nicely illustrates the dilemma of many developers, including myself. If somebody tells me I should fetch whatever web page and execute it in my terminal, I would become suspicious, ask myself some questions, maybe have a look at the script. But in developer mode, when I want to get things done and develop stuff, there is much less hesitation to just run an npm install or pip install or whatever command. And why is that so dangerous? Because many packages and ecosystems come with pre- and post-installation scripts. If you install a package, there is some script being executed with your user on your computer, and not only for the package that you install, but for all the packages that this package depends on - all the upstream packages. So you potentially execute quite some stuff on your computer if you install a package. The former quote was from somebody who developed a proof-of-concept worm for the npm ecosystem that would replicate itself. Here is a real example from November 2018, which gained quite some attention because of the high number of downloads and the high number of packages that depended on this package, event-stream. What happened is that the alleged attacker wrote an email to the original developer and asked whether he would like to hand over the ownership, and the original developer, who had lost interest in the open source project, kind of agreed, which, you know, opens all possibilities to the attacker. This example is also noteworthy because the attack was relatively sophisticated compared to previous ones: the malicious payload was encrypted, it only triggered for certain downstream packages, and it evaded detection by only running in productive environments and avoiding execution in test or build environments. There is an increasing number of such attacks, and this is work we have done together with the University of Bonn. Here we looked at 174 malicious packages for which we could obtain the actual code. So we were looking at the malicious code, and we looked at different dimensions and problems: under which conditions would it trigger, how did the attacker inject the malicious package into somebody's dependency tree and so forth, as well as temporal aspects. Here there is a clear trend of an increase in numbers; in 2019 in particular there was a bigger campaign on RubyGems. On the right-hand side you see the number of days that malicious packages were available in the different ecosystems: on average, a malicious package was available for something like 209 days before it was discovered and yanked from the repository.
This is an attack tree, which is far too detailed to go through all the nodes and attack vectors; I just wanted to highlight two things. The most important attack vector is typosquatting. This is a technique from domain squatting applied to open source ecosystems: as an attacker, you would basically choose a name similar to a well-known name. My favorite example is a malicious package whose name differed by just one letter from numpy, the Python library used for machine learning. The second most important vector was the use of weak or compromised credentials: package maintainers had weak passwords that were stolen, and the attackers uploaded malicious packages to PyPI, npm and so forth. The event-stream case was a matter of social engineering. The two ingredients that make supply chain attacks possible are, let's say, the trust that users and developers have in packages, and at the same time the automation introduced by build systems such as Maven that take care of dependency resolution, download and installation in an automated fashion. Again, you install one package, and the likelihood of installing another 50 or 100 packages is not that small. I think I just have five minutes, so maybe I hurry up a little bit; actually, I should probably go to the conclusions of this section already in order not to go too much over time. There is a number of considerations related to trust, the implicit trust in the ecosystem, but I will really go to the conclusions here, and, as before, you're invited to go through the slides and contact me for all the details. The takeaway here is basically that many parties ask you to put trust in their security capabilities, and one of the examples I was skipping was showing how weak the passwords used by package maintainers can be, and how that puts entire ecosystems at risk. The reason for the increase in supply chain attacks is that the number of dependencies of projects increased so much over time, and so did the number of actors and the complexity of the build processes and the related infrastructure, which all resulted in a considerable attack surface. There is this noticeable increase in supply chain attacks, and in particular Python, Node.js and Ruby are the primary targets - I suspect the former two in particular because of the presence of these installation scripts and installation hooks, but maybe we just don't know: a few ecosystems, like Java and Maven Central, have to my knowledge not been analyzed in a systematic fashion. If you want to protect against malicious open source components, the two takeaways for me are, first, that all dependencies matter - not only the compile and runtime dependencies, but also the test dependencies and all the build plugins that you have, because all of that is executed when you compile, build and test your solution and could possibly modify the compiled code that ends up on a package repository. And second, if you are ever going to review open source projects because you are security-sensitive and want to know what you are using, it doesn't bring much to look at the source code repository. You should really look at what you download, which is sometimes ugly if it is a compiled language, but looking at the source code will not help you in detecting supply chain attacks.
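As a small illustration of the typosquatting vector highlighted above, here is a hypothetical check that flags package names lying within a small edit distance of popular package names. Real registries and scanners use far more sophisticated heuristics, and the list of popular packages here is just a made-up sample.

```python
# Toy typosquatting check: flag names suspiciously close to well-known packages.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

POPULAR = {"numpy", "requests", "urllib3", "lodash", "express"}  # sample only

def possible_typosquats(candidate: str, max_distance: int = 1):
    """Return popular names the candidate is suspiciously close to (but not equal)."""
    return [p for p in POPULAR
            if candidate.lower() != p
            and edit_distance(candidate.lower(), p) <= max_distance]

print(possible_typosquats("nunpy"))  # e.g. -> ['numpy']
```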
It may help you with known vulnerabilities, with accidental vulnerabilities, but not with supply chain attacks. Detecting those is an active field of research, which hopefully will yield some results in the near future that will then be integrated into the different toolings by the different stakeholders. A few closing remarks - and I'm really sorry that I had to rush a little bit; I hope my main message still got through. What is really missing in terms of supply chain attacks is a comprehensive and comparative study of the effectiveness of different safeguards in the different ecosystems, and then the subsequent gap analysis. We kind of have best practices and tips and tricks here and there, but we don't really have a good study of how much that actually solves the problem. I have a selective and opinionated list of technical safeguards; I didn't want to go into organizational safeguards such as training, awareness and so forth, as I'm more interested in the technical stuff. But before showing those, I wanted to mention a few things. One in particular, and this goes to all, especially the commercial users: I think they deserve more support - both the upstream projects used by commercial vendors as well as the infrastructure providers. A good example is PyPI, which is run by as few as 10 administrators for more than 450,000 package owners and more than 260,000 projects, and it is no surprise that these few people struggle to fix and run after security issues. Out of scope of this presentation are very interesting topics, maybe not to work on, but to follow from a distance. One is the number of government regulations and standards that will be imposed in the near future on software development organizations. An interesting topic, always, is the liability of commercial software vendors: for all the open source providers, we have our open source licenses solving the problem in terms of legal liability, but for commercial software vendors this is a dedicated topic. And then there is also the topic of moral responsibility, which became apparent in the event-stream example I was showing before. The developer had an open source license; he was fine from the legal side, denying liability for however the project is used. But there was a huge debate in the issue where they discussed the problem. Some people basically said: you should have taken care, you cannot just hand over the project to anybody, how could you do this? And there were other people, including the developer who handed over the project, saying: well, this is a spare-time activity, investing hours and days, and this cannot be demanded from him. I find this a very interesting discussion and open problem, if you want. Right, and this is a list of safeguards. The upper part is just standard stuff, well known, relatively cheap, mostly, that should be applied where possible. I find the lower ones more interesting - and you don't see the upper part because I didn't want you to read through all those. The lower ones are more interesting because they address one of the big problems, which is that the builds of open source projects happen on arbitrary systems. Very often it is some build system or even a developer machine where the binary package is produced that will be uploaded to PyPI or to Maven Central or wherever, and this is a huge problem.
And this is addressed by these first three mitigations, all from a different angle. The last topic I wanted to include because there are ongoing research works that suggest - let me put it differently - that a good portion of the open source components that you pull into your project is actually not needed by your specific application, and so you can just slice it away in order to reduce the attack surface. That is one technical countermeasure that I personally find very interesting and promising, and I hope there will be more research in this area soon. That is basically my presentation - and I was really rushing, as I feared in the beginning. Sorry again for the few examples that I skipped in the supply chain attacks part, but I hope you got my main messages and main points. Thank you so much. Are there any questions? Thank you, Henrik. No questions, I think - maybe during the breakout session. Thank you very much, because we are a bit over time now. And now I will give the stage to the moderators, Rosaria and Marco, to follow up with the next speech. Just to say that we took notes on all of the questions that appeared in the chat, so our speakers will be available at the end of this session for a breakout. So please join us and you can ask all the questions that you want. Thank you. Okay, very good.
Open source software is ubiquitous – all across the stack, in the cloud and on-premise, on all devices, in commercial and non-commercial offerings. This success, the dependency of the software industry on open source, combined with recent data breaches and attacks, puts security into the spotlight. This talk will provide an overview - for sure opinionated, hopefully controversial – about the state of affairs and current trends regarding the security of software supply chains, both from consumer and producer perspective.
10.5446/51315 (DOI)
Okay, hello everyone. I'm Sonia Betz from the University of Alberta. And Robin Hall from MacEwan University in Edmonton. We're going to do a super speedy intro to some student journal issues we've been mulling over recently, and I'm hoping that we can start a bit of a conversation with you all about some ideas we have for better supporting student journals. So I'm so glad, Suzanne, that your talk came right before ours. Robin's developed this fantastic caterpillar analogy for our talk, and I think it's pretty apt if you can imagine students as emerging academics doing research worthy of sharing while gaining experience in academic publishing. Alright, so at the University of Alberta we have several student journals, both undergraduate and graduate level. However, I think when we talk about success stories around student journals, we're almost always balancing that in the background against the many challenges and obstacles around sustainable student publications. For example, and I'm embarrassed to admit this, but of the 10 student journals we list on our website, only six are actually actively publishing or actively trying to publish. Four have ceased publication entirely due to reasons that I think we're all familiar with and that Suzanne just articulated a little bit in her talk. So what can we do to help support students more fully? Next slide. I loved the comments about an advisory committee; I think that's a fantastic idea. Here are a few ideas at the U of A that I'm pretty excited about. We've developed a partnership with our undergraduate research initiative, a group that provides great opportunities for undergrads to do research on campus. They're working on a new student journal called Spectrum that we'll be hosting, but we've also been working with them on educational events for students. So we're planning a year-long seminar series that will feature sessions throughout the year on things like peer review, author rights, licensing and copyright, recruiting reviewers and content for a journal, how to use OJS, all those great things. Another small step that we're taking is to develop a bit of a community of practice for our student journal editors. So earlier this year we held an introductory round table, inviting all of those student editors to meet each other. We facilitated a conversation among them and asked them about what challenges they'd faced, how they'd solved their problems and how we could help. It was a good opportunity for them to meet each other, and we got some really useful directions to pursue for the future. Thank you. So at MacEwan University we only have two undergraduate journals, but they've both been going for about five years, so I consider these successes so far - but we're still learning a lot as we go. It takes a village, a large village, to keep undergraduate journals going. I've learned it definitely helps to have faculty members as the journal managers and a faculty champion to keep them going, but also support from the library and the research office and IT, all of that stuff. Here's the important stuff that we really wanted to get to, though, and talk to you about. Academic publishing is robust; there are lots of options out there for established academics. Could we be doing more to help our little caterpillars and nurture undergraduate publishing? We think yes. There are lots of initiatives out there - there are definitely at least hundreds of undergraduate journals in existence, but we... Whoops.
Yes, we do think that we should be doing more. One thing that we've definitely noticed is that it's really, really hard to find undergraduate journals. There's no one-stop shop to locate these things, and then it's really hard to figure out if those journals accept works from students outside of their institution or if they're tied to a class or not. So what we'd like to propose, and what we're wondering from everyone here and beyond, is whether it might be viable to create a directory of open access student journals similar to the Directory of Open Access Journals, where people can go and actually find undergraduate works, but also find some documentation on best practices for starting these journals and keeping them going. So we're just hoping to start a conversation. We have given you our Twitter handles and our email addresses, and now you know our faces as well. So we hope that today and beyond that you'll come and speak to us if you would like to partner with us, if you think it's a good idea, if you think it's a bad idea. Yeah, that's kind of the seed of our idea here: to start supporting these more in that capacity. Thank you. Okay, at this point I'd like to invite all of the speakers on stage and invite questions from the audience about some innovative and interesting uses of OJS and OMP. There are a couple of mics out there. Don't be shy. Okay. Just curious about mediating new and young scholars as they start to produce scholarship for undergraduate journals. How did they break OJS and OMP? These are kids who have been doing things online and maybe differently. They don't have any prior attachment to Microsoft Word like we have to deal with for many of our more established scholars. How are they breaking it? What are they doing wrong that we should be listening to? So I don't think they're breaking anything. I think that they're just using it for purposes which they intend that maybe we don't intend them to use it for, but it has a lot to do with the metadata stuff that Mike talked about in his lightning talk yesterday. I think forcing things into fields that you wouldn't expect them to put things into. They also really want more flexibility around the content. I found that they really want to create pages that describe what the journal is all about, or they want to do things like, we have the Alberta Law Review, and every year they do a special issue on energy law, and they really want a way to highlight those special issues in exciting new ways, and they can't really do that with OJS. So I don't know. Does anybody else have other things they've seen? I can speak from my experience with the classes that we're working with. Students are really interested in including a significant number of images in their work, and so that just creates kind of a logistical challenge for us getting all those up and running. They want to do audio. We have audio and video, so students are really interested in doing audio and video posts, and it's sort of tricky right now in OJS to provide nice seamless streamed media content for a student that may not have the skills to develop nice XML files. So I'd like to see those developed a bit. Just one addition to that. In terms of processes for OJS, the key thing that students want is ease. Easy submission, just seamless, kind of one-step processes. So I really wanted to highlight that because it's really important. So I don't think any of the students that I worked with broke OJS. I think OJS may have broken them.
In some ways, I mean, it's a great tool, but it is difficult sometimes to maneuver. It's not intuitive for them, but I did attend the session discussing the 3.0 and I'm really excited about those new features and those new interfaces. I think that will really ease the use of OJS and participation in the scholarly journal process. So just some comments first of all, but I will get to a question, and again this is from the self-serving perspective of PKP. It's really exhilarating, exciting to just hear the whole range of rather strange, unusual, even possibly bizarre ways you're repurposing OJS and OMP to do all sorts of things. Some of it's really affirming, because when we talk about student journals, for example, we see lots of evidence out there that there's an incredible number of student journals happening. In fact, Kevin and Jana Avan, another iSchool student from UBC that's here, we collected a bunch of information on over 400-plus scholarly journals, OJS and OMP journals hosted at Canadian sites. Fully a third of them were undergraduate or graduate student journals. So it's kind of, you know, something's happening there, and I think your presentations gave us a lot of good examples of that. And I think, Sonya, where you were asking, is there a list out there, at least in the Canadian context we can give you a little starter of probably about 130 student-run journals if you want to go and mine them. It was equally interesting to hear from Israel and how you're using OJS in another way, and this is one that's been kicking around, because about every few months we see another one popping up where somebody has ended up using that sort of workflow around peer review almost essentially as an academic adjudication system. And not that we're looking for more work on, you know, on the PKP side, but it's fascinating to see those. And different again in terms of, you know, digital humanities projects and looking at, you know, at Queen's there whether OJS can be morphed into that. It's exhilarating. It's also, from our perspective, a little bit overwhelming, because potentially that represents a whole lot of kind of stretching and pulling and twisting, you know, OJS in a whole bunch of different directions, which requires resources. And I'm getting to my question now, but, you know, essentially OJS is really purpose-built in many ways to be, you know, an academic journal peer review workflow, and it's interesting to see how you're using it in different ways. But the question is: did you go and look at other sorts of alternatives for, you know, say academic adjudication or a digital humanities sort of support or scholarly, you know, product support, rather than using just OJS? And, you know, how did you end up on OJS for some of these rather different uses? So I'd be interested just if any of you can shed comments on that. Okay, so from the Queen's perspective, for this sort of digital humanities push we used OJS because, first of all, it was a ready platform that we already had. Secondly, the journal team, they, yeah, they're one of our founding journals and they want to push the envelope a little bit with their journal. And I guess, respecting the sense or the original goal of OJS being a journal publishing, peer review infrastructure, I guess one of the key messages that I think I took from meeting with this journal team was, and it fits with the questions that have come up so far today around deconstructing the traditional journal, that I think it's already happening.
I think it's a need that faculty want that we need to unpack. So even if our systems have originally been designed in a way that wants to be sort of, you know, your traditional PDF file that is a journal with an abstract and discussion, I think things are moving more quickly and I really think we need to take the challenge and adapt. PKP is at a nice point where it's institutionally led, it's not a commercial entity. I think that's something that we really should capitalize on in responding to these new needs that are already there. So I guess, yeah. Just to clarify what I'm doing here, I'm just a volunteer at Werx; it's the State University of Hickory-Dusso, but in my employee job we use OJS for our journals too, and this is the older history. In the beginning we needed to start somewhere: we had a journal from 1987 and we had to move from emails to something that helps us with the workflow. So it's the same thing that works, moving from email to something that's not expensive, because at Werx, in this specific case, we do not have IT staff to take care of OJS. The only person who does that is me, and even for the backup I need to talk to someone, "may I do the backup?", and I have to go to the building on my lunchtime and do everything. So OJS was the alternative choice, because we did not need to spend money or anything. All other solutions would demand either training, or money, or something else, so with this solution we did not touch a single line of code; I did not have to touch code or develop new code. One thing: the authors of the research did ask to change "author" to "coordinator"; that will be changed someday. Well, it's another problem I have to face, the code being developed outside, here in Canada; I can't touch the code of the OJS at Werx. So the solution taken was not expensive, easy to understand, and common knowledge for all the researchers: I'm just an author and a coordinator. That's it. Thank you all. I have a long list of to-dos and things to explore. Great. With regard to student use, or use of OJS as a pedagogical tool, I differentiate a bit between student journals where it is a formal journal process versus course use, and I'm wondering, what is your experience working with those journals regarding the student who is not successful? What about the sort of refusal rate, and also the students who decide afterward that maybe they are not so proud of their undergraduate work? How are you doing with repudiation and take-down requests? It's something we take very seriously. I think I didn't have time to talk about this, but we do encourage instructors to pitch this to 400-level students who are ready: they are writing a capstone paper, and it's an appropriate use of their final papers. I wouldn't encourage someone in a 100-level course to necessarily be putting student work online. That doesn't mean you can't, but it's one of those things where we want to make sure they are proud of their work, that they get the chance to do a peer review, they get the chance to go through the editing process and revising, and we want to be responsive to students who say, you know what, this just isn't for me. Obviously that's negotiated, with the instructor deciding whether or not students must publish their work online, but there's no real reason why they couldn't use OJS to go through the scholarly communications workflow, experience what it's like to do the peer review, and then just not post the final paper.
We've also worked with an instructor that only posted excerpts of the students' work, because she just felt that that was a stronger representation of their writing ability. So we're trying to negotiate that and be respectful of what the students want. Yeah. So on the internet there's one of my undergraduate student essays. I'm so embarrassed about it, but I don't know how to get someone to take it down. Don't search for it. We've hosted undergraduate journals for a very long time, and maybe Leah can confirm this, but I don't think we've ever had a take-down request from a student for an article that they've written. So I do think the problem is out there and exists, but I think that the, you know, the cases are so infrequent that we could probably deal with it on a case-by-case basis, but definitely I think that's a concern. We are working on a project right now with a high school to put up capstone papers for AP students, so that's grade 12 students working on a final big research project, and I've been thinking about this a lot too. So how do we manage those, you know, students who aren't even 18 yet when they're publishing that content, and what are their rights around owning it and taking it down when they want it taken down? I think that falls in with best practices, though, and it just occurred to me: we have nothing about take-downs on our websites for our student journals, and we probably should have a little statement there. "If you want this taken down, here's what you can do" sort of thing. We've never had that request either, but conceivably I could see that happening, so we should take that into account. Well, so Sonya and Robyn kind of invited this chat here with this idea of theirs at the end, so maybe I'll kind of get that conversation started; certainly we can continue it later over lunch or whatever. So you floated this idea of a directory of undergraduate open access journals, or open access undergraduate journals, or whatever. Student journals, right, sorry. So, you know, my first thought with that is that the Directory of Open Access Journals I think kind of serves two different purposes right now. So when you go to the website it looks like it's meant to be a portal to discover scholarship, which it sort of can be, but my sense is that not many people use it that way, right. They come to these journals through other discovery paths, which is a broader trend in, like, library resources as a whole, for example. So what it actually does is mostly serve as a white list of vetted journals and a sort of talking point for open access advocates to say there are however many, 9,000, you know, vetted journals, so see, there's a lot of open access out there. And so it actually kind of serves that purpose, and I think for a directory of student journals the need for a white list or for numbers is less clear. But if you really just want to share best practices, I mean, maybe what we just need is a kind of Google Docs file of best practices that we start sharing around and all contribute our ideas to, and try to sort of evolve and build up a shared resource in that way. So those are my initial thoughts on that.
Thank you for the feedback. It's just really hard to find these journals, and more and more I'm getting students and faculty coming to me saying, you know, this student or I produced this really great work and I want to publish it, where can I publish it? It's so hard to find journals for them. I don't know if they exist; they definitely do. And so having that one-stop shop, and having search facets that really are specific to students, would be really helpful in such a directory, because there are a few student journals in the DOAJ, but again it's really hard to search through those. So just to add to that, I think one of the problems that we do have with student journals is sustainability, and so if we were able to create, you know, really strong subject-specific student journals, where, you know, a film studies student who wants to publish their work would know that there's a good one to go to rather than deciding to create their own that lasts for a couple of years at their university until no one can sustain it, I think it's a possible way to kind of respond to those issues. You're talking about journals that are subject-specific but will publish student work from anywhere, and I assumed we were talking about journals that publish more or less any topic, but only from their discipline or their institution, so everything I said was from that perspective. When you have journals like this it's a different picture, and now I see where you might well need a directory. I just wanted to point out that we do have at least two DOAJ ambassadors in the crowd, I think, both Ina and Yvonne, so maybe you can just raise your hands. Yeah, and then if you want to follow up on how DOAJ is actually used, they're probably two good people to talk to. Not going for a question? Oh, fake out. Okay, so I think we'll wrap up here. I did want to take a moment to again thank all of our presenters for sharing their work with us this morning.
The rise of open access, locally-hosted undergraduate journals over the past few years has been significant, providing students with valuable opportunities to disseminate knowledge, and gain experience in academic publishing and peer review. These journals often tend to publish erratically, however, due in part to student-run editorial boards that experience frequent turnover given the nature of undergraduate commitments and time constraints. Drawing on recent initiatives at two Alberta universities, this session will highlight ways that library services in partnership with other campus partners can provide technical support, editorial guidance, and the infrastructure necessary to form communities of practice among and within student-led editorial boards to help create and maintain high-quality, sustainable open access student journals. Participants will also be asked to consider ways that the PKP community might come together to enhance support for such journals in line with publishing best practices and content discoverability. Directly after the presentation is a panel discussion with the following speakers of the Lightning Talks (Round 2): Alperin, Juan Pablo; Moore, Allison; Gillis, Roger; Coughlan, Rosarie; Silva, Israel José da; Ludbrook, Ann; Jay, Suzanne; Hall, Robyn; Betz, Sonya
10.5446/51316 (DOI)
Right here. Hey. So my name is Mike Nason. I'm a Scholarly Communications Librarian at the University of New Brunswick, and I'm also a publishing services associate with PKP. I'm here to talk about this broad idea of square pegs and round holes, or I think really it's mostly a continuation of the conversation from the tech and tools talk, but this is mostly about what people do when they don't perceive themselves as having the tools to do things appropriately. So I guess in a way, I'm mostly referring to this as metadata abuse and what we can learn about it. I think that this is probably a good crowd to talk to. I know, to me, it kind of goes without saying, but I think metadata is actually pretty important. And I think many of you do too. It helps people find research. It helps your research get found by folks. It helps extend the reach of research. It's really what makes all of this stuff go, but a lot of people see metadata fields in a CMS or OJS or any other place as a place you can put chunks of information to make it appear in certain spaces on a website. So for example, I had a class when I was in library school, and one of my instructors one time told me that I should put a YouTube link in a DC subject field, because that would make a video appear on one part of the website. And I yelled at him, and he no longer liked me. But usually when we're talking to a lot of people who are writing scholarship, we're talking to people who often aren't writing XML. They're the kinds of people who, when they write a header or a title or a subject field, change the size of the font and bold it. They're not people who care about semantics. So it doesn't always work this way, and it ends up being a little problematic. We have two really great examples in OJS 2, and we're looking at, there are obviously changes in 3. One is that the OJS 2 native DTD only has first and last name author fields. I guess there's a middle name too. First of all, this is sort of culturally insensitive. This presumes that everybody has a first and last name. That's probably not good. Secondly, it means that people write things like "The Editors" as authors. Or, under first name, "The Canadian", and then last name, "Library Association". So that generates a citation that says "Library Association, The Canadian", which is really not appropriate. But I think what we've learned in dealing with people for a very long time is that they don't really care about what the citation looks like. They care about what the table of contents looks like. They're more worried about the look, because they came from a print world, and that's their priority. The other major one is that, I believe, long-dead people don't have email, and they probably don't need to. And many of us have made up, I'm sure, fakeguy@gmail.com. And that email actually does exist, and has received many emails from people who have registered stuff in OJS. But the point is that there are a lot of people who manipulate and abuse metadata to make OJS do something that it doesn't currently do. Some of the examples we've seen in the hosting team include DOIs in titles, extraneous information in page number fields, people abusing metadata fields in the journal setup to add more keywords or a description that then appears at the top of the journal, multiple languages jammed into one locale field. That's a big problem. Thousands of probably fake, but sometimes I bet actually not as fake as intended, email addresses.
And one amazing one I saw the other day was actually an HTML table embedded in an abstract field so that people could put all kinds of information, like keywords and DOIs and all this stuff, in an abstract field. If you export that metadata, you have a giant mess on your hands, but it's because they perceive there to be missing content. What can we learn? I think a lot of things. My early relationship with Alec Smecher is that I would write him an email saying, OJS should do this. And he says, that's not the intended use of OJS. That's great. But it doesn't really matter to the people who just want that to happen anyway. I think that we need to worry a little bit less about trying to incentivize good metadata, because I think that's kind of like, if you're the kind of person for whom good metadata is important, you're also the same kind of person who would probably say, doing taxes is fun. But that's not, you can't incentivize that behavior in other people. So instead, we need to mitigate it. One of the conversations I had actually with Alec yesterday is I think that we should do things like decouple article metadata from the display and the table of contents. And he made a face that was like, oh, no. And I made a face that said, but that would solve a lot of problems for people who aren't getting what they want. I think one of the biggest issues is that a lot of people aren't aware that OJS has different display options that might meet their needs. And like I say, in 3, and I really don't want to undermine the work that's going on with the folks in the UI/UX group, a lot of the stuff in 3 has been taken care of. But these are longstanding issues where solutions for solving your display options are too buried. It might not be obvious to you that you can hide an author field. The other major issue is that the native DTD from 2.x is way too simplistic. We're recording metadata that isn't anywhere nearly granular enough for current demands. So I think what users are saying is: easier, please. And clear takeaways are that we should probably move to JATS. Display options should be more transparent. And this is the takeaway for me. Seemingly oddball customizations made by users to solve seemingly unique issues should be carefully considered. I think that'll do it. See you all later. Thank you.
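To make the kinds of metadata abuse described above a little more concrete, here is a minimal sketch, in Python, of the sort of sanity check a hosting team might run over exported article metadata before passing it along to indexes. The field names, sample records, and patterns are hypothetical illustrations for this write-up, not part of OJS or any PKP tool.

```python
import re

# Hypothetical flattened article records, roughly as they might look after a
# metadata export from a journal management system; field names are invented.
articles = [
    {"first_name": "The Canadian", "last_name": "Library Association",
     "email": "fakeguy@example.com", "title": "Annual Report",
     "abstract": "<table><tr><td>keywords: policy; doi: 10.1234/abcd</td></tr></table>"},
    {"first_name": "Ada", "last_name": "Lovelace",
     "email": "ada@example.org",
     "title": "Notes on the Analytical Engine doi:10.1234/engine",
     "abstract": "A plain abstract with no markup."},
]

PLACEHOLDER_EMAIL = re.compile(r"(example\.com|fakeguy|noreply|none@)", re.I)
DOI_IN_TEXT = re.compile(r"\b10\.\d{4,9}/\S+", re.I)
HTML_MARKUP = re.compile(r"<[a-z][^>]*>", re.I)
ORG_WORDS = re.compile(r"\b(association|editors|committee|university|society)\b", re.I)

def audit(record):
    """Return human-readable warnings for one metadata record."""
    problems = []
    if PLACEHOLDER_EMAIL.search(record["email"]):
        problems.append("suspicious placeholder email")
    if ORG_WORDS.search(record["first_name"] + " " + record["last_name"]):
        problems.append("organisation name in personal-name fields")
    if DOI_IN_TEXT.search(record["title"]):
        problems.append("DOI embedded in the title")
    if HTML_MARKUP.search(record["abstract"]):
        problems.append("HTML markup in the abstract")
    return problems

for rec in articles:
    for warning in audit(rec):
        print(f"{rec['title'][:40]!r}: {warning}")
```

In practice a check like this would read a real export format, for example OJS's native XML or a JATS file, rather than hand-written dictionaries, but the idea is the same: flag placeholder emails, organisation names in personal-name fields, and identifiers or markup crammed into free-text fields before the records travel any further.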
Open Journal Systems has had a notable impact on academic publishing and open access. It's given scholars with limited resources the ability to create and disseminate quality scholarship. It's also made it easier for many established publications to move into digital distribution. But, sometimes, OJS doesn't do what users might expect. Years of OJS use shows a variety of emergent uses from users thinking outside the box intended by its developers. Some of these changes are dramatic, unique one-offs. Some of these changes are common and sensible. This lightning talk looks to discuss the ways OJS users have taken the software outside the box, for better or worse, and suggests that we should learn from - and sometimes adapt to - these emergent behaviours.
10.5446/51319 (DOI)
I'd like to sort of kick us off again for the afternoon with a panel on perspectives from the global south that I'm not just very pleased we have, but also very proud and honored to have been asked to introduce. You know, in the ten years that I've been with PKP and that I've been working with PKP, I started working in Latin America, really understanding open access, getting involved, and really buying into being committed to promoting and pushing open access, because I was living and working with and interacting with the editors of Latin American journals. And it was from understanding the things that they were doing, their needs, their challenges, and seeing the benefits that open access could bring them, that I came to personally commit to the cause, commit to PKP, and see the role that PKP has been playing in the global south. There's some recognition of this, some sense that it's changing, but the discussions for a long time have been that open access was supposed to be good for the global south because it was about giving them access to the things that were being produced elsewhere. Whereas when I arrived in Latin America, I saw that there was a vibrant sort of open access culture, and that they weren't valuing open access because, oh, we're getting access to these things coming from somewhere else. It was: we're using this as a way of having a bigger voice and having greater participation in the global knowledge exchange. And the role that PKP has played has been giving people that wanted to publish their works in the region a platform on which to make their voices heard and have them contribute by showing what it is that they're working on. And I think even more now there's greater recognition that it's not just about them putting out their research, but also showing the rest of the world the ways in which open access can be provided, or the ways in which the models of open access can work, that are different than the ways that it works in the north. And so, playing this role of providing a platform, I feel like I've been bouncing back and forth between the north and south, trying to help people here in North America look at the way that things are done in the global south. And it's having a panel like this, where we are trying to give a voice and a place to those perspectives, the lessons that the south has to offer to the rest of the world, the models that it has to offer, and two of the platforms from Latin America that are going to be on this panel are examples of that. We're also going to hear from the Academy of Sciences in South Africa as well about some of the pretty innovative and creative things that they're doing that are lessons that the rest of the world can learn from. So it's a real honor for me to have an opportunity to present these next three speakers, who are going to give us this perspective and what are the things that we can learn from the way that things are being done in the southern hemisphere. So the first speaker that we're going to have up is Alex Mendoza, who has been working with SciELO, a longstanding initiative, one of the largest publishing platforms from Latin America, publishing over a thousand journals.
Alex Mendoza is in charge of working with the editors and interacting and coordinating all of the editors and their submissions. And he's been a big part of working with OJS and with ScholarOne and having that interface between what the editors and the authors are doing and the SciELO platform as a publishing platform. So please join me in welcoming Alex to the stage. APPLAUSE Well, hello. Good afternoon. It's a great pleasure to be here today. Thanks, Juan Pablo, for the great introduction. It's my first PKP conference, hopefully not the last. And I will be talking about the role of SciELO on the bumpy road towards professionalization, internationalization, and financial sustainability of developing country journals. That's a wordy title, sorry. So just quickly about SciELO, for those who haven't heard of us: we are an international cooperation program for the advancement of research communication. So basically everything that we do is focused on that. That is our main goal. And in order to achieve that, we have some specific objectives, such as to maximize the availability, visibility, use, impact, and credibility of nationally edited journals and the science that they communicate, to improve the quality of such journals, of course, and to complement the international bibliography. So how does SciELO do that? Where does the money come from? Well, the SciELO program is led by the São Paulo Research Foundation. It's the largest research foundation in Brazil, also known as FAPESP. So, as Juan Pablo said very greatly, just to quickly show the timeline for the open access movement, SciELO was born in 1997. It was way before some of the most important open access initiatives took place. So you can consider SciELO a pioneer movement in the open access scenario, and also responsible for inserting the journals of Latin America in the open access movement. So SciELO is actually a network. It's now formed by 15 countries, mostly from Latin America, but also Spain, Portugal, and South Africa. It's been in operation for 20 years now, so this year is our 20th anniversary. And so, you want some numbers? Okay, I'll give you some numbers. So we have about 1,000 active journals across all of those countries. Over 52,000 documents published per year, and around 650,000 documents overall over all these years. And SciELO Brazil alone has more than 1 million downloads per day, and that is using the COUNTER methodology. So it's a huge operation. And just to clarify, SciELO is a Brazilian initiative. It started in Brazil, and the other countries joined it later. And each country is responsible for their own funding sources, and governance is also decentralized, although they all have the same policies, the same principles, and they follow SciELO Brazil's guidelines as a model. So we have three main functions in SciELO: indexing, publication, and interoperability. So the indexing is the continuous evaluation of the journals that want to be indexed in SciELO, and also those that are already in SciELO. So for that, we have criteria against which we evaluate our journals. Those criteria are based on best practices, on things that have been done around the world. So the journals that are in SciELO are constantly being evaluated. Our second function is probably our most popular one, the publication. So we have a website where we publish our articles in full, open access. We preserve content, we allow information retrieval, et cetera. And most recently, we started providing online submission services.
And our third function is interoperability, which is the sharing of content across other indexes and services. So for that reason, we consider SciELO a meta-publisher. This is the SciELO Brazil team. We are based in São Paulo, and we are around 50 people in the office. And this was taken in the backyard of our office in São Paulo. 2013 was a very important year for SciELO, because we established three main lines of action that we call professionalization, internationalization, and financial sustainability, which are the long title of my presentation. So professionalization is to produce journals according to the state of the art. So we are always looking at the trends and what other journals are doing. We are pushing our own journals to do the same, or not the same, but what's good about it. Internationalization is to insert our journals in the global flow of scientific information, and financial sustainability is to find ways to do all this with a low-cost model. With that, we hope to grow the performance of our collections and our journals as well. So just to give you an example of professionalization: as of last year, all of the SciELO Brazil journals are using an online submission system. This is part of the SciELO criteria now. So if a journal wants to be indexed in SciELO, they have to be using an online submission system. And if they are already in SciELO, they have to be using one as well, of course. This system must provide easy-to-obtain statistics on manuscripts: number of papers submitted, number of papers rejected, sent to associate editors, reviewers, approved, et cetera. And also time between submission and final decision, and geographic affiliation data. So we need that data because it's one of the ways that we have to evaluate if the journal is following our guidelines, if the journal is having international submissions or international participation from reviewers and editors. So this is one of the ways that we have found. This is the distribution of online submission systems in SciELO Brazil. Almost half of our journals right now are using ScholarOne, which is provided by SciELO thanks to an agreement between SciELO and Clarivate Analytics. That is 46% of them. Then it's followed by OJS, both the instances that are provided by SciELO and the ones that are not provided by SciELO. That's mostly the case of university portals. So we have a big part of Brazilian journals using OJS in university portals. And then a small portion, 13%, are using other systems available in the market. So we provide the system for 180 journals of our collection. So that is 63%. It's a big number. And we provide support and training, full training, for those journals. That is the case of ScholarOne by SciELO or OJS. So we provide support pre-training, during training, post-training. And in the case specifically of OJS, SciELO also provides all the infrastructure. So that means the editor doesn't have to worry about installing OJS or hosting it, upgrades, backups. We do all that for them. The other part, 37%, is journals that use a different system that is not provided by SciELO. And we also hope that this number will go lower and lower each year, because it would be great to have them all under our responsibility. SciELO has developed its own publishing schema. We call it the SciELO Publishing Schema. It's based on the JATS DTD. And we took some of the DTD from PubMed Central, because that would make it easier to make our journals available there.
And we also have some specific SciELO needs. And so the intersection of those two things, both the PubMed Central DTD and our own DTD, is what we call the SciELO Publishing Schema. And it's all under the JATS umbrella. So is this working, all these guidelines, these criteria, these requests that we've been making? Yes, the numbers show that it is working. So we have set some thresholds, some minimum and recommended percentages of international authors depending on the thematic area of our journals. And by 2015, we had some interesting numbers: four of the areas had already met the minimum requirements. And the estimation is that by 2020 all of them would be above the minimum and most of them would be above the recommended level. That is the projection that we have been doing. This is the 10-year evolution of the publication of documents in English. Last year, we published over 60% of our documents in English. And by 2020, we hope to have over 75% of our documents in English. Again, for the internationalization, we are always working to index our journals and papers in as many places as possible. So we have all of our papers available in Google Scholar, the Directory of Open Access Journals, Crossref; we provide DOIs for all of them. 70% of our journals are available in some major commercial indexes such as Scopus or Web of Science. We provide some services as well, such as ReadCube's enhanced PDF. So apart from the regular PDF, every document in SciELO Brazil is also available as an enhanced PDF. We provide altmetrics and soon, later this year, we will be providing fixtures as well. So this is the same distribution that I showed earlier: 46% ScholarOne, 41% OJS and 13% other systems. Regarding standards, we have been working fully in JATS for the past three years, following the JATS XML DTD. 80% of our documents are under the Creative Commons BY license, and we expect to go over 95% by next year, which is great. SciELO also has a very good relationship with other major players in the open access movement such as PKP, OASPA, the Directory of Open Access Journals, COUNTER, etc. So this is just a diagram showing where we were at the beginning and where we will be going for the next few years. So in the beginning, when SciELO was established, SciELO did most of the work and it was the major player. The editors weren't that much involved. The authors even less, so the editors sent us their files the way they were and we did all the work. This changed in 2013 when we adopted the SciELO DTD. So we threw the ball to the editors. Now they are doing that. They are providing the files following our DTD. So we share some of the responsibility with the editor. It's also the same year we started working with continuous publication, and because of the XML adoption we were able to broaden our interoperability. For the coming years we will be inserting the author as well into this play. So we will start to work with preprint repositories. We are also looking at ways to offer authoring services. Also, open data is a big thing for us, so we are looking for solutions for that. So now we can have more shared responsibility and more structured data from the beginning, not only at the end of the workflow, but have some structured data since the beginning of the workflow, when the author becomes a more active player. Regarding the SciELO and PKP cooperation, just some historical data. We started using OJS in 2006.
We have around 50 journals that are using it. In 2009, SciELO Chile adopted it as well. But in this particular case the support is still provided by SciELO Brazil. So all the infrastructure is with us, and we also support their team, their technical team. This year SciELO Brazil joined the PKP technical committee. We hope that our experience not only with OJS but with other platforms will help to enhance OJS and make it a more competitive platform. SciELO's recommendations are to be included in the core of OJS, of course after approval from all the committees, etc. We have also been collaborating on the development of Texture. We published a blog post yesterday on our blog. So make sure to read that to find out more about it. And we are also going to contribute financially to PKP for the development of its tools. We are expecting to have a $30,000 investment from SciELO Brazil, SciELO Chile and SciELO Mexico. It would be a yearly contribution. And the cooperation would of course be extended to other products such as OMP. We have been working with OMP for SciELO Books. It's still a pilot program, but it's going somewhere. Moving forward: SciELO Preprints for next year. We hope to start using that not only for SciELO Brazil but for the whole network. Also research data: we are finding ways to make it mandatory for our authors. Adoption of continuous publication: we have been doing that for quite a while now. There has been some adoption, but we need more, so we've been promoting it and it's working. Several journals are using that already, but hopefully over the next few years this will increase. Of course, we will also strengthen even more the insertion of journals in the global flow of scientific communication. And our new platform is coming later this year or at the beginning of next year. So our website has been looking like that for like 20 years, and it's time to change. So we are building everything from scratch, and it's looking very great, very new and very modern. So look out for that. It's coming up shortly. You are all invited to check our blog. We have a blog at blog.scielo.org. It's available in English, Spanish and Portuguese. And we cover all sorts of topics about scientific communication, such as indexing of journals, bibliometrics, scientometrics, ethics and open access. We also have a newsletter that follows it. It sends out somewhat monthly digests of what has been published in our blog. So you are more than invited to sign up for it. We are also on the social networks, so you can find us on Twitter, on YouTube and Facebook. And I'm happy to invite all of you to the SciELO 20 Years conference. It's going to be held next year in São Paulo on September 26, 27 and 28. It's going to be a huge conference to celebrate our anniversary and also to talk about some very interesting, trendy topics about open access and where we are going. So you are all invited to be there. And that was pretty much it. I thank you very much for your attention.
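Since the talk leans heavily on the SciELO Publishing Schema being a JATS-based format, a tiny worked example may help readers unfamiliar with JATS. The Python sketch below assembles bare-bones, JATS-flavoured front matter using only elements from the public JATS tag set; the function, sample values, and author names are invented for illustration, and the sketch does not reproduce the additional rules that the actual SciELO Publishing Schema layers on top of JATS.

```python
import xml.etree.ElementTree as ET

def minimal_jats_front(journal_title, issn, doi, article_title, authors):
    """Build a minimal <article> with JATS-style front matter only."""
    article = ET.Element("article", {"article-type": "research-article"})
    front = ET.SubElement(article, "front")

    # Journal-level metadata.
    jmeta = ET.SubElement(front, "journal-meta")
    jtg = ET.SubElement(jmeta, "journal-title-group")
    ET.SubElement(jtg, "journal-title").text = journal_title
    ET.SubElement(jmeta, "issn", {"pub-type": "epub"}).text = issn

    # Article-level metadata.
    ameta = ET.SubElement(front, "article-meta")
    ET.SubElement(ameta, "article-id", {"pub-id-type": "doi"}).text = doi
    tg = ET.SubElement(ameta, "title-group")
    ET.SubElement(tg, "article-title").text = article_title

    cg = ET.SubElement(ameta, "contrib-group")
    for surname, given_names in authors:
        contrib = ET.SubElement(cg, "contrib", {"contrib-type": "author"})
        name = ET.SubElement(contrib, "name")
        ET.SubElement(name, "surname").text = surname
        ET.SubElement(name, "given-names").text = given_names
    return article

# Invented sample values, purely for illustration.
doc = minimal_jats_front(
    journal_title="Example Journal of Open Science",
    issn="1234-5678",
    doi="10.1234/example.2017.001",
    article_title="A Worked Example of JATS Front Matter",
    authors=[("Souza", "Ana"), ("Pereira", "João")],
)
print(ET.tostring(doc, encoding="unicode"))
```

Structured front matter like this is what makes the interoperability described above possible: once the metadata is captured in JATS-style elements rather than free text, it can be transformed, validated, and shared with indexes without re-keying.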
The SciELO Program is implemented through the 20-year-old SciELO Network, composed of 15 national collections of open access peer-reviewed journals from Latin American countries, Portugal, Spain and South Africa. Through its three main lines of action, known as Professionalization, Internationalization and Financial Sustainability, SciELO aims to improve the quality, visibility and credibility of nationally edited journals of developing countries and the research they communicate.
10.5446/51320 (DOI)
Hi everyone, my name is Kambis Gizai. I'm a research associate at Polytechnique Montréal. I'm going to talk about a podcast that we are developing; it's a podcast on open science, which was started by Ilya Stabiae and myself a few months ago, and it's called the Colper Science podcast. So basically what we are doing is we interview those who are active in open science, for example, those who are doing something interesting with open data, open source programs, or developing an open access publishing platform. In our previous episodes, we interviewed Jane Berpie, an associate librarian at McGill University, who is here with us. She talked about open access publishing and how researchers can help the open access movement by sharing their manuscripts in open access archives. She also talked about non-traditional scholarly communication. She introduced video journals, and a story about a student at Harvard who submitted a rap album as his thesis, which was a very successful story. We also interviewed Dr. Patrick Dio, who is also here with us. He kind of introduced some open source software which can be a good replacement for proprietary software, and kind of talked about how to move from proprietary software towards open source. Here are other examples of the episodes we did. Like Bastien Grisakhe: he talked about his project on openSNP. Dr. Monica Granadas: she talked about how to work in the open and how to do research in the open. Dr. Rachel Harding: she talked about her blog, which is called Lab Scribbles, where she very frequently posts updates with the data she obtains from her experiments. If you'd like to hear these stories on open science, you can find our podcast on our webpage, colperscience.com. There is a link to our podcast. You can choose the episode that you'd like to hear. Also, if you'd like to get informed about the episodes that come out, you can subscribe to our RSS feed. If you'd like to follow us, there are links to our Facebook and Twitter at the bottom of the page. If you'd like to listen to podcasts using your phone, you can find us in Google Play Music, Apple Podcasts, and other podcast apps such as Podcast Addict. The most important thing is: if you're working on a project which is related to open science, like you're working on open data, open source programs, or you're developing an open access publishing platform, feel free to approach us. If you'd like us to cover your story, we would love to do that. You can contact either me or Ilya, my colleague here. Also, there is a link to our email address, contact at colperscience.com. That's pretty much it. Feel free to approach us and talk about your stories on open science. Thank you very much.
The progress in technology has had a great impact on people’s daily lives, especially in the last two decades, by delivering new ways of communication, education, collaboration and sharing information. If we compare that to the evolution of the scientific publishing process during the last hundred years, we realize very little has been upgraded regarding editing, the peer review process, accessing scientific content/data, etc. For the realization of any major change in scientific publishing, we need to inform the research community about the existing issues and initiate constructive discussions around the subject. Recently, we developed a project called Colper Science with the aim of advancing scientific publishing towards a more open and efficient system. We inform people about the concerns regarding scientific publishing through our podcast and social media, hoping to trigger productive discussions in order to discover practical solutions. Future publishing systems can evolve based on novel ideas derived from consultations among the research communities.
10.5446/51321 (DOI)
Okay, versioning of published articles in OJS 3, or open-heart procedures. I'm working on the project OJS-de.net at the Freie Universität Berlin, as I already said. We are doing software development for the German community and also building a network of German OJS users. So why would we want versioning for published articles? Well, imagine you have published an article and you want to publish an updated version because something changed or you found new information. So you would want to publish this changed article and also keep the old version, because that's what you do. So you want to display the metadata accordingly as well. This is what I did most of the time: I drew images about the concept for this kind of thing, and it really took a long discussion with PKP about how to implement this correctly, and I drew a lot of images of how this could be displayed at the back end. See here. So let's have a look at the result. This is what it looks like in the back end. This is the workflow stage, the last one, production. You can see a tab for each version. And the metadata and the files are displayed in this tab according to the version you have selected. And you can create new versions very easily by clicking a button, which automatically copies all the data that you need. And you can edit it. And when you're done editing the new version, you can publish the version. Yes. At the front end, this is the article page, as you know it, but there are some changes. The version history is displayed, so you can view the old versions. Each version is displayed on its own article page. And the URLs contain the version number, so researchers are citing the correct version. And for that reason, the "how to cite" block has also been adapted to use the URL containing the version number. And you can always reach the most recent version at the known URL. And there will always also be the URL containing the version number. And if it's an old version, there's also the hint that it's an old version and the link to the most recent version. What we learned while building this new feature for OJS: it's better to base your work on a stable system, because we started implementing before OJS 3 came out. So that was really hard, because the concept was changing and we had to rewrite the concept sometimes. Also, it's better to write the concept before starting coding. And yeah, communication is always very important. And now I'm too fast, as always. So I will talk a lot about the next steps, because there are so many. Yeah, we implemented all that, and we have to merge the code into master eventually. But it will be very interesting, because there have been two major changes. And I'm really looking forward to doing this merge after my stay here. And also there will be a need to adapt third-party plugins, because we changed a lot of the core functionality in OJS. So other code parts must be adapted as well. Thank you. Thank you.
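As a rough illustration of the behaviour described above, here is a small Python sketch of a version-aware article model: a stable URL always resolves to the most recent version, each version also gets its own permanent, citable URL, and superseded versions can point readers to the latest one. The class names and URL scheme are hypothetical, for illustration only; they are not the actual OJS data model or routing.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArticleVersion:
    number: int
    title: str
    published: str  # ISO date, e.g. "2017-08-15"

@dataclass
class Article:
    article_id: int
    versions: List[ArticleVersion] = field(default_factory=list)

    def create_new_version(self, title: str, published: str) -> ArticleVersion:
        """Mimic the 'copy and edit' workflow: each new version gets the next number."""
        version = ArticleVersion(len(self.versions) + 1, title, published)
        self.versions.append(version)
        return version

    def latest(self) -> ArticleVersion:
        return self.versions[-1]

    def url(self, version: Optional[int] = None) -> str:
        """Bare URL shows the most recent version; /version/<n> pins a citable one."""
        base = f"https://journal.example.org/article/view/{self.article_id}"
        return base if version is None else f"{base}/version/{version}"

article = Article(article_id=42)
article.create_new_version("Original study", "2017-03-01")
article.create_new_version("Original study (corrected dataset)", "2017-08-15")

print(article.url())           # always resolves to the most recent version
print(article.url(version=1))  # permanent link to the superseded version
for v in article.versions[:-1]:
    print(f"Version {v.number} is outdated; see {article.url(version=article.latest().number)}")
```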
The option to publish multiple versions of one research paper, while maintaining the original version and allowing for a version history, is a much sought-after feature for open access publishing. OJS did not support this kind of agile publishing - until now. The German OJS network project OJS-de.net has created a solution to add versioning of published articles in OJS 3. In this lightning talk, I will briefly outline the main functions of the new versioning feature, while also mentioning some of the complex workflow decisions that had to be made while designing and developing it. We decided to implement an option to centrally enable or disable versioning for the entire journal. Once activated, the editors can outline their versioning policy in the "About" section of the journal. The design of the versioning workflow is similar to the concept of review rounds, and new article versions can be created by the push of a button. Once a new version is created, older versions cannot be edited anymore, in order to encourage editors to transparently label all changes in an article - rather than just changing an existing article, like before. Versions are linked on the article page under the heading “version history”, and the corresponding metadata and files are displayed on their own page, while old versions have a notice and a link to the most recent version.
10.5446/51324 (DOI)
So, both for-profit and not-for-profit organizations increasingly use big data not only to study what has happened, which is called data analytics, but also to predict future trends, which is called predictive analytics. With certain notable exceptions, such as student recruitment in US institutions and compulsory evaluations of research productivity in the UK and Australia, academia has generally lagged behind other sectors in its use of big data. One domain that has moved halfway into collecting and analyzing big data is scholarly publishing, whose stakeholders of varying size include libraries and other research institutions, learned societies, for-profit publishers, and not-for-profit publishers. So, when I say big data and scholarly publishing, what am I talking about here? This is not data sets created by researchers or other types of research outputs, which we sometimes call research data; I'm talking about big data about published research. So what kind of big data about published research? This is data generated by publishers and aggregators of content, such as purchasing data, licensing data, online usage data, web analytics, and even subject classifications of products. But also data from research institutions, so for example library data, a few types are given here, and what might be called structured productivity data that comes out of these sort of online faculty CV systems. They go by many names, perhaps you know them as CRIS or RIMs, people call them many different things. But I'm also talking about data from third parties. So for example, bibliometric services and social networking sites. So all of these, like other forms of big data, can be used for various types of assessment, but also for predictive analytics. For example, which types of publications are most likely to be purchased, used, and cited. So gathering, integrating, interpreting, and reporting data about published research takes a lot of time and expertise. Not something that many publishers and libraries, especially smaller, not-for-profit ones, have a lot of. These publishers and libraries struggle to identify, much less predict, important usage trends and opportunities through which they might extend their impact. But it's not just data collection and analysis that are expensive, there are also concerns about the use of analytics by those who can afford to collect and analyze the data. So what if we formed a cooperative of libraries, scholarly societies, publishers, aggregators, and other stakeholders who would each contribute to the governance of this cooperative, this member organization? What if the members contributed data that they create about scholarly communication, so their kind of small view of the world, to this cooperative, and then the cooperative, thanks to fees paid by its members, had the staff and tools to aggregate, normalize, and contextualize this data for its members, showing them how their little piece of the world, the data that they gathered and contributed, relates to that of all the members, but in a way that would adhere to a code of conduct about the ethics of this use of data? There's really much to be worked out here, and perhaps these ideas are already premature, and it would be safer for me to say simply that a community of stakeholders would come together to jointly develop governance, sustainability, and ethical frameworks for how data about publications should be gathered, analyzed, and shared.
I'd like to think that such a cooperative, let's call it a publishing analytics data alliance, could be designed to provide something of value to even the largest commercial players, and that a shared governance model would ensure a continued community voice in how data about publications is gathered, analyzed, and used. We really need the involvement of as many stakeholders as possible. We can't just convene a group of experts to propose the whole model, and then expect everyone else to jump on board. If they don't feel invested from the start, it won't go anywhere. So the vision I've just outlined is something I've been developing with some colleagues here. You can find out more information here. We are interested in hearing from others on their thoughts. We want to do an environmental scan, but none of us have time for it. And so if you have some ideas of how we can move forward on that, we'd love to hear from you. Thank you.
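As a rough sketch of the "aggregate, normalize, and contextualize" idea behind the proposed cooperative, the Python snippet below pools hypothetical member-contributed usage counts and shows one member its share of the pooled totals. The member names, subject areas, and numbers are invented for the example; a real service would work from standardized reports (for instance COUNTER-style usage data) under a governance-approved schema and code of conduct.

```python
from collections import defaultdict

# Hypothetical member-contributed usage reports: each member only sees its own
# downloads per subject area until the cooperative pools them for context.
member_reports = {
    "Library A": {"public health": 1200, "philosophy": 300},
    "Press B":   {"public health": 800,  "philosophy": 2500},
    "Society C": {"public health": 400,  "philosophy": 100},
}

def pooled_totals(reports):
    """Aggregate downloads per subject across all contributing members."""
    totals = defaultdict(int)
    for usage in reports.values():
        for subject, count in usage.items():
            totals[subject] += count
    return dict(totals)

def member_in_context(member, reports):
    """Return, for one member, its share of the pooled usage in each subject."""
    totals = pooled_totals(reports)
    own = reports[member]
    return {subject: own.get(subject, 0) / totals[subject] for subject in totals}

for subject, share in member_in_context("Library A", member_reports).items():
    print(f"Library A accounts for {share:.0%} of pooled '{subject}' usage")
```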
Both for-profit and not-for-profit organizations increasingly use big data not only to study what has happened (data analytics) but also to predict future trends (predictive analytics). With certain notable exceptions (student recruitment in US institutions and compulsory evaluations of research productivity in the UK and Australia), academia has generally lagged behind other sectors in its use of big data. One domain that has moved halfway into collecting and analyzing big data is scholarly publishing, whose stakeholders of varying size include libraries and other research institutions, learned societies, for-profit publishers, and not-for-profit publishers. These stakeholders generate and collect various types of data, especially relating to content usage and sales, but often lack both the resources to explore the data and ways to compare their data with that of other stakeholders. The situation is not one where a market participant tries to acquire competitive intelligence to help them compete against others; rather, because the stakeholders are so tightly related, they nearly all have some sort of data that would help all of them function more efficiently. Unfortunately, the challenges associated with gathering, integrating, interpreting, and reporting usage data limit the ability of individual publishers, libraries, and other stakeholders to identify—much less predict—important usage trends and opportunities through which these organizations might extend their impact. At the same time, there are real concerns about ownership of, access to, and analysis of data for "predictive bibliometrics". Furthermore, while all stakeholders would like to have rich data and be able to carry out predictive analytics of some sort, the high cost of providing or purchasing data-related services risks reinforcing inequalities in the landscape of scholarly publishing. This paper will present a vision for a cooperative of stakeholder institutions called the Publishing Analytics Data Alliance. Member institutions will contribute data that they gather about scholarly publishing to the cooperative, which will normalize and aggregate data for exploration by its members, who will be able to see their data in the context of their peers. The cooperative's members, through a system of shared governance, will also establish an ethical framework governing the functionality of the cooperative's data services and, more generally, the use of data by members. Beyond shared governance, the pooling of resources by the cooperative offers a way for members to achieve more than they would be able to on their own -- namely, to explore and analyze data about scholarly publishing. It is hoped that the cooperative will lead to increased cooperation and efficiency in the scholarly publishing ecosystem, all the while addressing ethical concerns raised by the power of data.
10.5446/51328 (DOI)
Hi everyone. My name is Graham Slad. I'm a copyright outreach librarian in the scholarly communications and copyright office at U of T, and my colleague and graduate student Samantha Elmsley couldn't be here, for happy reasons. She's done most of the labor on this project, so I just wanted to acknowledge her lack of presence on the stage today. When I started doing these slides I realized I'd produced a very verbose title, so in response I produced an alternate title: looking the gift horse in the mouth. And I'll know, if I haven't gotten to the horse at the two-minute mark, to speed up. So this is sort of another story from the trenches of green open access. It's about a project that's been running at U of T since 2015, in collaboration with the Faculty of Social Work, to create an open access collection of faculty scholarship which would be hosted in U of T's repository, TSpace. The project grew out of the Faculty of Social Work's academic plan, which was written in 2011, and in 2015 they realized they hadn't really done anything about the first pillar of their academic plan, which was to establish an infrastructure to develop and take the lead in knowledge mobilization in social work. So they got together and created a memorandum of agreement with the library, to basically hire the library's copyright office to do copyright clearance on their faculty's research and ingest it into the repository. And they did this in kind of an atypical way. The estate, just in keeping with what seems to have been the sub-theme of this conference, dead people, the estate of a rich alumnus had donated money to the faculty, and it's with this money that they secured the funding to support the ingestion into the open access repository. And the social work faculty, I should say, has a strong research profile in global public health, so it sort of fits open access quite well. One thing, thank you, the horse needs to be coming up pretty soon. Just one aside: dealing with a sort of informal philanthropic relationship has its challenges. My colleague Maria, who's in the audience, has received a phone call from the children of the estate, the person who the library is named after, just inquiring about the project. Moving on, the project structure is sort of that the model is to embed the green open access ingestion into the faculty. So we work closely with the research officer; we're working on putting open access buttons into the faculty bibliography pages and doing sort of more flashy attention-economy stuff on the actual Faculty of Social Work's website, rather than in the repository itself, which we all know can be fairly plain.
See in here, so it's sort of like a typical faculty bib project, but with some added scholcomm literacy to the faculty and communications, which I'll get to at the end. So far so good, and we've managed to secure more funding, and it's now sort of an endowed position in the library that will support the ingestion of social work scholarship permanently, on an ongoing basis. So here's the horse part. This is great, but when we're looking at expanding it and making it permanent, we sort of have to do an environmental scan of what the faculty members in the social work faculty are actually doing, and there's a fairly small group of them, so we can actually do this manually. There are 31 of them, and we noticed right away that 16 of them are active users of ResearchGate, and these 16 users had uploaded a total of 458 full-text articles into ResearchGate, and 313 of these infringe copyright or the policies of the publishers that, as green open access trench workers, we enforce. So that's a copyright infringement rate of about 69 percent, sort of in keeping with some of the recent studies that have looked into ResearchGate usage by researchers. So then I sort of start to ask myself, what are we doing exactly? Are we just sort of a copyright enforcement arm of Taylor and Francis, which happens to publish all of the big journals in the field of social work? What can we do to kind of create some other value aside from doing this work? And what can we also do, instead of being sort of corporate enforcement agents when we're reaching out to faculty, to change this conversation more towards the themes of this conference: about ownership, about participation, and about social justice? So with this amazing gift, what we're going to start focusing on is sort of the communication piece, which I put two asterisks beside earlier in the talk. What that's going to mean is sort of videos of faculty talking about their work, to start with. Thank you very much. If you have any questions please feel free to come right up to the microphones. We've heard a couple of presentations that were talking about copyright and either how it's not clear or how we're infringing it, and about a year or so ago I heard a presentation by Brewster Kahle of the Internet Archive, whose approach to copyright was, well, let's just put it up and see what happens. You know, the legal advice he got was that bad things will happen, but as he pointed out, invariably nothing much happened, and in the very few cases where something did, they took the material off right away and the problem was largely solved. And I do sympathize, trying to trace down those, you know, incorrect or incomplete copyright statements is a lot of work, but what if we shifted the focus a bit and did it more as a risk assessment and decided, so, what is possibly going to happen if we put some things up and the copyright around them is kind of murky? I know a couple of you addressed that topic; I'd just be interested to hear your comments on that as a strategy. Oh yeah, I think about that a lot, and there are collections that we've brought into our repository where the copyright is feeling unclear, or we know we're taking a risk when we're doing that. But it's interesting, the interplay between, you know, taking those sorts of risks and thinking about who you are representing when you're assuming those risks. Like, am I assuming risk on behalf of my university when I'm doing that? Who do I need
to ask permission of to assume a risk like that? We have these workflows for processing those CVs, and it's just really hard to adjust our workflows to account for the apparently infinite ways that people express copyright. Somebody tweeted that RightsStatements.org might be a helpful framework to think about in all of this, and we're definitely looking at RightsStatements.org as a potentially helpful framework to use with our IR that might, yes, help us just get those materials in without fussing around so much with that. Just to say something a bit more about ResearchGate, in light of things that have been in the news recently: publishers don't seem to have an interest in enforcing their policies on a site like ResearchGate, and they don't seem to have an interest, except in special cases, in enforcing their policies on institutional repositories either. So I think there is value in a risk assessment framework, but especially for the project I'm talking about, where the faculty members are all publishing in sort of commercial journals, they've already given up ownership, in most cases, of their work, and they're much more inclined to participate in the economy of attention rather than the sort of things that IRs are really good at, which is processing information and making things available. There are other pieces of the puzzle that they're more attuned to; they're willing to overlook some other aspects that we think are important. I just want to add that in Latin America, especially in journals, I think copyright information is a pending task, because we always say that Latin America is open and open access, but actually, from my experience with editors, many of them are not aware about permissions, and they, for example, put licenses like CC BY or another Creative Commons license but are not sure what they are stating. So I think it's a very pending task, and again, how we can solve this is training and knowing more about it. The landscape does vary by institution, and if you are functioning in a higher-risk, perhaps more litigious landscape, then practicing due diligence, and being able to show that with every item that you've deposited in the repository, may be prudent. I'd like some clarification. I have a ResearchGate account which ResearchGate made for me. I have never deposited anything there in my life. Every once in a while they notify me that they found another full text of some article of mine and they put it on, you know, I don't know a thing about this whole profile. Every once in a while they ask me, am I really the author of this article? Well, I am, and so I say yes. That is my entire participation in this process. Am I supposed to be going through and seeing if they've stolen these things from someplace, and then tell them to take it off? What is my responsibility here? I initially had one of those emails in my slides, like, Graeme, is this you? And you click on it. So in some ways, yes, their business model is kind of a spam model, but it's a good question. I mean, in the research that we did of the social work faculty, about three-fourths of the material actually had been uploaded by our faculty members; about a quarter of it had been uploaded by faculty members at other institutions where our faculty
members were co-authors and so I'm not sure if some of your material is a co-authored work that someone else may have made available but I know Researchgate you know their data processing bibliographic data processing they may just have pegged it pinged you monthly or something like that with something that's in their database you know you go on to Google you ask how many papers did I do and then if you want to you just go and look for them and and grab them in places stick them on my profile you know it wouldn't be hard to put there a lot of illegal things was that ever asking me do we have any more questions for our speakers this afternoon all righty well thank you again to everybody for your great
This lightning talk will address the specific challenges of administering a donor-funded open access collection of social work scholarship, the Sophie Lucyk Virtual Library. Since 2015, the University of Toronto Libraries has embedded a graduate student within the Faculty of Social Work to collaborate with staff, graduate students and faculty to locate, ingest and negotiate with publishers for rights to publish the results of scholarship within a certain subject area. All of this has been supported by gifted funds to the department from the estate of a deceased alumnus. Recently, this effort has been expanded via a new donation and matching funds from the Faculty to be an endowed position, sustainably and permanently assisting with the “greening” of Factor-Inwentash’s research output. The talk will address the particular advantages and challenges of this model of funding for open access. Most of all, it will address how the locus of ‘ownership’ of the project, nominally within the development officer’s portfolio but with significant oversight from the Dean’s office as well, has posed some interesting questions about how the library typically supports open scholarship. How ‘deep’ does the library’s commitment go when, to some, a repository seems like only a highly functional but low-impact container for files? How can we better “sell” the knowledge mobilization inherent in Green Open Access, and embed it within the business practices and workflows of departments and individual researchers? What, in fact, is a donor getting for their money? Key take-aways: Demonstration of an atypical funding model for mediated deposit, that embeds open access in the day-to-day operations of a Faculty. Details about the costing out of the model; can it be spread to other departments or contexts? Aligning interests: getting creative about funding and sustaining open access, i.e. how to talk about it in the language of philanthropy. Directly after the presentation is a panel discussion with the following speakers of the Lightning Talks (Round 3): Swatchena, Janet; Kosavic, Andrea; Appleby, Jacqueline White; Šimukovič, Elena; Lujano, Ivonne; Vanderjagt, Leah; Hawkins, Kevin S.; Slaght, Graeme
10.5446/51329 (DOI)
Thank you. Hi. We made an analysis of the use of OJS interoperable resources in the different journals that are included in the classification system of Conacyt in Mexico. We analyzed the interoperability challenge of journals included in this classification system sponsored by the Council of Science and Technology of Mexico. According to information provided by Conacyt, 80% of journals included in this system are using OJS, but most of them only as a publishing platform, not as an editorial manager. We think this is one of the main problems. The objective of this research, as each one of the 110 OJS journals has different usage levels, is to identify the main problems present for exchanging information with various platforms and databases: DOAJ, Scielo, Dialnet and Redib. The problems that we identified are grouped into two kinds: those attributed to the lack of technical training for the journal editorial teams, and those related to the conditions of interoperability of the platform itself or that of other systems. We think that understanding the causes behind these interoperability problems contributes to the generation of technical improvement proposals for different scientific information platforms as well as the training of the editorial teams themselves. This is about the methodology. This presentation is on Figshare; you can read all the numbers. I only want to focus on the conclusions. But probably this is important: we made a survey with different editors and we asked about the type of use of OJS, the degree of knowledge of the use of metadata harvesting, the use of the Open Archives Initiative Protocol for Metadata Harvesting, and about the interoperability with other databases. We asked about their major problems when they are using OJS. The results, as I told you, are in the presentation on Figshare and you can see all the different numbers. But I think the most important thing is the conclusions. The journals that we analyzed have metadata exchange tools with other databases and aggregators. So the main challenge is not related to the technological infrastructure but to the professionalization of the editorial work. We say that this, the human interoperability, is the main trouble. Editors opt for what means less work for them, like being harvested by other aggregators, even if it means losing visibility of their own web pages. They don't know about the operative functions of the software that they use. In Mexico, but not only in Mexico, in all Latin American countries, the institutions don't have a specialized job in scientific publishing, nor do they offer opportunities to professionalize editorial work. Most editors are self-taught. For future research, it will be important to analyze the perspective of the editorial teams about the additional value that each database and aggregator gives to the visibility of the journal, and the decision that each team makes regarding technical actions that must be implemented. Okay, thank you. Thank you.
We analyse interoperability tasks of 40 OJS journals with different usage levels, and identify the main problems present for exchanging information with various platforms and databases (DOAJ, Scielo, Dialnet, Redib). The problems are grouped into two kinds: those attributed to the lack of technical training for the journal editorial teams, and those related to the conditions of interoperability of the platform itself, or that of other systems. Understanding the causes behind these interoperability problems contributes to the generation of technical improvement proposals for different scientific information platforms as well as the training of the editorial teams themselves.
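As a minimal, hedged sketch of the OAI-PMH interoperability discussed in this talk, the snippet below issues a single ListRecords request against a hypothetical OJS endpoint. The base URL and the decision not to handle resumption tokens are illustrative assumptions, not details from the talk.

# Minimal sketch: harvesting Dublin Core records from an OJS journal over OAI-PMH.
# The base URL is a placeholder; OJS installations typically expose an endpoint such
# as /index.php/<journal>/oai, but the exact path varies. Resumption tokens (paging)
# are deliberately not handled in this sketch.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def list_records(base_url, metadata_prefix="oai_dc"):
    """Yield (identifier, title) pairs from a single ListRecords response."""
    url = base_url + "?" + urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        identifier = record.findtext(".//oai:identifier", default="", namespaces=NS)
        title = record.findtext(".//dc:title", default="(no title)", namespaces=NS)
        yield identifier, title

if __name__ == "__main__":
    # Hypothetical endpoint -- substitute a journal's real OAI URL.
    for identifier, title in list_records("https://example.org/index.php/journal/oai"):
        print(identifier, "-", title)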
10.5446/51330 (DOI)
Hi everyone, my name is Rosari Cochlin. I am a scholarly publishing librarian at Queen's University in Ontario here in Canada. So briefly today I'd like to talk about thinking again beyond the journal or the traditional journal using OJS as a platform for non-traditional scholarly outputs, a use case at Queen's. Great, can you all hear me gobbling up my minutes? So four points in four minutes. Briefly, building a non-traditional journal using OJS 3.0.2, managing the upgrade to that new version, using it as a platform for a non-traditional journal and developing outreach beyond. So in about the spring of this year I was approached by one of our founding OJS publishers, it's a journal called Encounters in the Theory and History of Education. They said they wanted to start publishing a new and radically different digital section in their otherwise reasonably traditional journal. So we started with a couple of questions, as is always good in life. I wanted to get a sense from the, you know, how do you want your readers to engage with the content in this new digital section that they'd be creating? What sort of experiences did you want them to have with the content? What would be the form and delivery mode of the content, so the format, the types, etc. And the question in my mind, you know, to what extent does an out-of-the-box solution like OJS meet these needs? So we have about 15 faculty, whoops, faculty and student-led journals at Queens using OJS. This was really the prompt that forced us to move from OJS2 to OJS3, so we're quite excited by that new version. When I asked them, you know, to sort of give me a sense of what they were looking for, this is the kind of thing that they wanted. So four key things, they wanted it to be visual, to be integrated, to enable reader interaction, it was keenly going to be reader-centric, and reader and author-centric content. Some of the things that they wanted was the ability to be able to integrate digital methods and or digital media, so they said we want text analysis, text topic modeling, computer vision image analysis, GIS analysis, or any form of network analysis to be enabled in this new digital vision, if you like. The kinds of digital media, interaction with visual displays, data visualization was something that they wanted really to bring to the fore, and to enable interactive narratives, so to enable a kind of real-time exchange and commentary between authors, creators, and the readers themselves. They placed a fairly reasonable technical demand on their authors for would-be submissions. They were hoping that again it would attract submission from authors that were fairly comfortable using JavaScript, and they also asked for a level of autonomy with the platform, that it would support WordPress and that they would be able to sort of get into the back end workings. So it was the prompt for us to do our upgrade, a little bit on how we manage that process, it was a fairly kind of painless process, I put together a guide, all about the new version, we developed a kind of timeline and a plan. Keenly for our journals, planning of the upgrade and clear communication really served us well, and I also developed a blog to keep them up to date with how that process was going. A couple of points on using OJS as a non-traditional platform, it offers greater editorial flexibility, that was a real sell for this new version, which has gone down really well. Improve look and feel to the journals, much more updated. 
The functionality supports though more or less the traditional journal structure, that traditional kind of article form. I think we are sort of wanting to push the limits on maybe positioning it as a more of a digital humanities infrastructure, so I guess in many senses it largely is a journal, a traditional journal platform. Should it be built in a way that supports more flexible digital humanities projects, could it be? For obvious reasons, as a hosted service at Queens, we have some practical limits on the extent of autonomy that we could give to this publisher to actually delve into the back end of our hosted site, Keenly Security and other considerations there. The editors' perspectives, this new digital issue, it hasn't been published yet, so they're still receiving submissions, so I can't really show you the end product, but the feedback has been very strong. They really like the fact that OJS3, it really streamlines the editorial process, so feedback for our PKP colleagues, thumbs up there. It makes it easier for anyone in the editorial team to really see where submission is along the way. Just at the bottom there, I wanted to highlight, so this came from Katie Brennan, the editor. The easier it is to run the back end functions of the journal, the more time and energy that they have to put into the final product and actually think about the future of that journal. The person responsible for the new digital section, so it's not yet published, but yes, they have received submissions in HTML, including video image and some interactive data visualization written in JavaScript. OJS3 allowed the author to submit this digital work with these interactive features, which is great. One improvement, the ability for the authors to submit multiple images at once rather than one at a time. Where do we go from here? We really want to build on our gains. In OJS2, it was a hard sell. I'm approached by lots of student journals, our early career researchers. They're often very savvy using flexible platforms like WordPress. OJS2 really was a turn off, and I found it hard to sell this sort of service and product with them. I'm hopeful that OJS3 will be able to beat the bushes there and attract some new authors. Need to keep adding value to the service. Working on minting DOIs and analytics are going to be key, so usage analytics. Want to build up our services there. Really, I want to get out there and target both the willing and those that are unaware that libraries are engaged in the publishing process in this way. I'm mindful that I want to keep my messaging short and active, so really encourage uptake, not just tell them about OJS, but encourage them to actually use it. A learning experience from this journal team, they had some quite high demands. I need to manage those expectations and keep listening. Thank you very much.
Queen’s University Library maintains a Journal Hosting Service using Open Journal Systems. In May 2016 the faculty editor of the journal Encounters in the Theory and History of Education submitted a request for support with the creation of a new non-traditional structure. Moving beyond the text-based, linear structure of the traditional journal article, largely a legacy of the print era, this new ‘digital section’ would include a range of interactive materials drawn from all aspects of the discipline and facilitate interactive participation around a shared discourse by both authors submitting content and readers interacting with that content. Leveraging the new features and functions of the latest release of the OJS software, from version 2.4.8.1 to version 3.0.2, this lightning talk will provide use-case experience of: Building a non-traditional journal in OJS 3.0.2, balancing the needs and wants of authors, editors and readers; Making the leap and getting everyone on board: managing a successful upgrade experience; OJS 3.0.2 as a platform for non-traditional outputs in the digital humanities – perspectives from the editor; Developing an outreach program to attract new publishers on campus; Successes and lessons learned for the future.
10.5446/51331 (DOI)
It's a very special moment for me to be here today in Canada. It's the first time I'm so far from my home. It's not easy to speak in English, which is not my mother language, so I try to do my best. That's my handle there, also showing my Twitter. If someone takes a picture, please just tag me with this; my mom sees it in Brazil, she has a Twitter. OK. We use OJS in our scientific journal in Brazil, and in many universities. This is my state university. I'm a volunteer working with them; I'm not an employee of theirs. I work in a public research institution on social indicators. What is the context? UERGS is present in 24 cities in my state. It's a public state university. As a public university, there is no tuition; it's free to study. So all funding comes from agencies like CAPES, or CNPq, or my state's FAPERGS, which is similar to FAPESP in Sao Paulo. We have a, oh, I show where I come from. This red marker, down here, is my city, very far from Montreal. And we have a map showing the presence of UERGS. In every city, we have a campus to study, about 70 regions with more than 260 professors. So we have a scenario: you have to submit, assess, archive, and give feedback on research projects. Why is this a problem? Because funding is only granted to approved or assessed research projects. And everything was dealt with on flash drives, called pen drives in Brazil, and you may imagine that if you lose a flash drive, you lose the project. So we tried to manage it with a Gmail account. But we had a problem: researchers are very creative and always give the same subject to the email, "my project". So we had many emails with the same subject in the same Gmail account. It's just impossible. Or just a question: this is my project, right? The same subjects and the same emails are a great problem. So in 2015, we had to start the UERGS scientific journal, to publish in the same year. And we thought, oh yes, it's OJS 2.5.7, the old version, and it works best. And we started to think at UERGS, why not use OJS for research project assessment? We have the registered users, the authors and professors. OK. And we do not need to run new workshops. It's just, instead of sending your paper, you send your project. We have co-authors, we have collaborators, even translators. And we just have three attached documents: the project, the schedule, and the co-authors. So in the results for last year, 2016, we had 193 submitted projects with 166 qualified projects. And now we avoid the problem where the researcher submits a project and is always asking us, where is my project and why is it not approved or why is it not qualified? Everything is archived and the researcher may verify why his or her project was not qualified, whether it was a lack of information or a missed due date. The biggest problem was the dates. We have several dates, and the researchers were not respecting these dates. So if you send your project by the right date, with the right documents, with the right project and subject, perhaps you may qualify to receive the funding for your research. By funding, we mean scholarships for students to collaborate in research projects. Next steps: we need to move to a new server. We have the machine, but I do not have the political authorization to do the move. It's a problem such as we face in Brazil. We will then be able to move to OJS 3. I have made this update in the other journal I manage as an editor.
And the customized theme, thanks to Nate and the Bootstrap theme we now have; it's just easier to customize themes with Bootstrap. And thank you. That's my Twitter, Facebook, GitHub, LinkedIn. If you want to keep in touch with me, please help yourself. Thank you. Thank you.
The need to deal with the entire process of research project evaluation so as to meet the deadlines and requirements of the public notices that foster science is inherent to institutions of higher education. Regarding the State University of Rio Grande do Sul (UERGS), this process used to occur via printed documents and files on CD. Three years ago, with the new Research Coordination, the process started to take place by email and, in 2016, the Coordination suggested that Open Journal Systems (OJS) should be used as a way to expedite the processes that were still limited as to their management by email. OJS (known as SEER in Brazil) was already used for the Scientific Electronic Magazine of UERGS and started to be used in an innovative, creative way for the whole process of managing and archiving the research projects submitted to the Dean’s Office for Graduate and Research Studies (ProPPG). The process adapted the use of OJS, which was meant for scientific magazines, in a way that it could be used to receive the projects, review them, send them to reviewers and inform the authors about decisions, in order to optimize the time spent on the whole process and increase the network of project reviewers. After the implementation of this resource by the ProPPG, other Dean’s Offices at UERGS, such as the Dean’s Office for Extension, started using the system to improve their internal processes. The use of the system brought agility, transparency and legitimacy to the whole process of submission of research projects by the teachers of the institution as well as the necessary permanence within the public sphere.
10.5446/51333 (DOI)
Hello, I'm Leah Vanderjagt, I'm with U of A Libraries and I am our Repository Services Coordinator. I'm here sort of as an interloper, or an emissary if you will, from the institutional repository manager community, to make a plea for something from OJS journals, and from Open Access journals in general. So, authors at my institution approached me and they asked me to archive their Open Access articles in the repository because they want to boost the visibility of their articles. They want multiple entry points to their articles, and also, if we're going to be putting all of their work in the repository, they really want a complete picture of their whole CV in the repository, and this is all really human-driven work, so we can talk about the metadata aspects of what I'm about to talk about maybe after everyone's sessions, but I am focused here on what I think is a very basic human mistake that's going on right now. So, institutional repository managers make deposits in compliance with the contract terms that they observe in the articles through a variety of review practices, and this scenario here is really, really common, like really common. So, this is one paper that has a Creative Commons license on the table of contents, a very forbidding redistribution statement, like actually saying redistribution is forbidden by law, as well as a copyright statement that has no reference to terms of use, period. So, I can't deposit this, actually, because I don't know what I should even put in my repository for what the copyright statement is for this article, and like, ain't nobody got time for this. Like, then I have to send permission requests to all of these journals, who are like, why are you asking me this question, we're open access, or they don't know what a repository is, and that's surprising. It's just really, really expensive and wasteful for everyone, and our authors certainly don't understand this at all. I go back to them and say, well, I can't share your article, and they're like, what, it's open access, of course you can. It makes me look dumb. I don't like that. So, then in talking to our copyright librarian about this, this also drives her crazy with regard to course packs, because instructors come to her all the time saying, can I put this in my course pack, this is an OJS journal, and this is from an email that she sent to me about this problem. So, there was also a study that was done on this, where someone analyzed over 300 library-published journals in North America and found that fully 76% of them were inconsistently expressing copyright and in some cases not expressing any terms of use or copyright statements at all whatsoever. So, we just have to decide to fix this. Like, I really think we just need to together say okay: consistent copyright information everywhere, in terms of putting text in all the same places, is a completely achievable goal. So, just to make some specific recommendations for any library publishers or journals here in the room: in the about section of your journal; in your author agreement, which you should have as a separate document that can easily be downloaded from your website, easily found, please, not buried in a submission prep checklist; on your table of contents; very importantly, on the articles themselves, because the articles are things that are moving around and flowing all over the place; and have a Sherpa Romeo record. So, there's a to-do list, right?
Library publishers can ask their IR units to analyze their journals with a really risk-intolerant lens, and that's what we did at the U of A, and that gave Sonya a chance and a baseline to work from, to go to all the journals and say, look, you know, here's the comparison of what you've got up against DOAJ's requirements, up against what we would need for course pack sharing, up against what we would need for repository archiving. So, there is some good information you can get from that process that will help you move forward and really open up that conversation with editors. Okay, so like I said, we just have to decide to do this, so I'm going to ask you, on the count of three, to shout this with me, okay? One, two, three: consistent copyright information everywhere. Thank you.
Although it may seem an implicit truth that open access journals exchange their content easily with open repositories (institutional or disciplinary), in practice, those who manage open repositories often have difficulty locating, identifying and interpreting the sharing policies of open access journals. This results in blockers and inefficiencies in bringing OA content into open repositories - which are environments that can significantly increase the visibility of openly accessible research. Recent research by Schlosser (Schlosser, M. (2016). Write up! A Study of Copyright Information on Library-Published Journals. Journal of Librarianship and Scholarly Communication, 4, eP2110) revealed that 76% of journals in an analyzed, mostly-OA sample of journals did not have clear copyright articulation or sharing policies. This session will make recommendations to journal editors about articulating open archiving policies on journal websites. Furthermore, this session will also suggest strategies for institutional repository managers and open journal systems/library publishing managers at academic libraries to collaborate to inform journal editorial boards about the existence of open repositories, and help promote best practices for sharing content with these sites of open research discovery.
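A hedged sketch only: one way a repository manager might script the kind of copyright review described above is to harvest each journal's Dublin Core records and flag any whose dc:rights field is missing or lacks a recognizable licence marker. The endpoint URL and the list of licence markers below are illustrative assumptions, not a description of any institution's actual workflow or policy.

# Minimal sketch of a rights-statement audit over OAI-PMH dc:rights fields.
# Everything configurable here (endpoint, licence markers) is illustrative only;
# a real review would follow DOAJ requirements and local policy.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# Strings whose presence in dc:rights we treat as "clear enough" for this sketch.
LICENSE_MARKERS = ("creativecommons.org", "cc by", "cc-by", "public domain")

def audit_rights(base_url):
    """Split records into (clear, unclear) lists of (identifier, rights_text)."""
    url = base_url + "?" + urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    clear, unclear = [], []
    for record in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        identifier = record.findtext(".//oai:identifier", default="", namespaces=NS)
        rights = record.findtext(".//dc:rights", default="", namespaces=NS)
        bucket = clear if any(m in rights.lower() for m in LICENSE_MARKERS) else unclear
        bucket.append((identifier, rights or "(no rights statement)"))
    return clear, unclear

if __name__ == "__main__":
    # Hypothetical endpoint -- substitute the journal's real OAI URL.
    clear, unclear = audit_rights("https://example.org/index.php/journal/oai")
    print(len(unclear), "of", len(clear) + len(unclear), "records lack a clear licence marker")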
10.5446/51339 (DOI)
I'm Alex Garnett. I don't need this mic, do I? I need the mic. Okay, I'm Alex Garnett and I want to talk to you all about Texture, our new WYSIWYG JATS editor. This work also had major contributions from Juan and Kasim, so, not to be remiss: it's just my name on the talk, but they've been a huge part of this project. So as you all may know, for the past several years we've been working on a project called the Open Typesetting Stack. Oh, it comes out. Yeah, okay. Which, until very recently, was a fully unsupervised automatic parsing solution. It was never going to be perfect. We claimed that for a little while; we didn't stick to it very long because it wasn't a very plausible lie. But it was very important to get the parsing as good as we could, because we were wary of trying to take on the huge project of making a good WYSIWYG editor to go from an 80% automatic, best-in-class open-source parse from Word or PDF to JATS PubMed Central XML. Going from there to 100%, we can publish this; it's actually production quality, better than the status quo of outsourcing your XML typesetting. We wanted to actually get a good editor in there to close the gap. We were afraid to do it ourselves because the risk of making a chocolate, orange, and lettuce no butter bodega sandwich was too great. If we want to make something, we want to actually do it right. We don't want to make it like a sociopath who doesn't know what a real sandwich is. And again, absent actual UX/UI design for a WYSIWYG editor, these things can happen. So we're very pleased to announce that as of last week, we have a very new release of Texture, which is a new WYSIWYG JATS editor from Substance, the developers of the eLife Lens, which, for those who have seen that, is the default XML viewer in OJS3. It provides a complete full-text XML workflow. We've integrated it into OJS3 with a plugin. It works standalone, and we hope this will encourage a transition away from Word typesetting. You can basically get your document into JATS XML and edit it even more easily than you could in Word, earlier on in the typesetting process. No more manipulating Word tables until the very end, when you send it off to the publisher and they send it off to somebody else and it's a whole waste of effort. This is a way forward, we hope. It's a lightning talk, so I have just this slide with a lot of features on it now. So we can kind of look through these. I can do a demo later on if anybody wants one, but it's got things like configurable views. We can hide certain elements from authors or from editors to make the interface better suited to either marking up a document or authoring from scratch. We can act as a guide to tagging, like, hey, abstract goes here, or a plain authoring environment. It can do double duty. There can be an offline version pretty soon. It's a JavaScript app, we can just package it. So for anybody who's hung up over kind of authoring into a browser, you've got a workaround for that. We can import any valid JATS, including, you know, the 80% output of our parser or any article currently in PubMed Central, for example, and export along the JATS4R convention. So it's JATS, but it's kind of an opinionated subset of JATS to make a huge XML schema more usable. It supports collaborative editing, like Google Docs, so you can actually have your simultaneous editing feature, which is, again, something else Word does not have. We're better than Word now.
It stores weak links to external services locally, so that if you pull in a DOI from Crossref and, for whatever reason, the metadata changes remotely, it'll get pulled in automatically. So there's no just pulling down raw citation text strings. It supports raw XML editing, so it replaces Oxygen, right? You know, if you have a professional XML editor who's used to doing their work in Oxygen, they can have a nice WYSIWYG 95% of the time, with a 4 to 5% fallback when they want to change an attribute in one tag; they can still do that. So it really fits existing workflows and enforces context. The editor cannot produce invalid JATS. This is something, again, that doesn't really work in Word-land. Anytime you're changing a section heading or indenting it, that actually nests the section in the article. So good stuff. It's very smart, we hope, and it is not a chocolate, orange, no butter, lettuce sandwich. Thank you so much. Thank you.
Microsoft Word's dominance as an authoring tool creates substantial inefficiencies in the scholarly authoring ecosystem. Many journals and journal management platforms are designed around uploading and downloading incrementally updated drafts of Word manuscripts, creating a difficult-to-manage ecosystem of individual change-tracked files and annotated PDFs. For most end users, there is no sufficiently easy to use or widely accepted alternative to this. Yet, when it comes to publishing, the scholarly publishing industry has (mostly) settled on a structured format—JATS XML. This disconnect between the tools and formats used for authoring and the formats required for publishing has meant that, for several decades now, manuscripts received from authors will need to be entirely XML-typeset by publishers at considerable expense. Texture is a WYSIWYG editor app that allows users to turn raw content into structured content, and add as much semantic information as needed for the production of scientific publications. The primary goal of Texture is to remove this requirement for XML expertise by providing a solution for publishers to bring accepted papers to production more efficiently. Texture reads and produces valid JATS files. This allows Texture to work seamlessly in existing publishing workflows. The Public Knowledge Project has continued to develop their Open Typesetting Stack (OTS) application for automatically transforming Word or PDF articles into JATS XML. We currently have an alpha plugin for integrating OTS into our Open Journal Systems publishing platform; this plugin includes Texture. Our solution, using the Open Typesetting Stack and Texture, aims to address the impracticalities of trying to "reverse-engineer" an author's work in Word while still supporting a polished, professional typesetting workflow.
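To make the JATS side of this concrete, here is a minimal, hedged sketch of the kind of article skeleton a JATS-aware tool reads and writes. The element names (article, front, article-meta, title-group, body, sec) are standard JATS tags, but this trimmed-down structure is illustrative only and is not PKP's or Substance's actual output.

# Minimal sketch: building a skeletal JATS XML article with the standard library.
# Only a handful of standard JATS elements are used; a real JATS document carries
# far more front matter (contributors, permissions, identifiers) and a back section.
import xml.etree.ElementTree as ET

def minimal_jats(title, abstract, body_paragraphs):
    article = ET.Element("article", {"article-type": "research-article"})
    front = ET.SubElement(article, "front")
    meta = ET.SubElement(front, "article-meta")
    title_group = ET.SubElement(meta, "title-group")
    ET.SubElement(title_group, "article-title").text = title
    ET.SubElement(ET.SubElement(meta, "abstract"), "p").text = abstract
    body = ET.SubElement(article, "body")
    sec = ET.SubElement(body, "sec")
    ET.SubElement(sec, "title").text = "Introduction"
    for paragraph in body_paragraphs:
        ET.SubElement(sec, "p").text = paragraph
    return ET.tostring(article, encoding="unicode")

if __name__ == "__main__":
    print(minimal_jats(
        "A hypothetical article",
        "One-sentence abstract.",
        ["First paragraph.", "Second paragraph."]))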
10.5446/51341 (DOI)
Welcome to the University of Montreal. Good afternoon. So we're now going to start the official program for our 2017 PKP conference. To get us started we have a few official welcomes and I'd like to call the Dean of Arts and Sciences at the University of Montreal up to the podium here. You can't have a conference at the University without having somebody wearing a tie. Now we have two actually going on stage. I just want to say a few words. I'm also, so I'm Dean of Arts and Sciences, but I'm more importantly, I'm Chairman of the Board of Iridzi who is of course deeply involved in development of publishing of scholarly research in Quebec and in Canada. But more importantly, I'm a philosopher. So let me say a few words as a philosopher. You're dealing with issues that are especially whether it's technical or more in terms of principle you're dealing with issues where human beings are especially bad. Human beings are very bad thinking about the future. And you're dealing with the future not just in terms of what's coming next, but in terms of how do we transmit knowledge through generations in a way that is sustainable, whether it's economically or socially or politically. You need to be very forward looking when you're dealing with any issues in the either publication or dissemination of knowledge. And that makes your challenge huge because Homo sapiens is really bad thinking about the future in a constructive way, I should say. We can think about the future, but usually it's not very constructive. And that means you need to convince a lot of people that the tools you're developing or the processes you adopt are worth distributing at large. And that takes me to the fundamental mission of universities. So universities are very good thinking about the very long future. Why? Well, because the fundamental mission is of course the creation and transmission of knowledge, but it's fundamentally about human flourishing. So universities are deeply optimistic. We think that if we co-create, transmit, share knowledge, it's for the betterment of humankind. So that's a noble mission and something that you won't find many enemies. There are very few people saying we're against human flourishing. And those that believe it won't say it or will lie about it, but it's easy to find partners. So this is the place where you're at where you're dealing on temporal frames where the species is maladapted. You're dealing with issues about the future of publishing or the future, more fundamentally the future of knowledge transmission. And that makes it very difficult to deal because human beings are not good at that. But you're dealing within an ecosystem, more importantly universities that are in some sense built to think about the deep future. And I want to end this very impressionistic, philosophical amusing about what you'll be doing with a wish or a hope, and if you work at Université de Montréal, an exhortation. But don't forget the students. So you're dealing with in some sense you'll be talking about economics or you'll be talking about coding, you'll be talking about standards, you will be talking about various aspects of publishing. But keep in mind that if universities are to be the honest partners with what you're doing, it's much easier for deans and for other people wearing ties or boring clothes to get on board if you've got a very clear and powerful story as to how it helps the human flourishing of our students. So this will take different forms depending on the organization you're involved with. 
But if you can think about how to get more students involved, not necessarily advocacy; advocacy is nice, it's important. But beyond advocacy, in terms of, you know, do they feel like they own the means of production, or do they feel like they can contribute knowledge in an easier fashion by helping you out, you'll be on much stronger footing with your respective universities or university partners. Because you're talking about the future, you're talking about transmitting knowledge through generations, and you're talking about human flourishing in a very concrete way for universities, and that's the human flourishing of our students. So I hope you all have a great meeting and that you share a lot of interesting and subversive thoughts. But don't forget, whether it's in your conversations or in your future plans, how you can better involve students in your projects so it contributes to their human flourishing and to our collective human flourishing. Thank you very much. Thank you, Frédéric. By the way, I forgot to introduce myself. I'm Brian Owen, the Managing Director of the Public Knowledge Project. But now I'm going to pretend that I'm here on behalf of Simon Fraser University, where I'm the Associate Dean of Libraries at the SFU Library. And as part of that, it's been, I think Frédéric was alluding to this, PKP and Érudit, and I want to give a big shout out to Tanya and all of the people at Érudit who are co-hosting the conference here in Montreal with us, and all of the local arrangements you've been enjoying so far have really been undertaken by Érudit and some of their staff. Is Gwendol anywhere here? He's probably out busy doing something else, but it's been fantastic, the support we've gotten from them. But I do want to just sort of disentangle some things, because in some ways we've got two very large universities here in Canada that are represented as part of this endeavor, the University of Montreal and Simon Fraser University. At the same time, we've got two very active and significant projects that inhabit that scholarly publishing ecosystem, Érudit, which is housed here at the University of Montreal, and the Public Knowledge Project. And I want to say a little bit about that because in many respects the Public Knowledge Project, we do a lot of our activities in largely a virtual environment. A lot of the people that work on PKP are scattered all around the world. They work in different time zones, which presents some interesting challenges, but we're quite accustomed to working in that mode, which means maybe once a year at best we actually get together at events like this and have an opportunity to actually talk face to face. I've met at least four people that have been working with PKP for the past year; this is the first time I saw them in person. And that's great, and we really enjoy working in that environment. We have a lot of partners, other academic institutions. They range from some of our big development partners, all of whom are represented here: the University of British Columbia, the University of Alberta, the Ontario Council of University Libraries. We have the University of Pittsburgh, that's another major partner, and I could take five minutes to list all of the other partners. But the key thing is all of us that are here in one way or another probably have an association with a larger academic institution.
I realize I'm not sort of bringing greetings from SFU, but it's a very good opportunity for me just to acknowledge and express our thanks and appreciation for how large academic institutions are often instrumental to how we can conduct projects like this. More often than not, PKP can't just exist purely in a virtual environment. We actually have to deal with the real world. And we've been very fortunate since 2005 at SFU Library, but more generally Simon Fraser University has been willing to provide that home base for us. So all of the mundane and bureaucratic things that we often need to deal with, human resources policies and procedures, financial systems and budgets, procurement, legal details, MOUs, agreements, research grants, it's SFU as an academic institution that provides that home base for us. And that's a really key part of all of our activities. So rather than bringing greetings from SFU, I just wanted to use that as an opportunity to express the appreciation of PKP, and I'm sure that Érudit has a similar very strong relationship with the Université de Montréal. So that will conclude the opening comments, but now I'm going to segue to my next role here, which is to introduce our speaker today. And I could get off easy and say, here's a man that needs no introduction and you know who he is, but no, let's not take the fast, cheap, easy way out of here. John Willinsky is the founder and director of the Public Knowledge Project, and that goes right back to 1998, which is almost 20 years ago. When John was, no I'm not going to say a modest, humble faculty of education professor at the University of British Columbia, but nonetheless he was there, and that's where the whole genesis of public knowledge began, and that concept that it was really important that the work that comes out of the academy should be more widely available. That in turn led to what became a very interesting software project. John has continued to be our director and in many ways our visionary, and a lot of the things that drive PKP come from John. But he's got a few official designations too. He's the Khosla Family Professor in the Faculty of Education at Stanford University. Quite coincidentally, and this ties back to my earlier comments about SFU, he also happens to be a part-time professor in the SFU Publishing School, which is a very critical linkage and part of, again, the important role that SFU plays for us. Probably many of you know John as a very ardent and persistent advocate for open access. John's publication The Access Principle, in 2006, is one of those hallmark monographs that people often quote or reference when they're talking about the development of open access. But John continues to be a scholar. And I'm going to give a plug for his latest monograph that was just published in 2017 by the University of Chicago. And that's The Intellectual Properties of Learning, which is basically a multi-century review of intellectual foundations, concepts like the commons. So John is more than just the director of PKP. He's a man of many talents. Now we made the mistake, possibly, when John said, so what am I supposed to talk about today, and based on past experiences we said, what the heck, John, the sky's the limit. We may regret that. I think we did ask, perhaps you could tie in the odd reference to PKP or Érudit or any of the other folks around the room here. And I'm sure you will. So without further introduction, John, I will turn the floor over to you. Thank you.
Official Welcome from UdeM and SFU at the PKP 2017 Conference.
10.5446/51343 (DOI)
I do. He was one of the earliest in terms of conceiving of this movement on the internet, and one of the first to draw the historical analogy with Oldenburg and the first journal. And I certainly agree with Alec on the notion that we need a sense not only of our own community gathered in this room, but a sense of a larger world that is moving towards cooperatives and that is thinking about alternative forms of economy. The whole notion that another world is possible should be thought of as something that is already here, and already now, and it's represented here. So I want to thank everyone for that. But I really think we need to acknowledge the two sitting here in the corner, Kevin Stranack and Brian Owen. I want them to stand up so everybody sees what these two people have made. This is the fifth time they have organized one of these events, and the third time they've organized it remotely. But no one could ever say they remotely organized it, because it was very direct and immediate in terms of this experience. I do have a count, and it was, in a day and a half, 34 presentations. If you check your gauge, you will see your brain is full, and that you can take the rest of the weekend off. Because your commitment and the, just the whole scope, I want to point out to you the scope and the range: the global dimension that's been represented over the last day and a half, the range of ideas and points of entry. When we opened, excuse me, Frédéric made this plea, this exhortation, as he called it, and somehow the University of Montreal has an exhortative nature to it, that we consider the students, that we take into account the students. And in all the time we have presented in these conferences or held these conferences, we've never had as much student representation: undergraduate, graduate, and high school level. And this idea that we're building a community beyond the traditional academic bounds. So opening with the notion of students, closing with Mickey, is a very encouraging aspect. And actually, let's go back to the sprint. So let's make it from Wednesday, Thursday, and Friday. We have had our hands on. We have gotten dirt under our fingernails. We have built things in this period. And we have talked about large principles and important ideas. And all of that needs to be acknowledged. But the other side of Brian and Kevin is Tanya and Érudit, and as Dominique also mentioned this morning, our partnership, which 10 years ago you couldn't imagine happening, Dominique, always in the corner of my eye was this dream, that the two founding peoples, always omitting, unfortunately, the First Nations, but that there would be a kind of uniting. And this aspect, let me ask Tanya to stand up so you see Érudit. And Gavin, are you? All of Érudit. Perhaps I don't even know. Oh, yes, yes. So this is a thank you very much. You are our host. Thank you very much. That we are at the University of Montreal because of this kind of partnership and this kind of association. And this idea that we are working together is a very, very important one. So today ends this conference. But many of us who have only met online will go back to that online existence. Many of us who started projects and ideas here will continue and follow up on those ideas. And some of us will forget that we met and will reintroduce ourselves, as I'm very good at doing, two years or three years from now at the next event. So please allow for that.
So let me just close by saying I want to thank everyone, not just the organizers, but all of you who have participated. We had over 60 at the Sprint, which is the largest ever for us. I think it should be clear to you that we are an open project in many, many regards. And we are open. In fact, one speech, one of the lightning talks, had some criticisms about the aesthetics of OJS. She wasn't able to present. We are open at all points to emails, to contributions to the forum, to any kind of ideas about how we can do more to make the available knowledge, or excuse me, to make the knowledge that we know, what we know about the world available to everyone, thank you very much. The conference is over. Thank you.
Closing remarks from PKP Director John Willinsky.
10.5446/51344 (DOI)
I want to welcome everybody in my tie. It makes me feel like a dean now, after Frédéric's comments. To our conference, to the University of Montreal, to Érudit, to PKP's partnership, this really represents a culmination, but not the completion, of our work together. We recognize Tanya and the other team members from Érudit; we have been working together for a good number of years, and this is like a coming home for us. We worked to establish a conference together, to be in the same room for the last two days in the sprint, and to look forward to future projects in this collaboration. So this is an old Canadian story between the East and the West, the Francophone, the Québécois and the rest of Canada, and it's one I'm very proud and interested in continuing. We did the sprint in probably not the most interesting building in all of the University of Montreal in the last two days, so what a pleasure to be in this building. This is industrial design, and I just want to pause for a moment on that notion, because what we are doing with PKP, with Érudit and with other related projects is, we are the future, I'm hoping, of industrial design. The industries of the future will be more open, they will have less smoke and pollution, but they will be very, very committed to design, to notions of how to solve problems, to notions about how to make people's lives richer and maybe easier in some ways, and to how to be more inclusive on a global scale in a way that the last industrial revolution wasn't so much. So this idea that we should meet in places like this and recognize their history is very, very important to me. And I want to do a little bit of history. Brian gave you a bit of it, the 1998 start for this project, when we had no idea what this would mean. It was before there was a term open access; I was voting, as many of you know, for "free to read", and we lost. And open access raises all kinds of questions around who owns it and where it's going, but I'm happy with the growth of it and I want to talk a bit about that in a moment. I also want to recognize the benefactor, the role of society and the academy. Frédéric introduced that notion of our responsibilities, particularly around students, and I want to address that as well. I also want to get a little philosophical in honor of his own tradition and discipline. But this idea that we have a responsibility in terms of our design and in terms of our thinking is an important aspect for us. And our benefactors, that is, those who support this work, those who have trust in this work, are not always just the federal government up the road, are not always just the provincial governments, but in fact our private benefactors. So the Public Knowledge Project, and many of you, or some of you at least, will know this, was started with an endowment from Pacific Press. I know, it owns both of the newspapers in Vancouver, and in the true spirit of democracy, as I've noted many times, you have a single owner of all the newspapers; it takes care of a few of the questions that sometimes arise. When I assumed the chair, or the title of Pacific Press professor, the press was owned, sorry, Pacific Press was owned by Conrad Black. And I'm very happy to report that he has now been released from prison, and that all of that is behind him, and he refused at the time to give us any more financial support, thank God, or I would have had to join him perhaps.
But at any rate, the generosity of the benefactor, the no strings attached gift from Pacific press, who thought that there was some future in technology that the Vancouver sun had not yet recognized completely, and I'm not going to take you through the story of our stumbling attempt to work in terms of the technology and research online with the newspaper, but I want to recognize that that kind of support, and the way in which we need to look to the larger world to gain that support, and the way in which we enact that trust is a very important part. That was 1998, no software at the time in terms of our thinking. I originally thought we would convince the world by argument that the internet in the 90s, if you can remember the 90s, was even more chaotic than it is now. Everybody was stumbling, everything was kind of distributed on different file systems, and everybody was proposing different ideas, and mine was that we could distribute all of knowledge. What we learned is that people in 1993 had already started email journals, and that that process was spontaneously underway. And what we tried to do is to build a platform for others to take a step into this new world. We didn't start a journal, and originally we didn't even host. In fact, in principle we didn't host. We only built the ladder for others. We only wanted to be a vehicle for their expression. We only wanted to move journals online in the 90s. Do you remember that when that was the question? That moving a journal online would automatically drain it from any integrity and value and worth? That was the argument in 1999. And we thought that had to be overcome by some manner in order that we could begin to talk about a wider global distribution. Because even then it was obvious that the worldwide web, as it was the information highway, was going to be global. And that was an important aspect and opportunity to have knowledge that we hadn't experienced before. Let me jump ahead to 2007 in terms of this history. Because in 2007 the PKP team decided we needed a conference. And we held our first conference in Vancouver in 2007. With about, actually there were seven more people than are here today, I don't know why. But it was an event where we started to build the community in person. And Brian's experience of working with people, never having met them, we realized needed to have a periodic time when we could meet people. When we could get together again with people we'd met four or five years ago and have an opportunity to sit down and talk with them again. In 2007 we had approximately 3,000 journals using OJS. But that's an exaggeration because I know in 2003 we had almost no journals using OJS. And only now we can retrospectively see that in fact, apparently in 1991 there were hundreds of journals using OJS, no in fact their content is from 1991. But in that period, that is from 2001 when we released the software with zero journals, to 2007 it was a slow one journal at a time gain. Now I would love to be able to announce today that at 2017 that we're at 10,000 journals as we've announced three or four times in the past. But I can't. We keep improving the accuracy of our count and we are currently as of this morning although Juan may be updating it. Okay, we'll get an update in a few moments. Wait, there's a call from New Brunswick. Sorry, I need to take this, just one second. I don't know whether I'm Lady Gaga or on a call center somewhere. 
At any rate, if you can imagine how many times I've heard those words cross Juan Pablo Alperin's lips. 10,000 and 35 journals. Alright, 10,000 and 35 journals. Globally distributed, Juan will be talking later about some of the demographics we have about those journals. But this idea that there has been a movement in terms of the access to knowledge since that initial period of 1997 and on to 2017, I want to focus on it in a particular way. I want to say that this is not a critical moment so much as a tipping point. And I want to draw attention to a few of the major players in that. But first, let me give credit to Fred Rieck's point about students. Because in the last month, we have had a revitalization of the high school peer reviewed journal. In 2003, I want to say, at Gladstone Secondary School in Vancouver, we had our first high school journal. Started by Sarah Toomey, a teacher there. It was all girls. I was very proud of it. They decided to call their journal in 2003. It's still online, it's been archived, it's not been active since 2003 or 2004. They called it the Pink Voice. Now, they didn't do a Google search in those days to see whether who else had the Pink Voice, but just think for a moment, it probably wasn't the most appropriate title for a high school journal. In the two years that I worked with them, and Sarah led them, I just stepped in every once in a while, the most amazing development was the first issue in the first year had a column called Fashion News. The second year, they changed that to Philosophical Views. Fred Rieck would be very happy. Inside that, which I opened with great anticipation, would it be John Locke, Wittgenstein, Nietzsche, it was love. They had an advice column that was peer reviewed on issues of love, and they called it Philosophical Views. This month, we had 14 journals launched in the Bay Area by a number of high schools, and in fact one middle school. I'm working with a student at Stanford right now to build a journal in a box for high schools that will enable them to very easily assemble, convince people to serve, and undertake this kind of commitment at the student level. Because in the country that I'm currently working, we have a great need, a huge vacuum, if you like, which nature abhors around the truth, around verification, around authority and trust around knowledge. The idea that high school students would think about how can we verify what kind of checks and balances. And if I didn't get an email on Monday that said one of the articles in the high school journal had been plagiarized, and someone had detected it, and they wanted to know what checks and balances were in place to prevent this in the future. The journal only had one issue. Mind you, it had about 85 articles. So this idea that we are building a culture on a very old tradition. Jean-Claude Guidant first brought our attention to Henry Oldenburg, who started the journal for the Royal Society, or not for the Royal Society, using the Royal Society in 1665. This tradition of trust and authority, very much in need today, very much an issue for students to begin to be engaged in. Every library that hosts OJS, and fact, every library that hosts B-Press, has student journals. And when we do the counts, when I work with Kevin, we have to sometimes work through hundreds of student journals that we want to separate because people are thinking, oh, they're not, you know, they're student journals. 
But of course, when you work at Stanford, there are different kinds of students and different kinds of journals. And I work with one, you're welcome to look at it called Intersect, an undergraduate journal of science, technology, and society. And they came to me this last quarter in the spring and said they'd crossed the 150 citation point for their journal of undergraduate students without any self citation. And this idea that students can begin to contribute is a very important part. But let me focus on my topic for today. You did not get a title. Brian did a nice promotion for the book. It's not out yet. The galleys are sitting on my computer. I need proof readers. It's my least favorite task. I squirm. I cringe because in proof reading, you can only make the slightest, smallest changes. And I want to rewrite the book every time I read a sentence. But I'm going to share with you some of that because this conference has been critical for that. In 2007, I didn't have the book. But in 2009, the next conference we held in Vancouver, I had started on this book. I sound like I'm proud of the fact that it's taken me this long. I'm not. But it has been something I've kept coming back to. And it allows me to step back from the work of the sprint, from the work that we've been doing around building code, the document support that we've been doing, the book support that we've been doing to talk about this larger sense of purpose, the historic place of our work. Now, those of you who have attended all of the conferences, you know your next one is free. If you get your card stamped at the door on your way out, then the sixth conference, you get a free one. Kevin's looking aghast at that concept. But let's hold that idea perhaps for a moment or two. In 2009, excuse me, when I first started, I did a whole thing on John Locke, an intellectual property. And John Locke is an English philosopher who provided a theory of property that was not so much original to him, but that had a sticking power, the timing, the need for an idea of where property stops and where the commons begins, or rather maybe the other way around, where the commons begins and where private property encroaches on it, and what are the rights and limits of that property. And Locke has been fundamental to that and nobody cites Locke more than intellectual property experts, because his notion of ownership is something that you only have a part of, that you earn a part of something, that you have a property claim on something, and that there is no absolute ownership. There are only earned or rights in terms of those claims are very important. But that was 2009. In 2011, I talked about the monasteries, the role of the commons as the origins of the sharing of knowledge, and how in monasteries there was no private property, and how the monastic life based on the benefactors, on the gifts and donations of Pacific press, of the church, of the nobility, created a space for learning in medieval Europe where it was nowhere else available. And that idea of the protection of learning is a very important idea for today. And that's why I want to transition to the chapter I'm presenting today. This is like before you go to bed, you get another chapter every two years of this, and then you won't need to buy. There will be an open access copy of this book. We negotiated with University of Chicago Press, they had never done this before. They had a very weird notion of what open access is. 
You could put your file online as long as you changed the title and all of the words in it. I negotiated them down from that. We'll have the actual title of the book, the intellectual properties of learning. But let me give you this context of why this is so important. The battles that the monastics fought to protect their autonomy, to protect their support from benefactors; the battles that were fought by the Bodleian Library (I actually gave a talk on this at SFU, as part of my project at SFU, or my appointment at SFU), the Bodleian Library, which had been very reluctant to buy any books and was based almost entirely on donations of illuminated manuscripts, and which established a contract with the publishers to give one free copy of every book. I'll come back to that in a moment. That was in 1600, the beginning of the 17th century. I'm going to jump to the end of the 17th century, because I want to talk about the origins of intellectual property, and I want to relate it to the Elsevier Napster moment that we're having today. Now, I and Elsevier are not working in cahoots, but the timing of yesterday's purchase, or this week's purchase, I don't really know, of bepress, their purchase of yet another of the players, speaks to the issue. So I want you to keep in mind the following facts about where we are on this open access quest. Here in Montreal, one of the leading centers of information science on this continent, we have established, and with some help from people like Juan as well, we have established that the open access point we're at is 50%, or 45%; I read your abstract for 2015, but there's enough argument between 45% and 50%. Half of the literature is available. Your chances of getting an open access article are 50%. Your doctor, looking at your symptoms, looking for what is new, has a 50% chance. Now, some of us still think that's kind of ridiculous. Others are thinking, my God, it's up to 50%. And the official count, and Juan's recent study with Vincent and others, have established around 35% to 38% for the whole of the literature, and for the recent literature, sorry, in 2015 in particular, this 50% point, about three studies seem to converge on that aspect. That's one factor of the tipping point. Second factor of the tipping point: Elsevier has announced a surprising fact. I love the way they did it. You should look this up. They say, here are some surprising facts, and they have a light bulb going on. And one of those surprising facts is they are the second largest open access publisher in the world. Exactly the response they were expecting, and my first response too. Elsevier, second largest open access publisher in the world, and it only took 40,000 articles to become the second largest publisher, in 2016, excuse me. Third fact, Elsevier as well. It won a $15 million suit against Sci-Hub. It sued this young researcher; again, we can be careful about some of the facts on this. Alexandra Elbakyan is being sued by Elsevier as the key representative of Sci-Hub, and she certainly has owned up to that. Questions, Jean-Claude has raised some of them very recently, about who's behind it all. In fact, you suggested a spy novel perhaps could arise out of such fabrications around Sci-Hub. But Sci-Hub has established, and the research that came out this month, sorry, last month in July, has established that almost all of the literature online is available through Sci-Hub. This idea that all of a sudden the subscription model has had the life drained out of it is part of this tipping point. 
And that Elsevier on the one hand is very proud of being the second largest open access publisher, and goes to court to establish its ownership, its outright ownership, its treatment of that ownership as absolute, in a non-John-Lockean manner. That it is a corporate asset; that all of the work that we have done as researchers, all of the work that we have organized and provided for others through libraries, is a corporate asset. And Vincent Larivière's research has established that five publishers are now in possession of close to 50% of the published literature. Think about these 50%s: there is a 50% ownership, there is a 50% open access, and that is the kind of tipping point. And there is a sense of chaos that I want to come back to, a chaos around what model going forward for open access. Is it the APC? Is it the institutionally sponsored journal? Is it going to be owned by the second largest open access publisher? Do you have any predictions about who is going to be the first and largest open access publisher in two or three years, and yet who owns 50% of the literature at the same time? Now, I set that out on the table because it is definitely a tipping point, definitely a teetering in terms of our future. That is, our future as scholars, as members of this institution, as people concerned about the future of students, and the future of the veracity of knowledge. And I want to provide a historical analogy. I do want to go back to John Locke and create a comparable situation, and I do want to pull some hope from that. Because at some points, some of the responses to bepress's acquisition by Elsevier, on Liblicense, on Scholcomm, and some of the other listservs, were people throwing up their hands and saying, what can we do? So the period I want to return to is the turn of the 17th century. What I want to provide to you is the idea that learning held its own at the most traumatic moment in the history of publishing, which I want to portray as that moment when the origins, or the legislated state, of copyright was put into place, when authors were first recognized in 1710. Locke is dead by then, but he plays a role, and I want to use Locke as the scholar-activist, as the public defender of learning. And I want to make an argument that says we have a reason for activism, and a reason for hope, and a reason to be realistic, and a reason not to be naive about who's at work here. Joe Esposito, not my favorite consultant in terms of publishing, although he took me out for lunch 30 years ago, weirdly enough, in New York, Joe Esposito said last night in response to the bepress sale that it just shows that Elsevier is smarter than everyone else. Now, I hope he did that as a provocation. It certainly applies to him, but I think at the same time it is an exaggeration. And we want to think about the possibilities in terms of these historical precedents. So in 1695, actually in 1693, let's start with John Locke at the beginning, the Licensing Act for books is about to expire. The 17th century had a copyright. It was very much the law in the 17th century, but it belonged to the Stationers' Company, the printers and booksellers. There were no publishers; they were printers or booksellers. There wasn't Elsevier then, or rather there was another Elsevier that today's Elsevier picked up the name of, and that Elsevier was a bookseller. And this idea, a little bit later than this though, this idea that they could own monopolies as a privilege from the crown was copyright. 
They had perpetual ownership, the printers and booksellers, throughout the 17th century in exchange for censorship. The state, the crown, and the church wanted censorship. They understood the power of the press. In turn, they granted the publishers, as we might call them, the booksellers and printers, granted them monopolies. Not just monopolies on a single title, but monopolies on all of law. Monopolies on schoolbooks. Monopolies on all of Milton's work. Monopolies of any kind or sort. The stationers' company was happily granted those in return for censorship. And this law passed repeatedly, going back to Henry VIII, of course, as everything does, in the 16th century. This law came up for renewal in 1693, and people were fed up with it. This was the 1662 version, an act for preventing the frequent abuses of printing seditious, treasonable, and unlicensed books and pamphlets, and for regulating printing and printing presses. You've got to admit that there's an honesty. It's not called leave no child behind. It's called we're going to stamp out any form of treasonable works. The universities, even then, were recognized. Oxford and Cambridge were allowed to print on their own permission. They did not need the stationers' company in London, and there was a lovely divide between the London printers and booksellers and the universities. They were a world apart. Now, it may only be 55 minutes on British Rail today, but in those days in the 17th century, Oxford and Cambridge were considered to be so far removed that we can let them print whatever the heck they want. And so it had been, but everywhere else there was regulation. And in 1693, after the Glorious Revolution, and there was the first Bill of Rights a year before, people were fed up with book licensing, John Locke in particular, and Locke's battle, he had a very good friend, Edward Clark, who was a member of parliament. He began to engage in lobbying, not something that philosophers often do, and maybe Fred Rieke would be someone to do that. He began to engage in lobbying through Edward Clark to stop book licensing. He felt that the licensing of a book was to gag the freedom of speech. He felt that it presumed you were guilty even before you had spoken. He felt that anything that could be said should be able to be published. You can sue for libel. You can sue for blasphemy. He had no problem with that. Locke was quite a religious man. But the idea that everything had to be licensed first was contrary and worse than that, it made learned books really expensive because they were all published under monopolies. Because these monopolies were perpetual, because they kept out foreign books, because you could not undertake a new translation. Locke himself did an ESOP fables, could not get it published, because his printer did not have a license for fables. So this idea that censorship was critical to academic freedom and academic freedom was related to the price of access to knowledge is a critical aspect. He lost. It was renewed in 1693 for two years. But in 1695 he was back at it and they were able to prevent book licensing. The book licensing act of 1662, the seditious and treasonable unlicensed books law, was not renewed. And for the first time in the history of printing, well not quite. There was a little period in the early 16th century when there wasn't. For 150 years of censorship and monopoly ended in a single day. The next day was a reader's paradise. There had been only one newspaper in London, the official cassette. 
The next day, I'm exaggerating, perhaps not the next day. It took them a day and a half to set the type. But within a decade there were 18 London newspapers. If you waited a day, any newspaper that was printed would be available the next day by someone else who reset the entire newspaper and sold it for half the price. It was Sy Hub Galore. And from 1695 to 1710 it was an unlicensed press state. A field of academic freedom that England had never known. Holland had had the whole time, but England had never known. There was, I have to admit, some obscene literature published. Continuously on a 24-7 basis for 15 years. The Crown faced a lot of criticism. Everything that was published was republished and pirated. And the publishers, the printers and the booksellers were upset. And every single year they lobbied. They paid, they went to Parliament and they asked for new bills. Please relieve us of our misery. Every Elsevier of the day. And there was an Elsevier by that point. Every Elsevier of the day begged for a return to licensing. Can't you see they would gather up cartloads of obscene material and dump it on the doorsteps of Parliament and said, This is your freedom of the press? And the members of Parliament would take a few, go home with it, check it out. In the meantime, they changed their strategies. In the meantime, they started with the licentiousness of the press from 1695 into the 18th century and complained about the obscenity and got nowhere. Because having 18 newspapers made for a vibrant democratic spirit. The members of Parliament found people that were in support and against them. They gained new ideas. They were able in fact to see that there was a different kind of democracy through that process. And the press was exasperated. The printers and booksellers looked for new strategies. Now, in 1695, Locke had made some proposals that had not been accepted. In 1695, Locke said, and he does this in a beautiful way, he takes the author and the learned reader. And he starts to balance their interests. He says for the author, the problem with a monopoly is its perpetual. That impedes learning. It discourages further criticism or critical use, critical additions, new translations. On the same time, we need an incentive. So Locke actually proposed that the property in 1695 should be vested in the author. They don't have an absolute ownership. It should be vested in the author for a period of, he originally starts with 50 to 70 years. Now, I give you that figure because it's amazingly prescient. Do you know the Canadian US situation right now? Exactly. 50 Canada, US 70. Whoa, spooky. Let's get back to the facts. That is a fact, by the way. But the second time he proposed it, again unsuccessfully, through Edward Clark, he left it blank. And he just said for a blank number of years. And what I think is a very democratic spirit and I think a very smart lobbying technique, in which you allow the parliament to make some of the decisions while you insert a principle that will protect learning. Because he balanced that author concern with the reader's concern. No restrictions on imported books. The principle that Bodley had established a book deposit needed to be honored because it hadn't been. The idea that every book that was published should go to Cambridge and Oxford, one copy free into these outer learned economies that were separate from London, had not been honored. 
And Locke wanted it reinstituted in a way that said, first, if you want to publish a book, it doesn't have to be licensed. It has to be deposited in the library. We need to guarantee scholars access in order to ensure the value of the work that we do. And he proposed, not imposed at all, proposed that unsuccessful but still kind of clever solution. He got involved in the mechanics. He was the Heather Joseph slash Michael Geist of his day. Involved in proposing legislative approaches, but it all went for nothing. Jump ahead, this unlicensed state, this chaos to 1705, 1706 when Daniel Defoe, remember Robinson Crusoe comes on and he says, you know the problem with this unlicensed state is that it's creating chaos and it doesn't encourage learning. And this phrase, the encouragement of learning became critical. The publishers, the printers and booksellers picked it up and they said, we've got something here. What if we said, we don't care about the licentiousness, we don't care about the piracy, we're thinking this is not to the encouragement of learning. And within two years there was a bill that was successfully passed in 1710 that was an act for the encouragement of learning. And that carried Parliament, that principle. Now there is a debate among historians that if the publishers, if the Elseviers of the day put forward encouragement of learning is the reason to vest the ownership in authors for 14, it could be doubled, there could be renewed to 28 years, was it really for the benefit of the publishers? Who did profit after the statute of Anne and the first copyright law was instituted even though it mentions the authors or their assignees, it was the publishers, the big publishers, definitely gained and profited. Did the authors thrive? Well, when you look at the actual clauses of the act, what they include is book deposit, one copy of every book to nine libraries. Scotland was now part of the Union. Edinburgh was my native. And the North was recognized. So there was an idea, that idea of preservation for academic purposes was protected. A second one was instituted and this was the idea of pricing, that the university, the administrators, the provost of the university could roll back high priced books. Now this was dropped, but it was an amazing, it was about 1735, I want to say 1737. So for a period of 25 years and the argument it was never really used, but even the idea that the universities could roll back book prices, that Parliament had passed such a law. And that law had been proposed by the booksellers and that Parliament had overwritten the booksellers' interests at a number of points, that they had tweaked and twigged the nose of those publishers in order to protect learning. Is this moment of hope that I want to talk about? Another aspect, let me just briefly introduce, another aspect was that after 14 years, no matter who owned the material, it went back to the author and could be renewed for another 14 years and could be sold. But the idea that you could at some point as an author take back your work and edit it, take back your work and revise it, was built in to the origins of copyright. And this possibility of balancing the rights of the author and the rights of learning in each of these clauses is a powerful concept. In a law that was not only sponsored, but was paid for. That is at the time the tradition was whoever was backing the bill had to pay the costs of printing of the bill and the management of the bill. 
And the stationers company is on record for having sponsored the Statute of Anne in 1710. The results of that, I mean the interesting story is about the Bodleian Library being overwhelmed by the number of books it received over the 18th and 19th century and all those kinds of aspects to it. But it was generally a turning point. We think of it as the age of modern copyright. We think of the author's rights as being instituted, but we don't give enough recognition to the rights of learning, the protection of learning. And so what I want to say that today, in looking at our current tipping point and looking at the might of the oligopy that is the five major publishers, and looking at the ownership of intellectual property, and looking at the future of these models, that the reasons to become involved as an activist, the reasons to start a high school journal, the reasons to support open access and open data and open science wherever you find it, there is some encouragement for that. It was a time of unlicensed printing at the turn of the 17th century. It was a time of very powerful printers and booksellers. It was a time when they were ready to exploit learning to achieve their financial goals, to ensure their profits, and through all of that, learning managed to establish its place in the law. Now, much of that has been lost and much of that has been protected. Legal book deposit. There are countries all over the world, including this one, that have legal book deposit. Not every university library gets a free copy, unfortunately, but we do have a national library where these materials are still preserved. We have exceptions under fair use. My own personal feeling about this is that we need a separate category, that in this book that I've been working on for way too long, and that will come out, I'm hoping, in December, if not January. In this book, I talk about the intellectual properties of learning as a distinct class. There is something different about the work of learning, whether it's a research article, whether it's a learned book. There's something different about how it has been funded, supported, invested in, and how its value is realized by a society. And the balance that John Locke tried to strike between the commons of learners and the interests of the author for 14 or 50 or 70 years is exactly that sense of balance. And so when Conrad Black made his donation, actually it was the people before Conrad Black, but when the Pacific Press made its donation to start the Pacific Press, Pacific Press made its donation to start the Public Knowledge Project. There's that same kind of relationship between the larger world, that same kind of balance, and same kind of accountability, and level of responsibility, so that all of the work we do, we make it public because we owe it to the public. We make it public because we are accountable to the public. And we make it public because we find it will have only have its value through that public circulation, through the broadest, widest possible circulation. Each of the projects that you see presented in what follows. Yesterday we had the lightning and hail of Montreal. Today we have the lightning talks of the PKP conference without the hail and a little dryer. And each of these presentations that you see, you'll see the results of the sprint today at four o'clock, where people were developing code under open source software principles in order to enable others to find a platform to share what they know and to help others to share what they know. 
In all of that I want you to see that we are reenacting a history of intellectual property, but more importantly, a history of that relationship between learning and the world. Thank you very much.
John Willinsky opens PKP 2017 conference.
10.5446/51345 (DOI)
Okay, my name is Obia Julo. I have a presentation from my university. I will start with some geography. Yes, this is where I'm coming from: I'm coming from the northernmost part of Norway. At my university, we have five campuses; I just want to show you how the university is spread out. We started using OJS in 2003. I don't know many of you, I've seen a lot of new faces here, so when we started, there were only a few people who attended the Vancouver conference. We have 15 publication channels. Because we have different types of publications, we have some challenges, and we had to find out how to solve them, because we felt that OJS has some limitations, especially in templates. We had to redesign and modify certain things in OJS to suit our needs. We use HTML/CSS stylesheets and it's responsive. We are talking about the OJS 2 version, but we are moving to 3 and we are doing the same thing. One of the challenges we have is the thematic organization of articles in a journal or in a journal section. We came up with an idea, what we call fake metadata-only articles, so that we can organize the articles in a section. I just want to show you part of it. You see, there can be a lot of articles there and we want to organize them thematically. This is what we came up with. The problem is that when we do such a thing, what we call the fake data, some harvesters harvesting our site pick it up, which is not very good. So we have to find a way of doing it so that they will not harvest it. That's what we are doing for the meantime, but we are trying to find a way to solve the problem. And we have journals that would like to upload videos, and they would like them to be streamed and to reside in our journal system. Today we have all these videos on another platform, but we want them to reside, as I said, in our OJS. So we are planning, we have started doing the job, trying to create some specially formulated links in our HTML to refer to these supplementary files. We are also planning, or we have started, creating a plugin that will permit the direct presentation of videos that are uploaded just as an article. But we are moving to OJS 3 and we are going to do the same job. So if any of you have done it already, just give me a tip. And we have some interactive exercises. We have some journals that have already developed JavaScript-based exercises that we would like to integrate in OJS. We have not done it yet, but we are looking for a solution, how to integrate it in OJS, or maybe also writing a plugin to do the job. And today we have installed OJS 3.0.2 and we are customizing it, and we are waiting for the completion of the Norwegian language package. They have not finished it; there are some groups doing the job, so it has taken a long time. And we hope that before the end of this year our production server will be upgraded to OJS, probably 3.1, coming soon. And that's what we are doing. Thank you for listening.
In 2003, UiT introduced the use of OJS for its publishing service, Septentrio Academic Publishing (SAP), which hosts 15 publication channels, whose diversity offers some challenges in the use of OJS. OJS is template-based, but it has its limitations when it comes to front-end design. SAP is confronted with creating journal-specific front ends that reflect the brand of each journal. Presently, the SAP homepage is implemented using HTML/CSS and is responsive. In addition, there is the issue of thematic organization of articles in the journal sections of SAP. For instance, to derive thematic headings in the table of contents, “fake” metadata-only articles were introduced, so that article titles would correspond to the headings. The problem is that these “fake” articles are exported to and harvested by various services. So, a proper functionality is needed for the thematic grouping of contents within a section. Further, some journals in SAP would like video/audio to be streamed from and reside in SAP directly. Therefore, we are considering two alternatives: (1) how to create specially formulated links in our HTML to refer to supplementary files, and (2) how to create a plugin that permits the direct presentation of videos uploaded as articles. There is also a need for interactive exercises in SAP, e.g. in language textbooks. It would be beneficial for us to integrate such (JavaScript-based) exercises in OJS and stop signposting learners to an external website. Notwithstanding all these challenges, we are planning to upgrade to 3.x and look forward to its improved functionalities.
10.5446/51350 (DOI)
Okay, good morning. My name is Suzanne Jay. I'm going to take my four minutes to talk about a couple of strategies that I put into place for building greater sustainability for a student-run journal. So, over the last year, I served as a managing editor for a student-initiated and student-run open access journal using OJS, publishing the scholarly work of students at the UBC iSchool. When I decided to use this slide, I thought it might have my name and email address on it, so I would get away with not creating a title slide, but it actually just has the email address for the journal. But that's okay. So this is a graduate program where students typically graduate with a degree in two to three and a half years, depending on whether or not they're doing a single or dual degree, and I'm a student in the MLIS program. So, as you could possibly guess, the See Also journal began life in what I've come to learn is a very typical creation story, which is: a group of students, or a student, decides that they'd like to learn about scholarly publishing, or they want to have a journal. So they start, then they gather all their friends, there's lots of enthusiasm at the beginning, they publish a journal that has lots of content, people are excited and happy and proud of their achievement, and then they graduate. And they sort of bestow the journal on the next person, on a person that they find, that they convinced to take it over. And that hapless person is left with some documentation; sometimes the OJS documentation is very helpful, but you have to learn it. There's a very steep learning curve. So they're left with, as you know, learning OJS, learning peer review, recruiting a team, discovering that members of their team need different types of education and professional development, convincing students who don't really want to show other people their work to submit their work, and then trying to convince people to do peer reviews that are actually helpful and kind. So it seems typical that student journals, and I think many of you who are in the library understand this, many student journals will peter out after the second or third year, and I'm the third editor in the third year, and I could be the journal killer. So I inherited this journal, but I also inherited from the outgoing editor a set of ideas for developing sustainability, and I put some of those into action and I added a few more ideas. Some of the ideas include an editor's toolkit that includes recommended timelines for recruiting, promotion materials, job descriptions, and short guides or cheat sheets that help specific members of the editorial team do their jobs. But what I'm going to focus on for the next couple of minutes is a really valuable thing that I inherited and then sort of grew, had to foster a bit and probably need to foster more. I got an advisory committee. I think there were three people on it, and one of them forgot that they were on it. So in the time that I had, I recruited a few more people, so it's now a five-person group. And I promised them that they would never have to come to a meeting with each other. So the team includes two members of the teaching faculty, one from libraries, one from archives, a UBC liaison librarian, the student services coordinator for the school, and Kevin from the PKP team. Is that actually five or is that six? Anyway, it doesn't matter. 
So having a committee really helps with sustainability because it provides skills, credibility and improves the institutional memory because it resides with those people even though they don't have to attend meetings. So professional development was a really key reason why students took on a role with the journal. So I was able to offer two workshops, one with Kevin, and these were open to everybody who was in the school. They didn't have to commit to the journal at all. But one of the workshops was with Kevin to learn about using the OJS. And a second workshop was with the archives faculty member who also happens to be the editor of Archivaria. So Jennifer Douglas conducted a peer review workshop and she's committed to do another one for the next editor, for the next journal, and that's already scheduled. So, and of course I'll be booking Kevin again for next year to help out the next editor. So one of the things that I did fail to secure was someone to help lead a workshop on copy editing. So if you have any leads, please come see me and send them my way. I want to pass that along to the next editor. And then, oh, okay. I don't know, if you're up here, you're going to see this sign that says please stop. You've used up all your time. So I will. But I think I'll just wrap up here by saying thank you to the PKP community for creating this tool and also for the effort to build a community. One of the things that I did do was I made the editorial team come to meetings. And that was really important to have that face-to-face time with each other to do some problem solving and also do a little eventing with each other. But I think the community really does help support the independence and access to scholarly work and gives students an entry point into that world. So thank you, everyone. Thank you.
Lightning Talk proposal: Open Journal Systems is used by over 10,000 journals world-wide. Library, archives and other information professionals are central to the publication and dissemination of scholarship. Digital journals are increasingly used and valuable to collections while student journals have proliferated in recent years. Early hands-on experience may also help new information professionals develop a deeper appreciation of the value of open platforms. Library, archives and information students who gain early experience with open access journals have an opportunity to develop confidence and skills to contribute, create, use and may be more likely to include open access journals in collections they develop. In 2014, UBC students created See Also: The UBC iSchool Student Journal as a way to showcase student scholarship and to get early career experience with all aspects of scholarly publishing. Starting a new journal is exciting, but what does it take to build in longevity or sustainability? This talk presents a case study focusing on year three of a student-run open access journal.
10.5446/51351 (DOI)
And now Alec over to you you have a couple of minutes go All right, so this is gonna be Chaotic and part of that's because it's gonna be chaotic anyway But we're gonna make it more chaotic by bringing up each of the sprint groups to have I was gonna say two to three minutes Now it's gonna be about 45 seconds each so Here's the sprint what we did is basically we got people together who had some Interest in working on projects and some skills to contribute we stuck them in a room and it's kind of a microcosm of how we do Things around the PKP community. We have people with too much work not much time a lot of pressure Maybe not the the people they need necessarily, but the people they have and we put it to work I feel a bit like we've just Given you a sausage meal and now we're gonna take us a little tour of the sausage factory Which is not there's no good order to do that in but here we are anyway. This is how we make the sausages This is how you write the software The rules for a sprint are there is no homework This that was a rule that I maybe didn't articulate well enough at the start because now we have homework But you shouldn't bring a homework home from you when you're finished Everyone's welcome. So we have coders non coders people interested in Everything from design to documentation to actually getting down writing some interesting code And you bring your own projects BYOP. So you come in at the start We'll set up the the groups based on who's present what they want to work on and we'll get to it So after a day and a half We went through a number of different projects If the sprint participants who have slides on this could make their way forward, I will start with mine and we'll see who else Has to say a few words. Yes, don't be shy come on up Okay, so one thing we did is we had a request from the forum. 
That's our user community. The request was for a way to make a metadata field required, not just available but required, and that request will often make its way into a GitHub issue, which is where the coders will do their work. As you can see, we've created a GitHub issue and we've linked back to the forum so that somebody who's asked for this can say, oh, well, there's something here that's now specifying in more technical detail what's involved. We added a new column to one of the setup forms to flag a field as required, so your language, your rights, your source, whatever you have, and finally we hooked it into the system so that now, when you submit an article or when you edit metadata, it checks those requirements. We then went back to the forum and said, hey, it's here. You'll notice it says at the very top, in very gray letters, six months later. So we don't always get to it as fast as we would like, but that's a really typical example of how a request from the community turns into a bit of work, turns into a feature that's added, makes it into the software, and then gets back to the user in the first place. Book sprint. All right, so we had a very big group with four of us, and we were really focused on updating a document that's called Getting Found, Staying Found. This was first released in 2006 and was written by Kevin Stranack, and it's really a guidance document around some of the best practices for open access journals. So what we wanted to do is look at the document, pull out different sections and update them. Here are just some of the sections that we wanted to do, and we had people converge on these specific sections. So we did things like ORCID, encryption, security, the PKP Index, dealing with library union catalogs, and so on. So there was a lot of content produced and a lot came out of that too. Where we're going from here is to pull all this information together and revise the new document, and to align it with the documentation roadmap that another group is going to tell us about, because some of it may or may not fit within this document. And we had a lot of great participation, and thanks to all the people who allowed us to exploit their intellect. Hey, so I was part of a small group of folks who were focused predominantly on the issue of documentation, and the architecture of our documentation, sort of optimizing it for usability, which actually ended up being a conversation about all the things that are wrong with our documentation in general. Chief among them, that there was some documentation in the wiki, there was some documentation in GitBook, and it wasn't really clear what was where. So Jana and Janet did an amazing job of combing through the wiki and seeing where all the content is redundant. So we have an actionable item: maybe before we bury the wiki, at least update the wiki to point out which software stack that information is relevant to, and give people a sense of what documentation is currently deprecated, so we know which parts of the wiki are useful or not to migrate to another platform. We've had a lot of conversations internally about using GitBook. We started using it two years ago at a sprint, actually; Marco and I were part of a team that tried to get that going, and GitBook has had a lot of issues. So we were talking about where we want to put that material. So today, actually, Alex, Kevin and myself did a quick run-through, an environmental scan, of other places to host documentation. 
I think the likely location is Read the Docs. That's my homework, I believe. And then we inventoried what, you know, sort of things we immediately wanted to put in, and the best part is we got a huge amount of recommendations on how we might want to describe that information. So we have a problem where we call everything documentation, but maybe it's better to call things below three pages a guide, and something above that a documentation or document, so we can have people find what they're looking for a little bit easier. Usually, as we all know, you used to look for something and you'd see something entitled OJS in an Hour, and it was really OJS in the summation of the rest of your life. And so this way you needed something a little bit more expedient; you just needed a guide on how to, you know, help users do something. Maybe these guides would be a better way to go. So there's actually a shocking amount of homework, but the team did a ton of great work in providing us all the information we needed to do that intelligently. That was earlier than I expected it to be. So some of you may have noticed that there are some elite hackers out there who have taken the opportunity to upload profile pictures of themselves saying I hacked OJS, which is roughly the equivalent of, if Alec is collecting name tags, me handing my name tag to him, writing some profanity on the back of it and saying I broke your arm. But it annoys people anyway, so we fell back to one of the requests that some journal managers at Pitt have been asking for for a while, unrelated to this, which was the ability to mediate accounts, to approve a new user on the system before the user can use the system. So we implemented that in 2.x, and an untested pull request in 3.x, and you'll see that in 2.4.9 and 3.2. Hi, my name is Ilyas. So we've been working around the open data systems that might appear or get developed in the next years. Now, the main issue we have in our field is that people keep doing experiments and other people keep doing the same experiments, because the experimental data is not available in the materials and mechanics fields. We also have another issue, which is that experimental data itself is not valued as often as a publication. Basically, the experimental data's purpose is to write a publication; after that, the experimental data gets lost. No one knows where it is, and we keep looking for it, never finding it, so we just redo the same experiment. So there are currently several initiatives that are showing up. 
We were able to discover them through this workshop. It was really interesting to exchange with different people about these different initiatives that are trying to be data repositories at the Canadian federal level, or also in some universities, for example. But the data sets still don't have any kind of value even if we store them, so we thought it would be interesting to make them citable, so that they can be as valued as a publication. So the way to implement this would be to fork OJS/OMP to create ODS, an open data system. That would help us go through the review and evaluation process of a data set until it's actually citable and we can get clean metadata for it. So in the next slide we're just showing what we kind of built, which is the ideal review process for data. And that would be per field, so in our field, mechanics and materials, it would look like this, and we'd have an ODS for our field. But then it would be necessary to develop other ODSs, sorry, open data systems, for each field. That would ensure that the metadata necessary to share that data and make it usable by others will be available through that review process. So yeah, the review process would be done in two parts: an internal one, just to check the data formats and file formats, to be able to build metadata out of it and make it usable by others, while the other side would be the external review, just to check the quality and the content, whether it's actually understandable. After that, this data set with its metadata could then be shared in one of the repositories I was talking about at the beginning, one of the federal initiatives. I was able to talk faster than you. Okay, so our group was responsible for coming up with some solutions for internal workflow statistics. Because, I don't know if some of you have noticed, but the internal statistics, not the ones that are about downloads or accesses, but those that are about the editorial workflow itself, have some maybe strange numbers showing up sometimes. So we figured out, well, the goals were to identify desirable statistics to be reported from journals, improve the journal-level statistics, provide more options other than what we already have, and identify statistics that are not being produced correctly. So the solution we came up with was, first, we would improve some of the CSV files, the reports you can then import into Excel, and we also came up with a dashboard kind of thing that would show some of the most important statistics in an at-a-glance fashion, with maybe some graphics updated on the fly. And so for implementation, we made a list of required fields for those statistics, and filters as well. So we've added some more filters and date ranges, and we sorted them by priority of development. 
So which ones we would like to go first, and so on. We also did a benchmark with OJS 2, taking into account what it already has out of the box, so we would just add the ones that aren't there yet, and we also were able to identify a long-standing issue that was causing some wrong data being pulled from reports on OJS 2. So this is mostly at the documentation level; we didn't do any coding yet because the group was primarily non-technical people, but it's a good start. So my group was pretty small, just two of us, Demetris and myself. I've been working with Demetris for the better part of a year, and this was kind of cool because it was the first time we met in person, so that was interesting. The specific problem we want to solve is converting the export XML that you get out of OJS 2 into a form that can be imported into OJS 3, and the specific use case we have is that we have over 500 journals in the PKP Private LOCKSS Network now that have OJS 2 XML, and we're anticipating having to take them out and make them publicly readable again in an OJS 3 instance, sometime in the near future or maybe 15 years from now; we don't know for sure. There could be other use cases for this kind of conversion as well. We didn't get as far as we wanted to, but the solution we're working on is: we want a script to make it completely automatic. You import some OJS 2 XML into an OJS 2 instance, you then upgrade that instance to OJS 3, and you export the XML as OJS 3 XML. It's kind of a roundabout way of doing it, but we think it's got some legs and we're going to pursue it after the conference. Hi, so one of the new and exciting features coming with the next release of OJS is the REST API. Basically this feature will allow scripts or people to interact with the OJS system without going through the user interface. So in order for programs to be able to query OJS, they need to authenticate first, so instead of having our own authentication stuff, we decided to comply with an open specification, which is JSON Web Token, and this integration was successful, so that part is ready. Then, we know that OJS is made of plugins, so in order for the plugins to take advantage of the REST API, we had to discuss the ability for plugins to extend it via hooks. So with Nate I was able to discuss a reliable architecture to prevent plugin conflicts, so development for that started this morning. And finally, another member of our group has worked on a document to outline requirements for the OMP API. So this is, I didn't really make a pitch for this, but it's something I had already started before: there was a need for editors to be able to customize, or personalize rather, their journals, and a lot of the default themes are kind of not really distinguishable, so I think there was a need for new themes. So right now I'm working on six themes, and during the sprint I had the chance to speak to people who are actually working on OJS 3, so I had some more specific questions. Alex showed me around the code, but there was no code then; it was mostly making mock-ups, and I worked a bit with John as well. 
So anyone can comment on it and check it out that'd be really cool Actually and helpful to get points of view from different people and the second part actually on which we John and I spent the most time was Just kind of reworking the logo to the bottom of it. I think there was something that was a source of conflict maybe not conflict but So yeah, so we just did that and yeah, so some eye candy pretty much Yeah Thank you, so I can't underscore the size of the brain trust we had in that room and being as we do work remotely We never had the chance to meet each other never let much less the committee members and the community at large So it's a really rare opportunity for us to get some interesting work done and figure out what everyone's like as a person as well Not just as a screen name Watch the pkp blog the link is here for some detailed reports that will provide some more details on everything that we accomplished and some links to Results and that sort of thing so you'll be able to find out more if you're interested and see what legs each one of these sub projects has And finally, please consider joining us next time We have roles for techies non techies everyone if you you won't necessarily be able to bring a specification and have somebody write it But if you're interested in getting your your knuckles dirty, then that's a really great way to do it We try to hold them in spring and fall generally speaking although this year that we're only doing one issue one event So consider watching for the next event to be announced on the blog pkp blog and coming out
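One of the sprint groups above describes a three-step conversion pipeline for legacy content: import the OJS 2 native XML into an OJS 2 instance, upgrade that instance to OJS 3, then export OJS 3 native XML. A minimal sketch of how such a script might drive that pipeline is given below, in Python. It is not the group's actual script; the install paths, journal path, user name, and the exact command-line arguments of the NativeImportExportPlugin and upgrade tools are assumptions for illustration, and should be checked against a real install (for example via php tools/importExport.php NativeImportExportPlugin usage).

    # Hypothetical sketch of the import -> upgrade -> export pipeline described above.
    # Paths, journal path, user name, and CLI argument order are illustrative assumptions.
    import subprocess
    from pathlib import Path

    OJS2_DIR = Path("/var/www/ojs2")   # scratch OJS 2.4.x install (assumed path)
    OJS3_DIR = Path("/var/www/ojs3")   # same site after switching the codebase to 3.x

    def php(cwd: Path, *args: str) -> None:
        """Run a PHP CLI tool inside an OJS install directory; raise on failure."""
        subprocess.run(["php", *args], cwd=cwd, check=True)

    def convert(ojs2_xml: str, journal_path: str, user: str, out_xml: str) -> None:
        # 1. Import the legacy OJS 2 native XML into the scratch OJS 2 instance.
        php(OJS2_DIR, "tools/importExport.php", "NativeImportExportPlugin",
            "import", ojs2_xml, journal_path, user)
        # 2. Upgrade that instance in place (the 3.x codebase points at the same database).
        php(OJS3_DIR, "tools/upgrade.php", "upgrade")
        # 3. Export the migrated content as OJS 3 native XML.
        php(OJS3_DIR, "tools/importExport.php", "NativeImportExportPlugin",
            "export", out_xml, journal_path, "issues")

    if __name__ == "__main__":
        convert("journal-ojs2.xml", "myjournal", "admin", "journal-ojs3.xml")

Driving each step through subprocess keeps the roundabout route the speakers describe fully scriptable, so the 500-plus Private LOCKSS Network journals could, in principle, be converted in a batch loop rather than one by one.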
Reports from the development sprint groups on their activities the previous day of the conference.
10.5446/51352 (DOI)
Well, my name is Pedro Lopez. Right. I have a very strong relationship with OJS. I'm still learning about the platform, not only about the processing, the policies or the metrics; now we're learning about accessibility, specifically with OJS. So the topic for today is about OJS. It's not technical, but, as I just said, it's to show two new tools that we're working on to make the platform better for people with any disability, for example a visual disability: things like adjusting the text, showing contrast colors, and maybe reading an XML file with the system, because the article XML is not for people, it's really for a machine. Well, this is the object of this topic. So, yes, I think there has been some talk about inclusion, and I saw a few minutes about web accessibility in open access journals indexed in the Directory of Open Access Journals. And, well, the results of that research are very impressive, and the numbers: all the platforms with OJS failed the test. The team is working a lot on developing tools, and we have to get a score and contribute. To show you how, well, this is a video. [A video from the World Wide Web Consortium's Web Accessibility Perspectives series plays, on video captions:] Video isn't just about pictures, it's also about sound. Without the audio you'd have to guess what this film is about. [muffled audio] Not knowing what's going on, that's the situation for everyone who can't hear. [The clip is replayed with captions:] Video isn't just about pictures, it's also about sound. Without the audio you'd have to guess what this film is about. Frustrating, isn't it? Not knowing what's going on, that's the situation for everyone who can't hear. Captions make videos accessible, which is also handy for people who want to watch video in loud environments, or where you need to be very, very quiet. [End of clip.] Well, it's a short video, but it shows the Web Accessibility Perspectives work of the consortium, and the first topic was about captions. Captions are not only for people with a disability, sorry, they are also for people like you and me in a noisy environment, for example, where we can read the titles, and maybe read them very well, on the device, a special device. So accessibility is not only for people with disabilities, and not only for us humans, but for devices too: the browsers, the computers, the players too. On this site you can see a video player for video with captions; you can increase the font size of the captions. It's certainly a very interesting tool, and the people who were working in the consortium try to develop new tools, different tools, that we can see and use, for example. Back to the presentation, please. Thank you. Well, this is the sample; you can see it again with the icon, if you look at the icon on your screen, on the right. Thank you. What you can see at the bottom I developed earlier. You can see at the top of the video player a group of different controls: you can see the captions, for example, you can change the language of the captions, and you can see the transcription, if you look at the bottom, please, you can see all 
the text that person is talking in the video you can copy and paste maybe there are many tools and the accessibility is kind of the sound and you can change the language in real time and the caption to change too so this is a web accessibility it's not for us it's for devices it's for in this case for the video player thank you and then with some of the presentation please thank you so the web activity is a set of rules and norms that the moral and consortium publishes and the official website and with time given for it all this is a large document it's a big document but it's very interesting how you understand the accessibility in all the sites or websites well this is the results of our graphics for the alessandro amines about the research in the open access journals and how the ogs is the most used or indexed in the language and he he found that 100% of the open open-site access journals present web accessibility issues all of the journals we have to change that number it would be a lot of work but I think we work together and try to instead of to do something different every day we can change that number not only for the open-run assistance in the world of life too but what this is an example that we were working about the new plugins and the new components can you do we have that blue is the blue sample of the team to review maybe the kind of publishing an expression or in the future to be available available in the next version of ogs in the sidebar the left right side you can see two volumes the first one is for increase or increase the text it's similar when the ogs two version is not really a prototype the code for that version and yeah it is very historical for the text and the second one can you it's about the contrast you can change different colors and all templates for to be using the great content it's a better version to the jet but we are working a lot to evaluate this this this project and you have a atop the article the the list the full text you can see the video of a vmioc xml a dutuf and a mp3 this mp3 is a really is an xml file for machine and we have to work in a with a pleasure to interpret in the in the website but if you can see the the audio and infiltration during your crop sequence under no tillage summary soil properties very over time with soils presenting different susceptibility to runoff and erosion during the year under no tillage cropping soil physical properties could change the engine thank you you heard is not natural speak this is very machine but is the because i'm an xml uh just so we can we can take all the you have often put the the mark of xml and use any tools to to to build time in between files right well we have to work a lot we need developers and others to do web designers especially with the opportunity already anyone who wants to contribute for this new blue means if you contribute with us i i promise if you copy your team and as if you make it right well this is uh in the presentation thank you very much enjoy
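The MP3 galley demonstrated above is generated from the article's machine-readable XML. As a rough illustration of that idea only — not the presenter's actual tool — the minimal Python sketch below pulls the readable text out of a JATS XML file with the standard library so it could then be handed to any text-to-speech engine; the file name, the tag selection, and the final TTS hand-off are assumptions for illustration.

```python
# Minimal sketch (not the presenter's code): extract readable text from a
# JATS XML article so it can be fed to a text-to-speech tool.
# "article.xml" and the tag choices below are illustrative assumptions.
import xml.etree.ElementTree as ET

def jats_to_text(path: str) -> str:
    """Collect title, abstract, and body paragraphs from a JATS article."""
    root = ET.parse(path).getroot()
    chunks = []
    # article-title and abstract carry the front-matter prose;
    # itertext() flattens inline markup such as <italic> or <xref>.
    for tag in ("article-title", "abstract"):
        for el in root.iter(tag):
            chunks.append(" ".join(el.itertext()).strip())
    # Body paragraphs hold the main text.
    body = root.find(".//body")
    if body is not None:
        for p in body.iter("p"):
            chunks.append(" ".join(p.itertext()).strip())
    return "\n\n".join(c for c in chunks if c)

if __name__ == "__main__":
    text = jats_to_text("article.xml")
    # Hand the plain text to whatever TTS engine is available
    # (a library or command-line tool) to produce the MP3 galley.
    print(text[:500])
```

A real implementation, as the speaker notes, still needs a player on the journal site and handling for multilingual content, which is part of what contributors are being invited to help with.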
With the upgrade of OJS, we can read content on a mobile device and have different themes for a better web design. But will this increase visibility and positioning so that any person with internet access can read or consult the papers in a better way? Is OJS really for everyone? If OJS has been upgraded, we have to do the same: we have to think differently and work differently. We need to innovate for people with disabilities by designing, developing, and implementing tools that facilitate reading, by customizing site colors and texts, and by facilitating access from the keyboard. These are just a few examples from the User Agent Accessibility Guidelines by W3C.
10.5446/51355 (DOI)
Hi everybody, I am Vanessa Robert and I am the electronic publication manager at the University of Pittsburgh, and I am also the current president of the Library Publishing Coalition. I'm here today to talk about the Library Publishing Coalition and how library publishers are doing big things by working together. First I'm going to talk a little bit about what library publishing is, why libraries are engaged in this space, and who benefits when libraries get involved in publishing. Just a quick check around here: how many of you identify as a library publisher? Okay, so quite a lot of you. So some of this is going to be familiar to a lot of you, but hopefully there will be some new takeaways in this presentation as well. So what is library publishing? What sets us apart from other types of publishers is that in addition to the publishing activities that we perform, we also have a set of values, such as a focus on openness, inclusivity and sustainability. We are driven by mission and not by profits, and our goal is to serve our communities and improve the scholarly publishing ecosystem for all. Serving our community means bringing these key characteristics, which are very familiar to libraries in general, into our publishing activities in a new way. We are willing to experiment with new publishing models and new content. We are looking to facilitate learning, provide core services, and engage in partnerships with our community. Why do we publish? Library publishers publish for a variety of reasons, and it's not always the same for every publisher, but in general we have an interest in the access and discovery of knowledge and in preserving the scholarly record. For many academic libraries, our role is beginning to shift away from the traditional model, and so services like publishing and data management are some of the ways to continue to demonstrate our value throughout the research lifecycle. We're also supporting research that is coming out of our institutions: the majority of library publishers are focused on publishing content from their own campuses. Publishing activities in the library are also usually subsidized, and this means that we're not dependent on cost recovery and can publish work that others might not, such as niche research, material of local concern, voices marginalized by traditional scholarly publishing, and new forms of scholarly output. Many of us in the library community are focused on developing and implementing infrastructure and practices that benefit authors and readers and contribute to a more equitable scholarly communication system. Who benefits from library publishing? What's the time? Okay, so I'll just quickly talk about who benefits from library publishing: humanities and social sciences and other highly specialized areas of study, for which there are extremely small markets, even by academic publishing standards. Also student publications: there's not a strong market for many student publications, but it's a great learning experience for students and it gives a home to their academic output. Also journals that serve practice-based disciplines such as public health or social work that may not find a home in traditional publishing. And the vast majority of library publishers who were surveyed in the 2019 Library Publishing Directory indicate that open access is a primary motivator for their library publishing programs.
And that aligns with our core value of fostering openness in the scholarly publishing system. So, just a brief history of library publishing. Many libraries got their start by partnering with university professors who were already involved in the publishing process. Also, early on in electronic publishing, libraries were involved with things like Project MUSE and HighWire Press. SPARC was formed in 1998. And then in the early 2000s, platforms such as OJS began developing, and this gave us a venue, a platform, for our publishing activities. This was a huge boost to library publishing, because we can't afford most of the commercial online publishing platforms, so having open source options was a really big boost. The LPC was then formed, and here we are today, where open source infrastructure is proliferating and can be found everywhere; we have a lot of options to choose from, and those options are getting better and better all the time. So what do libraries publish? Some of the early publishing that was done at libraries was theses and dissertations, preprints, grey literature, conference proceedings, technical reports. Now we see a lot more of library publishing in journals and books. A big one now is open educational resources. Data sets — things that support their institutions and their scholars and make knowledge open. Here are some examples from some of our members of the various types of publishing that they are doing. So what kind of services do library publishers provide? Some library publishers are able to offer a home for the journal through a platform like OJS, giving that platform to the journals, but other library publishers are able to provide a wide variety of services in support of these publications. And these publishing services, and what it takes to create and output this content, are things that libraries already know a lot about, so the expertise that is needed to provide these publishing services is something that libraries are in a great position to provide. One of the ways that I often describe what I do to people — which, as you know if you are a library publisher, is very difficult to explain to people who are not in this space — is that the editors of all of the journals are experts in their fields and their disciplines, but they might not know a lot about publishing. As library publishers, it's our job to be the experts in all of those extra things. So you need skills like copyright and fair use; project management, not just for your library publishing program, but teaching the journal editors to have project management for their journal; and advice on their publishing workflows. You need to be experts in a variety of publishing workflows so that you can work with your individual editors to figure out what works best for them, their resources and their discipline, as well as content acquisition — conducting peer review, building an editorial board, and soliciting submissions. Library publishing units can be housed in a variety of places within the library: digital scholarship centers, cooperations with university presses, and other offices. Some common questions that we face are how to decide what to publish and what not to publish.
Do we publish only things produced by faculty on our campus — students, or faculty only? What kind of services is the library willing to provide, and for which services do we need to charge? How do we handle open access, what rights will the authors retain, and how will that content be made available? Also, what role will equity, diversity, and inclusion play in our program and in the content that we publish? And, very important, how to sustain it: how to build infrastructure and sustain the library publishing program over the long term. One way to do that is by developing a business plan. Many library publishing programs start out informally: a faculty member comes to the library — a place they're used to coming to for help and support — and says, I want to start a journal, or I have a journal, can you help? So these may start out as pretty informal things, but for a long-term publishing program you need things like business plans, policies, financial structures, and a way to measure your success over time to ensure the longevity of your program. So that's a lot to navigate. This is the part where I get to plug my cat slide and share some tips. So, as a library publisher, it can feel like we're trying to figure things out on our own with limited resources. Often it's only a few people at the library providing the service, or maybe even only one, trying to meet their community's needs. Also, the publishing landscape is filled with publishers who are bigger, who have been around longer, and who are able to provide more services that we're not yet prepared to provide. But together we are building the Library Publishing Coalition, a community of support with common values, to share our knowledge and resources. We believe that we can do more together than we can do alone. The Library Publishing Coalition is an independent, community-led membership organization of academic and research libraries and library consortia engaged in scholarly publishing. Its mission is to extend the impact and sustainability of library publishing and open scholarship by providing a professional forum for developing best practices and shared expertise. Our member institutions are located primarily in North America, but we do have a few international members, and we welcome anyone from the international community to join our coalition. We engage with the community through various professional development opportunities like the LPC listserv and the annual Library Publishing Forum, which is a place where we can get together once a year in person to share our knowledge, our successes, and our ideas for the future. Members can lead change in scholarly communications and publishing, helping set the strategic direction of library publishing through participation in committees, task forces and the board, in a growing and highly collaborative community of practice. Groups form and evolve in response to the needs of the community and the purpose of the group; some are expected to renew each year. Here's an example of a few of the committees that we have right now. The program and directory committees are ones that renew pretty much every year, but we're always adding to them when a new area of interest comes up. A fairly recent one is the Diversity and Inclusion Task Force, because we felt there was a need in the community, and together we created it to explore what we can be doing better in that area.
So new committees and task forces are forming all the time in response to what is needed in the community. The Library Publishing Coalition offers a variety of resources to support its members. The professional development guide provides training and professional development materials, courses, webinars, and readings for library publishers. The shared documentation portal is a space where members can share and find resources to support their programs, such as memoranda of understanding, copyright agreements, business plans, and production workflow guides, and it is being added to on an ongoing basis. Whereas most of these resources are freely available to members and non-members, the shared documentation portal is specific to members only. Other resources include the ethical framework for library publishing, which introduces library publishers to important ethical considerations in a variety of areas and provides concrete recommendations and resources for ethical scholarly publishing. For the library publishing curriculum, we partnered with PKP and other organizations; the curriculum includes four modules that address major competencies in library publishing: content, impact, policy and sustainability. We have a how-to guide that we created in partnership with the Directory of Open Access Journals. And we also have a library publishing bibliography, which brings together scholarship about the field of library publishing and is added to on an ongoing basis. Three major LPC initiatives at this time: first, for the library publishing curriculum, we see this as a very important resource that should be updated and added to on an ongoing basis, so we've appointed an editor-in-chief, Cheryl Ball from Wayne State University, and at this time Cheryl is gathering an editorial team that is going to support the curriculum moving forward. Second, the LPC partnered with Educopia and twelve other partner libraries to study the workflows of library publishing programs. We believe that if we can study these workflows, we can learn from each other and share them with others so that we can all improve our workflows and support those who are just starting out. Third, we are starting the second round of our fellowship program. The first round was incredibly successful: we were able to bring on two fellows who were not from member institutions so we could include their voices in our discussions. These fellowships are geared toward people from underrepresented groups and those who are just starting out in their library publishing careers. The LPC is always bringing on new projects and new initiatives, so these three are definitely not the only major initiatives that are going on. We're continuously partnering with other organizations who have shared interests with us and who have an idea of something they want to work on and bring it to us because they think we may be interested in it as well, and we're responding to the needs of the community. So we always have a lot of balls in the air when it comes to initiatives; these are just three examples of what we're currently working on. The LPC developed its first strategic plan in 2018. It's a five-year plan. We use this strategic plan to evaluate new opportunities that come our way, like some of the projects I was just describing, and to ensure they fit with our community's needs and our long-term goals.
This can also mean not only using it as a way to evaluate opportunities that come our way, but also to guide the development of these initiatives and partnerships — strengthening them by expanding their scope or incorporating our own values and priorities into a proposed project, which can result in even more meaningful outcomes for the community. We also don't want to just publish the content; we want to create a solid publishing landscape that is open, inclusive, and sustainable. So for us it's not just about publishing the content, it's about effecting real change in the way the publishing landscape works as a whole. So what is it that our community needs to move forward and have a greater impact? Well, we need more of you. Our partnerships are our strength, and we are stronger together: the more people we can bring into the conversation, the more we all benefit as a result. Each library publisher brings something to the table with its local investments, and when they invest locally and then bring it back and share it, we all benefit from that. What we need to work on is developing more best practices and refining our standards, making sure that our library publishing activities live up to what is expected of library publishing. We do this as we develop and share a common body of expertise and knowledge; that's what we're trying to do through a lot of our programming and our shared documentation, and a lot of our activities keep this in mind. We also want to explore what developing the library publishing ecosystem at scale means, and we want solutions for libraries at all resource levels. Some of our members, again, are just a single person at a library who has been asked to start publishing, and some of our library publishing partners have a lot of resources and a lot of activities, and we need to make sure that we don't leave one behind while serving the other. We need to keep in mind all of our members at the various stages of their library publishing programs. So I included in my slides references and further reading about library publishing in general, what we're up against, and what we've learned so far, and I will be happy to take your questions after. Thank you.
Library publishing has a unique set of values and practices that distinguish it from other types of publishing. The Library Publishing Coalition (LPC) extends the impact and sustainability of library publishing and open scholarship by providing a professional forum for developing best practices and shared expertise. The strength of community and partnerships is leveraged to support each other's publishing initiatives and achieve together what would be difficult or impossible to do alone.
10.5446/51362 (DOI)
So welcome to our lightning talks. We've got one, two, three, four, five lightning talks today. So what we'll do, how it will work — my name is Marissa McDonald and I will moderate the session for you guys. So what we're going to do is it's five minutes each for each of our presenters. I will ask that you each self-present or self-describe, so just say your name, where you're from, and then you can begin your talk. We won't start counting until you're done your introduction. And so you've got five minutes to share; I will give you a little bit of a one-minute warning when you've reached it. And then after each of our lightning talks have spoken, we will do a question and answer, so they'll remain on the stage and you can ask any questions you might have. And without further ado, I'm going to let our first lightning talk begin right away. Thank you. Hi, everyone. I'm Kaitlin Newson. I'm the PKP documentation interest group coordinator and a digital projects librarian with Scholars Portal in Toronto. And today I'm going to quickly talk to you about what's going on with PKP's documentation. So the documentation interest group, or the DIG, works together to create, update, and improve documentation for PKP software, and we do this primarily through documentation sprints. Documentation sprints are bi-weekly meetings that we do online for an hour and a half where we work together on documentation. So the real benefit of these sprints is that it gives us space to ask each other questions and get technical support from each other, as well as having somebody else who can review what you're working on. So we decide on our tasks beforehand and then we work together on these calls to complete what we've decided on. So at a sprint, we might work on a new piece of documentation or we might be working on moving documentation into the documentation hub. So this is a screenshot of the documentation hub, which many of you may be familiar with: this is the website where all of PKP's documentation lives. The code and the content for the site are on GitHub and are all written in Markdown. So some of the recent highlights of things that the documentation group has worked on have included the ORCID plugin guide, which we did in collaboration with ORCID; a guide to DOAJ inclusion for OJS journals, which we did working with DOAJ; a guide to designing your journal's branding, typography, and styling; a student journal toolkit; the developer documentation hub; and a guide to upgrading from OJS 2 to OJS 3. So some of the things that we have coming up include OJS 3.2 updates for Learning OJS 3, a guide to indexing in Google Scholar, a Learning OMP guide, as well as an instructor guide for course journals, which was worked on at the sprint for this conference. So I'm just going to go through a few ways that the community can contribute to PKP's documentation efforts. So first, you can report issues: similar to PKP's software, you can also report issues for documentation directly in GitHub or via email, which I'll share at the end. You can submit changes: as you're looking at PKP's documentation, if you notice an issue or something that could be improved, we have these "improve this page" links that you can click on to make direct changes to the documentation. You can also work on translating documentation — currently, for example, Learning OJS is in four languages and we'd love to expand these translations to include more languages.
You can create new documentation: if you have your own documentation that you've written for internal use that you think could be useful for the rest of the community, or something that you would really love to see, then we can work with you to create new documentation for the documentation hub. And of course you can join the documentation interest group — we do sprints every two weeks, as I mentioned, and people can join for a single sprint if there's a certain topic they're interested in, or can join us for every sprint that we do. And lastly, I'd just like to point out that we have contributing guidelines, which are linked from the documentation hub, that you can review if you're interested in learning more. Thanks for your attention.

Okay. Hi everyone. We are Jan and Denis from the University of Bern, Switzerland. Me, I'm working for the open science team there; I also currently manage the OJS platform, which is called BOP, Bern Open Publishing. Denis himself is a subject librarian for theology, Jewish and religious studies. Today we are going to present our planned workflow for a humanities journal that we are currently migrating to our institutional platform, BOP. Thank you. So in 2018 we received a message that the journal Judaica, one of the most important Jewish studies journals published in German without interruption, was about to be shut down because of organizational and financial problems. We then approached the editors — one of them is a professor at our institution: why not continue publishing on Bern Open Publishing, as an online-only and open access journal on OJS? So that was decided, and once the decision was made that the journal would continue on BOP, we started discussing the technical workflow. We had a couple of goals and requirements. What we wanted were different output formats, namely PDF, HTML and XML for the Lens viewer. PDF/A was high on our wish list, and high-quality typography was also essential for us — it's a humanities journal. And from the very beginning we knew that we wanted a single-source publishing workflow starting from a JATS XML file. The PDFs should be produced without manual typesetting. Finally, all this should be done with free and open source software if possible. Okay, we had two challenges that had to be met, or that should be mentioned here. First, just like many of you, we don't receive our submissions as XML files, obviously, but as Word files, sometimes Open Document Format, so we have to cover this somehow. The other thing is we need to deal with multilingual texts: we publish texts in German, French and English, and all of these texts contain material in Greek and Hebrew, obviously — it's a Jewish studies journal — and occasionally we have Arabic. Hebrew and Arabic run from right to left, which is an additional challenge. So these are the tools that we use. JATS XML is our production format, as Jan has mentioned already; the other tools are used to do the various conversion and presentation steps. Let's look at how everything works together. We receive a submission as a Word file. I will then convert this with Pandoc to a Markdown intermediate, temporary file. I do some manual cleanup, add metadata, and check if everything is there or if something is missing. I polish it and then do an additional conversion step with Pandoc to convert it to JATS XML; here I also pull in metadata that is saved in an additional file. Once everything is in there, I use the Lens viewer to make the web presentation.
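To make the shape of this conversion chain concrete, here is a minimal sketch of the Word-to-JATS steps just described, driven from Python rather than from the speakers' actual Makefile; the file names, the metadata file, and the exact pandoc options are illustrative assumptions, and the project-specific ConTeXt and XSLT steps are only indicated in comments.

```python
# Minimal sketch (not the Bern project's actual build): a Word submission is
# converted to Markdown for manual cleanup, then to JATS XML with pandoc.
# File names and the metadata file are illustrative assumptions.
import subprocess

def run(cmd):
    """Run a shell command and stop the pipeline if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Word submission -> Markdown intermediate (then cleaned up by hand).
run(["pandoc", "submission.docx", "-t", "markdown", "-o", "article.md"])

# 2. Cleaned Markdown + separate metadata file -> standalone JATS XML.
run(["pandoc", "article.md", "--metadata-file", "meta.yaml",
     "-s", "-t", "jats", "-o", "article.xml"])

# 3. From the same JATS file the speakers produce HTML via an XSLT stylesheet
#    and a PDF via the ConTeXt typesetter; both depend on project-specific
#    templates, so they are only indicated here as hypothetical calls.
# run(["xsltproc", "jats-to-html.xsl", "article.xml"])   # hypothetical stylesheet
# run(["context", "article-wrapper.tex"])                # hypothetical ConTeXt driver
```

In the speakers' setup these commands are chained together so that, as they explain next, a single make invocation regenerates every output format.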
We have an XSLT stylesheet to produce an HTML version. Then we use a text-based typesetting tool called ConTeXt to produce the PDF directly from this XML; we don't need additional tools here, which is very nice. Every step — all the commands we use — is performed on the command line, and we have a makefile that automates most of it. Once the workflow is set up, you don't really have to worry about each step: you just type make and it happens automatically, more or less. That's the output: basically the Lens viewer, you know that; the HTML; and finally, here, a very important article, as you can see. The nice thing is that you can see the rendering: the Hebrew is rendered correctly, and also the Greek. We have a block quote. It's not very complicated, and at the moment it's more or less a proof of concept, but it works. We just have to add more JATS elements to this rendering and then we'll see how far we can go with it. Thank you. We have a GitHub repo, and you can check it out, see what is currently possible, and try it out. Thank you.

Hello, everyone. My name is Simon Chevance. I come from Grenoble in France and I work as a web developer at the Centre Mersenne. Before I start, I just want to say: please be indulgent with my poor English. I want to introduce the Centre Mersenne, which is a quite recent project, since we launched it in 2018. The Centre Mersenne is a project which offers a set of publishing services and tools for diamond open access journals composed in LaTeX. By diamond open access, I mean no cost for readers or authors. So why do we use LaTeX for production? The Centre Mersenne is developed by Mathdoc, and as its name suggests, Mathdoc is a unit dedicated to documentation in mathematics. Among the services developed by Mathdoc was CEDRAM, which is a dissemination platform for electronic journals, only in mathematics. And maybe as you know, mathematicians use LaTeX to compose their articles, mainly for the capacity of LaTeX to render formulas correctly in PDF. LaTeX is pretty good for typesetting too. And we use PDF for publishing articles because we are sure of the rendering on different devices and in the print version. So in 2017, we won a call for projects to create a structure supporting the transition to open access. We created the Centre Mersenne, which took over CEDRAM and extended its range of services and disciplines. So the main objective of the Centre Mersenne is to help journals composed with LaTeX to move to diamond open access. So let's have another view of our services. The first one is support through the editorial process for each journal. We made the choice of the OJS software: we adapt OJS for each workflow and provide an instance for journals. Our specificity is to use OJS from submission to production only, because we have already developed our publishing platform, especially designed for a metadata workflow. As I said, articles are published only in PDF for now. So these core services are free of charge. The second part of our services is optional and fee-based. As you can see, typesetting is an optional service, but all of our journals ask for it because that ensures a good visual consistency. Due to our success, we don't have enough human resources to do all of the work, so some of these services will be outsourced, like typesetting. If you want more information about these services, please visit our website and contact us.
I'm not totally aware of this part because I'm mostly on the technical side. So currently we have more than 10 journals and six seminar series. You can see here an example of a LaTeX layout and the corresponding website. During the last two months, we helped a French institute to switch from a non-diamond open access platform to our platform. We opened the website last Monday and will try to accept new submissions at the beginning of December. I can't say more because there will soon be an official communication. So thank you for your attention. I want to thank the PKP team for all the work they do and the dynamic community around these tools. If you want more information, please visit our website or ask me during the break. Thank you.

Hello, everyone, I'm Charles Letaillieur and I'm here to present Opscidia. It's an initiative I co-founded about one year ago. So what is it? I won't bother you with all the colors of open access, the green, gold and so on; I think you all know that very well. I will just focus on the one we are doing at Opscidia. We can be classified as diamond open access, since our platform comes with no APCs and no subscription fees. But we are also not depending on public funding for the running costs, and I will explain how. Our vision is that open access can be seen as an opportunity and not as a cost. Scientific results have value, even economic value, that we can take advantage of outside the academic circles. So we are developing two parts: one is publishing services — diamond open access services, totally free of charge — and the other is text mining services that we sell to fund the publishing platform. I will give a detailed example of that afterwards, but I'll start with the publishing platform. What are we offering to editorial committees? We are offering a ready-to-use platform and ready-to-use services, so they can focus on their scientific work with no economic pressure. Just to say briefly, we provide hosting, support, Crossref DOI registration and so on. And we want to be very clear that there is no lock-in: editorial committees are totally free to come to us and to leave the platform. Since all the publications are CC BY licensed, the editorial committee remains the owner of the title of the journal. And we are based, obviously, on OJS, an open source software tool, so there is no lock-in, and that's really important if you compare with most commercial publishers. One thing which seems important, and which is coming out of the discussions we have had all these past days, is that publishing tools should be tailored to address each community's needs, so there is no one size that fits all, as many of you said. That's also something important for us and also something that OJS enables — and thanks for that. Here is how our beta platform looks; feel free to connect and see how it is. And now I will move to the big question: where does the money come from? I will give just one detailed example of what we are developing, and I hope you will realize two things. One is that it's a sustainable model — we are not such crazy guys. The other thing is that we, the whole open access community here, are working for the benefit of society. I don't think John Willinsky is here now, but he said yesterday that he wants to sue the US government to prove that open access is better for the benefit of the progress of humanity and of science. So I hope he can add the case I will present to this suing process.
This case comes from a New York Times piece by three Liberian healthcare professionals. They said that in 2014 the Ebola crisis could have been better prepared for if they had had access to published articles. And the publishers answered that it was not the case, because there were too many articles and the signal was too weak to be detected. So it was a very good case for us, because it illustrates both of our activities, which are open access and text mining services. And actually, as you can see, we proved that using the proper tools we can detect the weak signal — but not in the metadata, only in the full text. So we need open access and we need tools to get interesting results. I'll go very fast over the two other examples. One is a technology comparator, in the case of solar cell technologies, and one is a search engine to fight fake news — scientific fake news. So these are some examples of what we are developing as text mining services to fund open access. As I said at the beginning, we are just one year old, and what we achieved in the first year is to have half a dozen editorial projects with different levels of maturity. So we hope at the beginning of next year we will be able to announce two or three titles, and we also have two customers for the intelligence tools part. So here we are. Thank you very much for your attention. I'd be happy to answer questions, and also, if you're interested, we are hiring, so if this adventure sounds interesting to you, please feel free to come talk to me.

Hi everyone. I'm Marcel. I'm a researcher and OA officer at the Alexander von Humboldt Institute for Internet and Society in Berlin. I'm quite happy — we had a couple of conversations about what I'm going to say this morning already. So aside from this talk, we very much hope that you get in touch and we can collaborate on saving small scholar-led non-APC journals. Emerging and applied research fields like internet regulation, media informatics, computational linguistics or journalism particularly strive for a dynamic and diverse publishing ecosystem without high APCs and opaque publication processes. And whilst there is a multitude of transformation approaches and financing models for subscription-based journals, it seems that non-APC, scholar-led, OA-first journals are left alone in the deep waters of the publishing world. Thus the 18-month open access project at the Humboldt Institute for Internet and Society in Berlin, together with the Leibniz Information Centre for Economics in Hamburg and Kiel, takes the established open access and peer-reviewed journal Internet Policy Review as a test case and, hopefully, better practice. As a first step, we're currently improving Internet Policy Review's publication infrastructure with regard to a wide array of technical solutions, to meet current bibliometric tracking and accessibility demands. In a second step, we're developing sustainable OA financing modules applicable to journals in interdisciplinary contexts, because in those contexts not just one disciplinary community should pay the bill. And since we believe in the strength of this particular setting, connecting with our diverse stakeholders at the HIIG, we're thirdly establishing long-lasting and productive networks reaching into this multiplicity of academic fields. I just wanted to elaborate a bit further on those three parts.
For instance, on a technical level we co-develop various Drupal components to enhance our publishing infrastructure while automating the editorial workflow with our OJS 3 backend. And since, for us, open access includes inclusivity, we're implementing the WCAG guidelines on an HTML and PDF level to make content more accessible — that is, version 2.1 at AA compliance. And at the same time we're discussing the use of alternative metrics together with our partners at the UOC and the Internet Interdisciplinary Institute here in Barcelona, taking into account current work and research — like one project I want to mention, a reference implementation for open scientometric indicators at the Leibniz Information Centre for Science and Technology in Hanover. We will try to come up with solutions for contextualizing and visualizing altmetrics on our journal site. In terms of financing models — something a lot of you might be interested in — we're trying to overcome one-size-fits-all solutions and create a reusable budget toolbox for non-APC, scholar-led, small-science journals. Assuming, and hopefully proving, that the author-pays model cannot be the way to go, we are evaluating existing OA financing models and reflecting upon alternative, fair modes of covering the actual costs of OA publishing. Eventually, we hope, this will assist on a practical level in establishing and negotiating financing opportunities with partners and funding organizations. None of this can be done alone; it rather needs to be rooted in the research and OA community. Building upon our community support so far, we will consolidate existing and establish new academic networks, for example by bringing research societies back into the publishing responsibility and creating awareness among one of our key stakeholders, that is, librarians. First findings from the project show that there is a promising chance of uniting both societies and libraries in some sort of consortial model to support non-APC, scholar-led journals. This also includes setting up incentives for long-term publishing cooperations and fostering a productive interdependency with research societies. All of this is designed to scale well and be adaptable for journals beyond our journal or the context of internet research, so there's no end to it, and, as articulated in the Jussieu Call, we hope to support bibliodiversity not only in an abstract way but rather practically, encouraging smaller journals, new formats and publishing initiatives. All of this will be processed into transferable solutions to allow the research and OA community to make the best out of it; I will include a couple of links in the PDF. Those solutions include two instructive white papers on technical solutions and financing modules, due sometime in summer next year, and multiple workshops for publishing experts, librarians and journal editors to encourage a bottom-up perspective and re-evaluate our results, in fall next year in Berlin and Hamburg. For all of you interested in the details, there will be full technical documentation on GitHub as well; I don't have the link right now. The Humboldt Institute for Internet and Society thereby stands by its commitment to a free and sustainable dissemination of knowledge, particularly as a hub for internet research in Europe and as part of a network of internet research centers worldwide.
Thanks a lot for your attention, and please get in touch if you have any questions or want to be part of it.

My name is Adela and I work for a learned society in Scotland. We are a small organization and we mostly publish research about Scottish history and archaeology. At the moment we have three types of publication: we have an annual journal, we have a series of open access reports, and we have a list of books. The journal is quite established — it started in 1851, and there has been approximately one volume published per year since then. So as you can imagine, there is quite an archive of big volumes. Specifically, there are, I think, 6,500 individual papers, and more than 90% of them have been published in print, so all of that research is sort of locked in there. So a few years ago the society decided that we wanted to put all this research online, into open access, and to start publishing the journal electronically. We didn't have any of the expertise or infrastructure for that, so we approached Edinburgh University Library, who have been running OJS for student and departmental journals for a few years, and they were able to set up a journal hosting platform for us that is fully customized. Will it work? No? What am I doing wrong? This is what it looks like. So the University Library did a complete initial service for us: they customized the website so that it corresponds to the style of the society website, they migrated all the PDFs from an external repository where we had got them all scanned, they cleaned up the metadata for us, and they did all of that work for a pre-agreed fee. I'm not doing a good job with this. There we go. So there is another list of customizations that they have done for us. We launched this last year, so we have had it for one year, and it has been very well received and successful, and we want to continue working with the library and having the service. So what we have agreed is that we are going to pay them a subscription, and based on this subscription they are going to host and maintain the journal for us. So it is an unusual situation where the publisher is paying a subscription to the library rather than the other way around, but so far it has been working really well for us — so much so, actually, not only for us, but it has been working so well for the library that they have started offering the service to other organizations in Scotland. So there are now several universities and schools, and I think the Royal Botanic Gardens as well; they all have their own publications hosted on a platform that is provided by the Edinburgh University Library. As for our goals, we have the full archive of our journal online now, but we have still not started publishing it electronically or taken advantage of all the different digital workflow tools in OJS, because most of our production process up to now was to have the journal produced and typeset into a PDF externally, so we just continue to do that and upload the final PDFs there. But it took the journal 150 years to get to this stage; hopefully in the near future we will be able to start publishing it electronically as well. The other thing that the library did for us is that they set up an OMP platform for us. So like I mentioned before, we have a list of books, and these books usually get published in print only, and because we are a small organization we usually can't afford to reprint a book, so once the initial print run sells out we can't make the book accessible anymore.
So in order to manage our backlist better, we decided that we would like to put a PDF of each book online, into open access, after the print run has sold out, and so the library set up OMP for us — again, much the same as with OJS, they did all the customizations so that the websites look alike, and all the migration and the metadata cleaning. And again, it's something that we launched just recently: it just started in September this year, and it's been received extremely well. We have not promoted it much — we basically just used social media — and the main title from 2016 that we have released has been downloaded over 800 times now. And this is a lot for us, because it's a monograph on a very niche topic and we published it in a run of 320 copies, so to have had 800 people download it in the space of two months is really good for our outreach and for growing our readership. And this is all from me. Thank you very much for your attention.

All right, thank you. Does anyone have questions for any of our lightning talks? No questions? Hi, I have a question for you, Charles. You mentioned that Opscidia is developing a search engine to fight against fake news. Can you say more about the technology, and do you have a workflow or a prototype for how you plan to do that? Thanks. Yes. Yeah, the idea came from a journalist who had a case about a protein that was claimed to cure AIDS. It took him a long time to understand whether this opinion, I would say, was backed by the scientific literature, so he had to ask many experts. And what we are trying to do is to make a tool that is not searching for specific articles — because in this type of case you will easily find one or two articles that can go one way — so we are trying to have tools that analyze a big corpus and say whether there is a scientific consensus, and whether the scientific consensus backs the claim or not.

Hi, my question is for Marcel, in terms of the particular publishing model you are working with. I may have misunderstood, but is rapid publishing, or continuous publishing, or some kind of model where you are moving to publish academic work more quickly, part of the project that you are working on? And if so, what are some of the challenges that you have found with existing platforms and workflows? Thanks — again, I didn't really mention that. So Internet Policy Review as a journal itself works in a very classic format, that is, issue-based publishing, but as I said, or what I tried to mention, is that obviously when you think about the challenges or the chances that come with electronic publishing, you can rethink that. So we don't have an exact plan right now of what we are going to include, but there is, for instance, already a section we set up called open abstracts, which is a bit more fast-track publishing — you can check that out on our website, policyreview.info, that is the URL of the journal. So we are in close discussion with a lot of journals; a friend of mine is here, a colleague, we did that before, so we are trying to exchange and think about new ways to come up with new modes and ways of distributing academic knowledge. So I don't know if that answers your question, but it is at an experimental stage.
- What’s New with PKP Documentation (Kaitlin Newson, OCUL Scholars Portal) - JATS for Judaica: Designing an XML-first Publication Workflow for a Humanities Journal (Jan Stutzmann and Denis Maier, University Library Bern) - The Centre Mersenne: An Open Access Publishing Infrastructure for Scientific Publications Using LaTeX (Simon Chevance, Mathdoc) - Diamond Open Access Model Based on Text-Mining Tools and Services for Industry (Sylvain Massip and Charles Letaillieur, Opscidia) - Open Access in Small Sciences: "Internet Policy Review" (Marcel Wrzesinski, Alexander von Humboldt Institute for Internet and Society) - Open Access Publishing in Scotland: A Shared Service Approach (Adela Rauchova, Society of Antiquaries of Scotland)
10.5446/51368 (DOI)
It's a pleasure to be here. Let me see if I can operate the technology. No, it doesn't work. How about that? Yes. All the slides that you will see are openly licensed; some of the images on them, however, are used through fair dealing — fair use in the United States. To start off with a description of our background: I'm the editor of IRRODL, the International Review of Research in Open and Distributed Learning. I believe it was one of the first open access journals in North America, in 2000. It's now ranked fourth of all ed tech journals in our field, educational technology and distance education. It's fifth of all educational journals according to Google for our field, with a high h5-index. And it's the only open access journal in the top 20 education journals, and the highest ranked Canadian education journal. So we've been around for a long time. Last year we had 800 submissions and it broke our back — our copy editor said she'd quit — so we had to do something about it. We closed down for four months; we accepted 80 articles last year and we're going to limit it to 40 articles a year. We have to do that. And part of the problem is that throughout the world they've gone into this ranking system and they're telling faculty that they have to publish three articles a year in SCI-indexed journals. Ours is in Scopus, Web of Science and all of those — very highly ranked — and so we're getting a flood of people who are desperate to get published. And it's a big mistake, by the way: the best researchers I know in the field publish maybe one a year. So it's a strange phenomenon. Anyway, you all know the runaround with faculty giving the journals their work; I'm not going to get into that. But I'm going to talk about free, accessible content. We do know that the internet is the biggest commons and that the public domain is a priceless shared heritage. All knowledge is based on other knowledge; there's no such thing as pure originality. Copyright — what does it mean? Well, with the Statute of Queen Anne in the English common law tradition, copyright was instituted to encourage learning and promote the progress of science and the useful arts. It was not put in place to protect the rights of the author. Now, in the European tradition they do have that — protecting the rights of the author — but that is not the Anglo-Saxon position. Jaszi calls this idea that copyright is there to protect the rights of the author "paracopyright", or pseudo-copyright. The threat to all the big publishers is free content; they are deathly afraid of all this free content that's coming about. Luckily, we're getting a pushback against the big publishers. The University of California boycotts Elsevier over journal costs and open access, and now I believe there are about 20 or 30 libraries around the world who are boycotting it. In Canada, I think the first were the Université de Montréal and Laval University. So this is growing and it's a great trend; I encourage librarians to push it strongly. And why is that? Because there's been a big consolidation in the publishing industry, where a lot of small players have now become just one big one, Elsevier, with a few minor ones around it. And they want to leave the minor ones around, because otherwise they would be a monopoly; as it is now, they have an oligopoly, and they feel safer with that. Now, what's a fair profit margin? Elsevier's profit margin last year: 35%.
The profits of Elsevier — if you look and compare them with Apple at 23%, oil and gas at 7%, Wells Fargo at 27% — Elsevier is right up there at 35 to 40%. It's one of the most profitable industries, if not the most profitable, in the world. And why? Because we give them everything for free; they pay nothing. In fact, now they support gold open access, so now not only do we give our work to them for free, we actually pay them to support open access. So what's happening is they're on a gravy train with all of this funding that they're getting from us, and they're going to stick with it as long as possible. I mean, $10 billion in profit. The Bill and Melinda Gates Foundation has only $4 billion; the National Health and Medical Research Council, $590 million; you go down to Research Councils UK, £3 billion. They're way ahead; they're making a fortune. I think it was around two and a half million pounds that the president of Elsevier made last year. We're giving away our stuff and they're making a fortune off us. And if you look at the publishing industry, forget about Harry Potter, forget about Fifty Shades of Grey — that money is peanuts compared to what they're making off of us. They're making a fortune, and they're squeezing us: "I know you have more money than that." They try to figure out how much money your university has and they push up their prices to match it. They're having a bit of trouble, though: they got a Swedish court order and they've been blocked. So things are happening, and I like to see that somebody is doing something about it. It's all about ownership and control. Open access is the way we need to move, and I'm sure everyone here is convinced we need to move quickly and forcefully. Let them keep all their copyright regulations: if education moves to open access, we can just bypass them; we don't have to deal with them. As a sample: in Crossref, 28% of articles are open access; in Unpaywall, 45%; 47% of recently accessed articles are open access; and open access articles get 18% more citations than those in the paid journals. Sherpa Romeo is another one — I don't have time to get into the detail — but it's another place where we can see that open access is a benefit. Open access policy — which open access policy? You can see there are different ones: there's the green, the blue, the yellow. I only heard about diamond open access yesterday; I'll have to find out more about it. So what are the solutions? Well, there's the LibGen solution. Its main focus is the distribution of its own library infrastructure, including its source code, catalog, and terabyte-sized collection, to anyone who wants to start his or her own library. So we can go to LibGen and just start our own. Or we can go to Sci-Hub. Sci-Hub is removing all the barriers in the way of science. What does it do? It brings knowledge to all. They're fighting inequality in knowledge access across the world. They don't accept copyright, and it's fully open access. However, it is illegal in many, if not most, or even all, jurisdictions in the world. I would recommend anyone to look up Alexandra Elbakyan's name on YouTube and listen to her description of how she got into Sci-Hub. It's in Russian, but they have subtitles, and it's a very interesting description. Basically what she says is that she was in microbiology and she could not get access to the journals; her Ukrainian university could not afford to pay for access to them.
So what she did was contact colleagues in other universities, and they'd copy and send the articles to her. And then she thought, you know, this is ridiculous: these should be available to all researchers everywhere. So she created a program to go to different libraries and download the articles, and she got in through many different friends, who remain unidentified, and she's downloaded millions of articles. And what we have now is the first pirate website in the world to provide mass and public access to tens of millions of research papers — we're talking about more than 70 million research papers. That's quite a heavy load. They have 85% of all toll access articles, and in the Web of Science they have 97.8% of all the journals, and it's growing. They've been successfully sued by two major publishers. They can't find her, so she's not paying. I fear that someday they will find her, but she has contingencies. Interesting: 85% of all articles, and not only that, they're the most recent articles — they've got them all; the ones that are missing are some very old articles that haven't made it into the database. So it's pretty comprehensive. So when I'm looking for an article, I don't go to our library: there I have to sign into my university, sign into the library, go find which directory it's in, and then I find my article. In Sci-Hub, I just type the name of the author or a few words from the title, and it comes up instantaneously. I consulted with lawyers in Canada, and they tell me that if your university has a license to access a particular article and you access it, there's no problem, whether you get it from Sci-Hub or anywhere else. And their reaction was: why would they want to stop you from doing that? You, or your institution, paid for access to the article. And for me, I have never used it to pirate, because for all of the articles I'm looking for in my field, we have the journals in our library at our university. Social sciences: 48,000 journals, 4.9 million of 5.9 million articles, 82%; psychology, 82.9%. This is a huge boon for people in developing countries. I hold the UNESCO/International Council for Open and Distance Education Chair in Open Educational Resources. My job and my main responsibility is to spread awareness of and help people implement open educational resources, particularly in developing countries, as part of SDG 4, the Sustainable Development Goal of education for all. And in developing countries, they need Sci-Hub; they rely on it very heavily. Everywhere I go — and I've been to more than 25 developing countries — they do. Sci-Hub has saved thousands, if not tens of thousands, of lives through doctors in developing countries who've been able to access articles that they couldn't otherwise have accessed. And you can see by the red spots where people are in the world who are accessing the database. So they call it theft. But to me, this is theft: removing millions of texts from the public domain is taking something away from the public. They've extended copyright continuously: from the beginning it was 28 years — 14 years plus 14 years — then 50 years, and now 70 years. In Canada we're still at 50 years, but there's a bill coming up to extend it to 70 years in order to agree with the United States and Europe, so we will be in that. Taking those texts away from people and putting a license on them is theft.
So when they talk to us about the ethics of piracy and theft, we can talk back to them and tell them what real theft is. And I'll close off with this statement. If enough food was available for everyone and it was free, is it ethical to deprive the hungry? If enough knowledge is available for everyone, is it ethical to deprive those who thirst for knowledge? What they're doing is deliberately closing off knowledge from everyone, and there is no way that this can be considered ethical. So I'll finish with that and open it up for questions. I'm sure not everyone agrees, but... Thanks very much. Thanks very much. Thanks. Thanks. I'm going to close it out with a celebratory speech.
The free sharing of scholarly research is essential for supporting UNESCO Sustainable Development Goal 4: education for all. Open access (OA) can be effective in reducing the knowledge divide that separates and partitions societies. Researchers and educators worldwide continue to face significant challenges related to providing increased access to high quality research while containing or reducing costs. New developments in information technology, especially mobile computers such as phones, tablets, and other devices, highlight the shortcomings and challenges for the traditional education community. Such technologies have the potential to increase access and flexibility to research by rendering it ubiquitous.
10.5446/51373 (DOI)
Okay, hi, welcome to my talk. My name is Eluiza Guerrero, or I go as El Guerrero, and I changed the name of my talk because it was very self-referential and kind of inception in a way. But what I want to talk about is what we talk about when we talk about our language. And what I mean by that is what we're actually saying when we say things, and what the implications are of the language we use day to day. So what are we talking about? Our language has a long history of oppression, and this isn't just in publishing or in tech, but in daily life. And it's a problem that impacts the people we work with, and even our friends, our families, our colleagues. So this isn't strictly just about technology or all these terminologies used in science and academic publishing. So what can we do about it? I'm going to frame it within tech and scholarly publishing just to scope it a bit more. And let's also be mindful of how we use language outside of work. So who am I and why this talk? I'm a software developer at PKP, and I've always questioned the language that we've inherited and how we've internalized all this racism, ableism, sexism, all sorts of isms in our daily lives. And I have a lot to learn, and I want to talk about this with you so we can begin to make things better, not just for ourselves, but for the people who are affected by the way we use our language, the way we've been so used to using these words that are actually very harmful. So what's the problem? For example, I come from the Philippines, and in the Philippines, ironically, we were colonized by Spain for 300 years. So in the Philippines during that time, they used Spanish to determine societal status. If you were a colonizer, you would speak Spanish. If you were the colonized, you would speak our own indigenous languages. And for those Filipinos who knew how to speak Spanish, it was very much an upper-tier social class for them. And we were also required to change our names to Spanish, hence my Spanish name. So there's a long history of this colonization and oppression using language, and the same applies to the terminology that we've been handed down, so much so that it's become normal for us to use these terms. In Canada, Indigenous peoples' culture and language are in danger of being taken away from them. So it's not just something that's been handed down, but something that constantly happens all around the world today. So it's not only historical. We use language as a tool, and knowingly or unknowingly, we negatively impact marginalized people. And we shouldn't use the excuse that just because we've always said X this way, we should just keep it. No, I don't think that we should follow this historical oppressive language, and I think that we should question it and make things better for people, because it definitely will impact people in ways that we may not know, because we all have different lived experiences. So, I want to show some real world examples where we can see this in tech and scholarly publishing. Here's a screenshot of a site that's asking me to, excuse me, whitelist their site or disable my ad blocking software. The terms blacklist and whitelist have always been used in technology, but they have very racist connotations, because we're often using black to denote something as bad and white as something that is good. And anti-black racism is still very pervasive, and these are subtle terms that perpetuate the idea that black is always bad.
Even black sheep, or any other term that uses black to say something is bad. And this cycles back into our daily lives. Again, we think that they are very harmless words, but they actually carry a lot of weight, a lot of historical oppressive weight. And again, it's not just in the past few centuries, it's still happening today. Here's another term, master/slave, definitely problematic, but also definitely being used in so many contexts. Most of the time it just means a primary device controlling secondary devices. And why can't we just use that instead? Like, no matter how appropriate we may think these terminologies are for whatever we're trying to do, it doesn't make it okay to use them. And modern day slavery is still happening. All these terminologies that we've been using, knowingly or unknowingly, are harmful because, even with the long history, it's still happening. Here's an academic essay, "The importance of stupidity in scientific research." Was there really a need to say stupidity? Obviously, the author was just trying to grab your attention, but what he was actually referring to was the person's ignorance. That doesn't mean it's okay to make fun of people's intelligence, which is ableist, just as a side note. And when we use these words, these eye-catching, easy terms, this low-hanging fruit, we should consider our intent rather than the probable negative impact that our words may have. So again, he didn't have to use stupidity in this sense, but it was low-hanging fruit for him, it was easy. And even in OJS, not to put OJS on the spot, but this sort of language is used in academic and scholarly publishing: double-blind and blind reviews. And this is problematic because it's ableist in nature, because we kind of equate blindness with ignorance. And that's something that we should question and definitely change. I wonder if anyone else has any other examples that they'd like to share? If you have any? Anybody? Nope? Okay, I'll continue. So why is it harmful? The Internet is global. It's a global communal context. And we're living in a globalized context where people of different lived experiences and histories are joining the conversation. And we should make our spaces safer and more inclusive regardless of disability, race, religion, gender identity, gender expression. There's nothing wrong with being politically correct. There's nothing wrong with trying to make our language and our spaces more inclusive and diverse, because these people will be joining our teams. They'll be the doctors we go to see, or, you know, they will be everywhere. And we have to consider the impact that our language has on them. We're always talking about diversity and inclusion in our workplaces, and if we want to include these people, we have to make sure our language affects them positively and doesn't alienate them or disempower them. Even from a design perspective, we should be catering to more than the white, able-bodied person, and think about how this leaves other people out in the cold. So yes, we talk about inclusivity. There's a lot of talk about, oh, let's make our company more inclusive, more diverse. And oftentimes, it's just a checkbox to be checked off. But if you want to walk the talk, we can start with our language.
And it's a long road, but language is definitely a huge part of making sure that our workspaces, our friends, our colleagues, our family are all included in our language, in our conversations. So what can we do? We all have different lived experiences. I have my own experiences with racism, microaggressions and stuff like that. But we should also think about what power we hold in being able to say certain terms or words. And maybe that's something that we can start with ourselves, thinking about: how come I can say this and it doesn't affect me, but it may affect other people? And it's good to reflect, to take a hard look at our privilege and realize that the choice of our words matters. So again, we focus on our intent instead of eye-catching or easy, low-hanging-fruit words that we use just because it's easy, just because we've always done it this way, just because it sounds like it has more impact. We should focus more on our intent and be more sincere and more inclusive with our language. So I've got a few websites I'd like you to check out that people have made. One of them is called self-defined.app, which is a modern dictionary project by Tatiana Mac, and I will click on it. So this is the dictionary that she has developed, and it's something that anybody can submit contributions to. It basically outlines all the problematic terminology that's been used everywhere, not just in technology, not just in design, not just in academia; anything that you may not have thought was problematic could be problematic. This is a good way to go through and check ourselves before we do or say anything that could be damaging to other people. So it's a really good reference for language and redefining our vocabularies. And the other link is a Google document, I am not entirely sure who started this document, but it's a document of alternatives and substitutes for appropriated and problematic language. Black English, or African American Vernacular English, being appropriated is a huge problem. They have a lot of alternatives and the contexts in which they are being used, and they have examples to show how we can reframe the way we say things or what we mean, so that it's not oppressing or undermining other people. So this is quite a long list. They have ableist language as well, which is really important, because as able-bodied people we often take for granted the language that we use, and it alienates people who are disabled and may not enjoy the same privileges as we do, especially with language, where we constantly use stupid or, again in OJS, double-blind and blind reviews. And yeah, so it's good to reevaluate our terms. So going back to... So what's been done? People are already, or have always been, talking about this, and I'm definitely not the first, but I think it's important to share this with all of you so that we can continue this conversation with our teams, our people, our friends, family, colleagues, and become more aware of the language that we use and how it impacts people. An example on Twitter: Andre Staltz recommended changing blacklist to denylist, which makes more sense. He gives all these alternatives to terms that we've always used, things that make more sense, honestly. We don't need to use blacklist, we can say denylist; whitelist becomes allowlist; and master and slave become primary and replica, which makes sense. Ruby on Rails, which is a programming framework, is also replacing the use of whitelist and blacklist.
Python, a popular programming language, is removing the master and slave terminology, which is great. Here's a tweet from someone in scholarly publishing: change double-blind or blind reviews to mutually anonymous peer review. It's a mouthful, but it's way better than what we currently have, and it makes more sense. Also, it's more user-friendly for people who are not familiar with these scholarly terms. In OJS, we've opened an issue to remove blind from review types. So we're taking the steps to get there. We're trying to be more friendly, not just to people who may be affected by it, but to people who may not be familiar with what a double-blind or a blind review means. Cool people talking about this: again, Tatiana Mac, who made self-defined.app. I want to talk about a group in Toronto that I'm a part of called Intersectio, and we're a bunch of people who are Black, Indigenous, and people of color trying to uplift marginalized voices in the Toronto tech space, because there are a lot of problematic issues with that tech space; actually, not just in Toronto, but everywhere. But I'm from Toronto, so that's why I wanted to give them a shout-out. And there are a lot more smart people with lived experience who are BIPOC, who are disabled, queer, trans, talking about this. And some final notes. We've walked through all these examples in technology and scholarly language, and it's definitely relevant for us working as developers, designers, or anyone who works in tech and scholarly publishing. But it's also relevant to our daily lives, and we should be cognizant of it in our teams, whether we are in tech and design or scholarly publishing. You may know someone who has an invisible disability, and it might be harming them. And it really doesn't hurt us to use more inclusive language. You never know: there might be people on your teams, and we just don't know about it. So there's nothing wrong with being politically correct. And we have the alternative vocabulary. We just have to be mindful and do a bit of reprogramming, and that doesn't hurt either. And I wanted to close with what Tatiana Mac says: we define our words, but they don't define us. We've been passed this language from generation to generation, but with the Internet, with globalization, we can redefine our words. We can redefine our vocabulary to make sure that we can include everyone in the conversation and learn how to better relate with one another. And these are my sources. I'll be sending the slides to Marissa, I think, so we can go through the sources. And that's it. Thank you. I want to thank Lorraine Chun from Toronto, Julian Leigh Ann from the Bay Area, and Sophie Ush, who is a colleague of mine at PKP, for helping me with these slides, because this is my first talk and I was very nervous. But thank you so much. And yeah, let's do better. Yeah. Thank you.
The language used in technology, including publishing whether scientific or academic, is rooted in racism and ableism, with terms such as "blacklist" versus "whitelist," "slave" versus "master," or "blind reviews" and "double blind" studies in research papers. These terminologies have become ingrained in our vocabularies, but how can we confront the systemic biases put in place by such oppressive language? More importantly, who does this harm, and how? I want to confront such language and challenge ourselves to become more aware of this type of subtle racism and ableism, and take action to replace them with alternatives that will not be harmful to marginalized audiences.
10.5446/51374 (DOI)
Good morning, afternoon. My name is Alexa, I'm from Costa Rica, from the Instituto Tecnológico de Costa Rica. I'm going to talk about accessibility in open access journals. This is a little piece of research that we did on some open access journals: we wanted to test the web accessibility of those journals. First of all, I want to give a little introduction to what accessibility is and the importance of that topic. Disability affects 15% of the world population. That's a lot of people who have some kind of disability. Of those, 645 million suffer visual or hearing impairment. This directly affects the use of the web, any kind of content or web page. Disability is about the interaction with the environment. We often have a wrong idea of disability: we say the person has a disability, but it's the environment that has the problem. The environment has barriers that prevent people from interacting with it. For that reason, we have to make sure that our environment, whether that is a building or a website, is well designed for all kinds of people. The World Health Organization proposes making accessibility possible for everybody: for all conventional systems and services, the idea is to make them accessible for people with or without any disability. For that reason, some organizations such as the World Wide Web Consortium create recommendations and standards for the web to guarantee the accessibility of any kind of site. For example, if you have an e-commerce site, you would like everyone to be able to access it. But this idea applies beyond e-commerce, to other sites, like academic sites. That consortium maintains the standards, versions 1.0 and 2.0. Also, last year they released a new version, 2.1, which has some different features. You can go through the site and check those guidelines. They are really simple, but some simple things can really change our sites. Accessibility should be guaranteed on every website, but that's not the real situation. We can just go through the Internet and find a lot of sites that we can access but that are much harder for someone who has a disability. This situation becomes even more critical for academic websites, for example journal repositories, which have to be accessible for everyone, but that is not the real situation. The WCAG 2.0 guidelines have three different levels. Level A covers the most basic web accessibility features; they are really basic, but they make a big difference. Level AA covers the biggest and most common barriers that you can find on a site. And level AAA covers the highest or most complicated features to solve. The objective of the test that we made was to evaluate the web accessibility of pages in open access journals, because developers, managers, and editors have to be aware of that topic. Sometimes we think that accessibility is just a thing for developers or designers, but it's not: editors and staff have to be aware of it too, because some of the information that you put on the site is added by the editor or the staff or your assistant, not the developer, and you can follow some guidelines to make that information more accessible. The methodology was to find some journals to test. The idea was to test open access journals because I work in open access journals; I'm an editor of open access journals.
And I am part of the open access movement. We found 134 journals available on DOAJ that were registered there, because the idea was to use official sites and official journals. We made an automatic test with this tool, SortSite. There are a lot of tools that you can use to run an accessibility test, but we used this one because it was really good. This tool checks accessibility against the guidelines that I mentioned before, 1.0 and 2.0. It also provides other information like broken links, page structure, and SEO information. You can use it; it's really good. What about the journals that we tested? We have representation from all continents, with more from Europe. I forgot to mention that the journals were from the information and technology category. That's because we needed a manageable sample, we couldn't test a thousand-something journals, so we selected that specific topic. We have different representation here. We also tried to identify whether they use OJS or another platform to support the journals, and 58% use the OJS system. That was really good, but not all of them use version 3, which for me is better because it now has accessibility features. Others use their own developments, and others use platforms like Elsevier's or similar sites. What about the results? The first finding was the first barrier, because it was really interesting trying to access those journals: 21 of those journals we couldn't access at all. Just trying to access the URL was the first test that we made. And we had 21 with a blank page, partial access, or no access at all, like a page not found or an internal server error, which is really critical, and other errors. It was very interesting because even for me, who doesn't have any disability, I couldn't access that information. And this is the first barrier that we found with those journals. For that reason we tested the other 113 journals where we could access the content and the page and could run the test. 100% of the journals that we tested have accessibility issues, at a minor or major level, but all of them have one or more issues. We found 115 different accessibility issues, and the most common ones were present in 90% of the journals; around 10 different issues were in the majority of the journals. We have 66% at level A, which I think is okay because they have a lot of problems but at the minor level, 9% just at level AA, and 25% at level AAA, which are the biggest problems, sometimes issues that only developers can solve because they involve the structure of the page or something like that. Okay, here I have some of the level A issues; there are a lot, but I would like to focus on the most frequent ones. The first one has the highest frequency: 92% of the journals have this issue. It's really simple and it's easy to solve: avoid specifying a new window as the target of a link with target="_blank". It means that when you have, for example, a vision problem, or if you are not sighted, you click on a link and you go to another window, but the site doesn't tell that person anything. For me, I can see very well, I don't have problems, because I see that I'm going to another window, so there is no problem. But when you can't see and you're using a screen reader, and the site doesn't tell you that you are going to another window, it can be confusing for that person to get a good reading of the site.
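(A note added for readers of this transcript: the "first barrier" check described above, simply seeing whether each journal homepage responds at all, is easy to reproduce. The sketch below is not the study's actual tooling and the URLs are placeholders; it only illustrates the idea in Python.)

```python
# Minimal sketch (not the study's tooling): check whether journal homepages
# respond at all, mirroring the "first barrier" test described above.
import requests

journal_urls = [
    "https://example-journal-one.org",  # placeholder; the study drew its list from DOAJ
    "https://example-journal-two.org",
]

for url in journal_urls:
    try:
        response = requests.get(url, timeout=15)
        if response.status_code >= 400:
            print(f"NO ACCESS   {url} (HTTP {response.status_code})")
        elif not response.text.strip():
            print(f"BLANK PAGE  {url}")
        else:
            print(f"OK          {url}")
    except requests.RequestException as error:
        print(f"NO ACCESS   {url} ({error})")
```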
This is the first one because it's not common to indicate this, but it's really a big problem. Also, when you have a link or an image and you don't put an alt text; that is really, really common. Developers and designers are now trying to change their ways and write the descriptions, but we still have this problem. When you insert an image or when you put a link, you have to provide a description. You have to use the attribute named alt text, and you have to describe the image or describe where the link is going to go. If you are using a screen reader, the screen reader reads that description; but if you don't have it, and you have an image with important information, the screen reader can't say anything to the user, and it's a really common problem. Another one is to ensure there is sufficient contrast between the foreground and the background colors on the page. This is also important because sometimes the editor or the staff choose something because it looks nice, say the text in black and the background in gray, but if you have a vision problem, you couldn't read that information. It's important that you have sufficient contrast. For that reason you have to have a good design, but also run an accessibility test, so that it looks good and is also functional. That is the idea. Another one is removing the underline from links, making it hard for color-blind users to see them. Also, I'm going to leave the presentation available if you would like to read the other issues. Then we have level AA. We have some of those issues; you can see they have a lower frequency, which is okay, but those issues are harder to solve. The level A ones are really easy to solve: the editor, the staff, your assistant could solve those problems. When you see the issue, when you use the program, SortSite or whatever you use, you can see what the recommendation is to solve that problem. The idea is that those issues can be solved easily. Then there are the issues that we have for level AAA. They also have a low frequency, which is really good, but those are the highest level; we need more knowledge to try to solve them. They are problems with the page structure, or problems that only developers can solve. I have some examples here. Some conclusions from this little test: the web accessibility of open journals involves the different roles that I mentioned before, such as developers, managers and staff. We have to keep in mind that it's not just a job for developers. The content of open journals should be accessible to anyone, whether or not they have a disability. If you have a site that is good for someone with a disability, it is also good for everybody. You have to consider people in any country, of any age, using any technology. This is important because now we would like to read an article on a mobile, not just on a desktop, and also with accessibility features like screen readers, zoom and inverted colors. It is also important to test whether our site is well done. We are editors and we don't know how the system was made, but we can check it using accessibility tools. Developers should reduce accessibility issues if you are a developer of a platform, but managers and editors should verify the basics. I say that because we know that OJS 3 has accessibility features. For that reason we might say: I'm using OJS 3, so everything is done? No. We have to check whether we can do other things to guarantee the accessibility of our journal.
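(Another note added for readers: two of the level A issues discussed above, links that open a new window without warning and images without alt text, can be approximated with a few lines of Python and BeautifulSoup. This is not SortSite and covers only those two checks; the URL is a placeholder.)

```python
# Rough approximation of two level A checks discussed above: links opening a
# new window without any hint, and images missing a useful alt text.
import requests
from bs4 import BeautifulSoup

def report_basic_issues(url: str) -> None:
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")

    for link in soup.find_all("a", target="_blank"):
        # A fuller check would also accept an aria-label or visible text
        # warning screen reader users that a new window will open.
        print(f'Link opens a new window without warning: {link.get("href")}')

    for image in soup.find_all("img"):
        alt = image.get("alt")
        if alt is None or not alt.strip():
            print(f'Image without a useful alt text: {image.get("src")}')

report_basic_issues("https://example-journal.org")  # placeholder URL
```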
And some recommendations from the test that we made: of course, use the recent update of OJS 3, because it includes accessibility features. This is to guarantee you don't have development-level problems, the hardest ones. Use a web design that has contrast between foreground and background colors; it is not just about looking nice. Ensure that all images included in the website have an alt text tag. It is really important, because sometimes on the home page we put, for example, a workflow with the publication process, and we use an image. But if you can't read it, if you can't see the page well, or if you are blind, for example, you will use a screen reader, and the screen reader has to describe that image. And also, the description that you write for the image has to be a real description, because sometimes people write a description, but when someone reads the description you wrote, it doesn't say anything to the user. The idea is to write a description that says something, that actually describes the image you put on the site, because I have seen some descriptions that are really general and don't say anything to the user. Try not to include Flash elements like animations and things that flash past and don't add anything important to your site. Try not to use them, or if you use them, don't use them to add important information. If the information is really important, try to put it in text. And finally, use a sans serif font, which is better for reading, at a 12-point size minimum. These are the general recommendations. Have a responsive design, and validate on different platforms. It is important that you ensure the site works on different platforms. Test the accessibility of your journal. If you are in charge of a journal, as an editor or director or something, it is important that you test your site. It is really easy: you can just Google "test accessibility". There are a lot of tools, or the tool that we used in this study, and you can see which problems your site has and how to solve them. And for future work, we are going to test the other pages of the sites, because for now we just tested the home page, not the article pages or the current issue. I would also prefer to test sites using the same platform. And to test the articles themselves: this is another world, because I am talking about the web page; when we talk about documents, that is another world. And when you design, develop and maintain the website with accessibility in mind, everyone wins. That is the final message. And thanks. OK.
The openness of journals means that anybody could access the content, which means having to consider people in any country, of any age, using any technology and accessibility features. This presentation shows an evaluation of web accessibility of open access journal pages available on DOAJ (Directory of Open Access Journals) in the Technology subject. For the evaluation we used an automatic web accessibility test, SortSite; this tool checks the accessibility of each page according to WCAG 1.0 and WCAG 2.0 guidelines. 100% of open access journals present web accessibility issues of different levels. The most critical issues were related to problems in the page's structure such as missing tags. Other minor issues like unresponsive design and insufficient contrast between foreground and background colors could cause reading problems.
10.5446/51375 (DOI)
Hi, everyone. I hope you can hear me fine. And welcome, welcome to the session. Take your seats. And the idea is that by the end, we should all be able to do a tower like this. So I hope you are ready, and with a lot of energy. So I'm Gabriela Mejias. I'm the engagement lead for ORCID. I work with organizations, increasing the adoption and awareness of ORCID in Europe. And Dulip, my colleague from the Heidelberg University Library, is a software developer and has contributed to the plugin. So before I continue, I would like to know: how many of you know what ORCID is? OK, so most of you. Great. So I don't need to give you much of an introduction then. But the goal of ORCID is to solve the name ambiguity issue; that's actually one of the goals, and I'll continue later on. ORCID is a not-for-profit organization, and it's very important for me to tell you this. We have open governance, and we are registered as a nonprofit organization in the US. That means our bylaws prohibit the organization from being sold. I just want to make sure that you know that. And we provide open tools: the ORCID iD that you, or most of you, know, which identifies researchers; the APIs, which are what make the plugin, the OJS ORCID plugin, possible; and a registry that organizations can use to synchronize research data. Today we've talked a lot and heard a lot about open science and also about inclusion, and for us these are also core values. We express this a little bit in this diagram, and for us openness also means interoperability. So as you can see here, the idea is that researchers can use and connect their ORCID iD across systems, like when submitting an article, when applying for a grant, or connecting the iD in their repository or in the CRIS system of their institution. And to be able to achieve this, we work with all organizations in the research landscape. Actually, the plugin that we're going to present today is a joint effort from PKP and the Heidelberg University Library, which is part of our German consortium, and we collaborate as well. And yes, this fits in the publisher sector. I also wanted to tell you something about our community. We have more than 7 million iDs registered. And as I said before, we are a nonprofit organization; we finance ourselves through institutional membership fees. Currently we have more than 1,100 members in more than 40 countries. These members are building integrations with ORCID, meaning implementing ORCID iDs in their workflows and integrating our APIs into their systems to make this interoperability possible, so the data flows from one system to another, across organizations, across countries, to save researchers' time and to make data reusable as well. I mentioned our APIs before, so I'm going to go into more detail now. We are an open organization, and therefore we have a public API that's open for everyone to use. It's free to use. With the public API, organizations can obtain an iD and read data that is marked as public on ORCID records. Does everybody know what API means? Yes, okay, okay. It means application programming interface, so it's basically a tool that allows you to exchange data between your systems and the ORCID registry. And the public API is free to use; everybody can use it for free, and it allows you to obtain an iD and read data on ORCID records that's public.
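(A note added for readers of this transcript: reading public data through the public API needs no membership or credentials. The sketch below assumes the v3.0 public endpoint layout and uses ORCID's well-known example iD; treat the exact field paths as illustrative rather than authoritative.)

```python
# Minimal sketch: read the public data of one ORCID record via the public API.
# Assumes the v3.0 endpoint layout; no credentials or membership required.
import requests

ORCID_ID = "0000-0002-1825-0097"  # ORCID's documented example iD

response = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/record",
    headers={"Accept": "application/json"},
    timeout=15,
)
response.raise_for_status()
record = response.json()

# Only data the researcher marked as "public" is returned here; the field
# paths below follow the v3.0 JSON layout as I understand it.
name = record["person"]["name"]
print(name["given-names"]["value"], name["family-name"]["value"])
```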
Our member organizations have the benefit of using the member API. This allows the same functionalities as the public API, plus the functionality to add data to ORCID records or update the data that you add, and also to read trusted-parties information. On the ORCID record there are three levels of privacy, controlled by each user or researcher. Public means the data is available to everyone through the registry and through the API. Trusted parties is the second level; that means that the researcher needs to authorize an organization to read this information. And the last level is private; that means only the researcher or the user can access this information. Most of the data in the ORCID registry is public, but most of the email addresses are marked as private. And again, we're an open organization. What we do every year for Open Access Week is release our public data file, which is an XML file that contains all the data that's public on ORCID records, and this is published under a CC0 license. So the member API allows you to read public and trusted-parties data, and add or update information. And we have another API for consortium members or premium members that also allows you to synchronize information by sending a notification every time something changes on an ORCID record. You can register a callback URL, you will receive a ping on that URL, and then you'll know something changed on that record and you can update or synchronize that data in your systems. And some statistics, for you to know the volume of data flowing through our registry. Currently, we have more than 45 million works. We call them works and not publications because we consider many types of contributions: not only publications, but also preprints, software, data sets, patents, conference posters, and many more. And you can see that there are more than 17 million unique DOIs, and most of this data is being added to our registry by member organizations. And actually, the plugin that we will see today allows more of this data to get to the ORCID records. And as I'm going to show you now, this is an example of this interoperability between systems using the APIs. This is an example of an ORCID record, and this is an example of a publication being added to an ORCID record by the plugin. This is an example from a researcher who has authored an article in a journal published by the Heidelberg University Library. And as you can see here, the source of the information is not the author herself or himself, but instead the journal. So for us, this is also a way of adding trust and transparency to the research process. And as you can see there as well, there's a DOI on the article that's been pushed by the plugin, and we integrate many different types of persistent identifiers, since this also contributes to more interoperability and visibility of the research data. And yes, we've put together some resources: some links about the plugin, some video tutorials, and the plugin guide that was developed together with PKP in a virtual sprint that Katelyn mentioned today. And now I'm going to hand over to Dulip, who will explain more about the technical details of the plugin. Thank you. Okay. Is this mic enough, or should I go to the computer later? Then I would need that also. Hi. This is working like this. Okay. As Gabriela mentioned, I work for the University Library of Heidelberg, and I will go through now how we did this OJS plugin.
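(A further note for readers: adding a work through the member API is roughly what the plugin does when it pushes article metadata to a record. The sketch below assumes the v3.0 member endpoint and work schema; the iD, token, and DOI are placeholders, and a real integration would obtain the token through the authorization flow shown later in the talk.)

```python
# Sketch of adding a work to an ORCID record through the member API (v3.0
# assumed). Requires a member credential and an access token the researcher
# has granted with an update scope; placeholders are used throughout.
import requests

ORCID_ID = "0000-0002-1825-0097"           # placeholder researcher iD
ACCESS_TOKEN = "researcher-granted-token"  # placeholder, not a real token

work = {
    "title": {"title": {"value": "An Example Article"}},
    "type": "journal-article",
    "external-ids": {
        "external-id": [
            {
                "external-id-type": "doi",
                "external-id-value": "10.1234/example.doi",  # placeholder DOI
                "external-id-relationship": "self",
            }
        ]
    },
}

response = requests.post(
    f"https://api.orcid.org/v3.0/{ORCID_ID}/work",
    json=work,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json",
    },
    timeout=15,
)
print(response.status_code)  # 201 would mean the work was added to the record
```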
First, I think everybody is clear about OJS by now, so I will not go into that detail. I will show you the plugin, and as we are always developing in a community, I also want to show you who has contributed to this plugin from the beginning, and then after that we will go to the questions. So this is OJS, and then the development history, the specification, and I also have some demos, videos of how the integration is done. This is an image of the settings of the ORCID plugin. The ORCID plugin is a generic plugin. Generic means it is a plugin which interacts a lot directly with the OJS system, and currently we support ORCID API version 2.1. The plugin can be configured to use both the member API and the public API. Also, as Gabriela mentioned, there is the sandbox API, and you can configure that as well; and we also allow you to log all the communication with the API, or only the errors, which you can configure through the plugin. So, some historical facts. The first development of this OJS plugin was done by the University Library of Pittsburgh; then later it was taken into the Public Knowledge Project and became a PKP official plugin; and then the Heidelberg University Library, what we did was practically adding the support for the member API, and a lot of community members helped. As we had here at this conference, we also had some software sprints, and a lot of people contributed their ideas. Actually, with this kind of plugin, the challenge is not the technology; it is the people contributing and getting the ideas. We have documented how the plugin development went on the GitHub issue; if you are interested, you can look at it later, and we also invite you to contribute or give your ideas, or if you find errors you can write there. So this is the formal specification of what we are supporting. You can choose any of the four possibilities. So now I have some videos; I have to go there and hopefully... So it is working. This is how you would set up the plugin in OJS. First you go to the settings, your website settings, then you go to the plugins and you can find this plugin in the plugin gallery. You can just choose the ORCID plugin; here I have already installed it, therefore it says I can update, but generally it would ask you to install, and you just click there. And that's it for the installation. Then we can go to the installed plugins, and you will see it under the section Generic Plugins. Here, under the settings, you can now configure whether you are using the sandbox API or the production API. What we would recommend, and ORCID would also recommend, is that you first test with the sandbox API, and when everything is finished you can apply for the public API credentials. As I have also mentioned, if you are using the sandbox API, ORCID only allows you to use Mailinator accounts. Mailinator is a service where you can have free mail addresses; if you are concerned about your privacy and do not want to give your email address to third parties, you can use Mailinator for other things also. Then you configure it here: choose the API settings and give your credentials, you can decide if you want to send emails, you set the logging level, and then that's it. The configuration is simple. So the next thing, the next video I am going to show: this plugin allows authorizing ORCID accounts for users who are inside the system, authors who are users of the OJS system. That is the first example that I am going to show.
So, sorry. To do that, when you have logged into your profile, you have to go to your public profile settings; then, if you have enabled the ORCID plugin, you will see this button to connect your ORCID iD, and you will also see first that there is a small introduction that you can read before going on to connect with ORCID. After that I will also go back again to OJS. Connecting to the ORCID profile: here you have to log in with your ORCID account, and because this is configured with the sandbox API, I am using the Mailinator accounts. What is happening in the background is that the OJS system connects to the API, authorizes, gets a token, and saves it in OJS. This green button is always a kind of sign that it is verified by ORCID. So now I am doing a submission using this authenticated user. That is my test profile. I have to wait. Actually, this is the external author submission, not the other one; the other video I will show you is from the authorized user. Sorry, I have mixed up the videos. So what I am doing is inviting an external author to the system using the ORCID plugin. We can configure that: do I want to send an authorization request to the external author? At this moment it sends an email to the author. So the user can now authorize that OJS system here. What is happening behind the scenes is that the external author gives access to the OJS system so that it can retrieve this token from ORCID. Then you are redirected to OJS. Now when you see the profile of this external author, you will see that his or her ORCID iD is there. Then we are done with the editing of the article and now we are going to publish. Exactly at the moment when you publish it, it sends the metadata from the OJS system to ORCID. So when I go back to the ORCID page and refresh it, you will see that the article metadata is on the ORCID page. Here you see the article title and the article links, and in the metadata you see that this comes through the ORCID membership of the University of Heidelberg, which is also shown as the source. So now I am going to show you the other video, which is the internal author. Because I am already authenticated, I can just publish. When I refresh it now, I will also have it there; it is the same process. So now we have added the author, and we are going to go to the OJS reader interface. In the reader interface you now see this green button, meaning this is an ORCID-verified ORCID iD. Earlier, some historical background: in the OJS 2 versions you could add ORCID iDs without verification. With a lot of consultation with the ORCID organization and also discussions with infrastructure providers, we decided that we do not want ORCID iDs that are not authenticated, and we are thinking about this for all the systems. Actually, I want to answer some of the questions which may come: we are also thinking about how we would ask the authors who are not yet authorized, send them emails and get their permission, so that our systems always have verified ORCID iDs. So I think that was it. Let me see. Yes. As we always do, the next steps, what we are going to do, we are discussing in this GitHub issue. You can have a look there, and yeah, these are again some of the resources directly connected here. Yeah. Thank you very much for listening, and you can ask questions now.
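(A note added for readers of this transcript: the "gets a token and saves it" step described above is a standard three-legged OAuth exchange against ORCID. The sketch below shows the token request against the sandbox endpoint; the client credentials, redirect URI, and authorization code are placeholders, and the real plugin does this in PHP inside OJS.)

```python
# Sketch of the token exchange described above: after the author authorizes
# the journal on the ORCID sandbox site, ORCID redirects back with a code,
# and the platform exchanges that code for an access token plus the iD.
import requests

CLIENT_ID = "APP-XXXXXXXXXXXXXXXX"                           # placeholder member credential
CLIENT_SECRET = "client-secret"                              # placeholder
REDIRECT_URI = "https://journal.example.org/orcid/callback"  # placeholder
AUTHORIZATION_CODE = "code-from-redirect"                    # sent back by ORCID after consent

response = requests.post(
    "https://sandbox.orcid.org/oauth/token",
    data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "grant_type": "authorization_code",
        "code": AUTHORIZATION_CODE,
        "redirect_uri": REDIRECT_URI,
    },
    headers={"Accept": "application/json"},
    timeout=15,
)
token = response.json()

# The response carries the authenticated iD, which is what makes the green
# "verified" icon trustworthy: the iD came from ORCID, not from typing.
print(token.get("orcid"), token.get("access_token"), token.get("scope"))
```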
The goal of this workshop is to introduce the OJS-ORCID plugin. Learn how to use, configure, and publish with ORCID's Member and Public APIs, along with where to find documentation and support. Includes a showcase and demonstration of organizations who are already using the ORCID integration in OJS publishing workflows. Presenters will also discuss the community-based approach of requirements engineering used during the development of this plugin and the distributed development of such by several international partners.
10.5446/51376 (DOI)
So, I'm going to start with a little bit of a personal story about what was going on when I joined PKP in terms of design and usability stuff, particularly what kind of problems we had and a little bit of history about how we got into those problems, and then I'm going to talk a little bit about how we went about changing that and putting in place things that helped us make some usability changes. So, I joined PKP in April of 2015, and I was hired as a software developer, so I wasn't hired as a designer, but I had a background in the WordPress product space, where design and aesthetics are really a key part of selling the product, so I came with a perspective where that was really a critical part of the whole process. So, in April 2015, OJS 3 had just been redeveloped; this was a complete ground-up rebuild of OJS, and it was really meant to be a big leap forward in terms of bringing the application up to a kind of modern standard, and it had just reached alpha, which means this was kind of the first fully functioning version of it, and this is what it looked like. So, it's obvious that no designer was involved in this, and, wait, no designer was within 100 miles of this. It's really quite a dated design here. So, in this screen, I can actually see four different button styles: these are buttons, these are buttons, these are buttons, and those are buttons. None of them actually looked like each other, and none of them actually looked like buttons. So, this is very much how I felt. I think it's how many of you felt at different times with different kinds of OJS software, but what I'm going to talk about is why, or pretty well, why did we get here? So, how did we get to a situation where OJS 3 got a complete ground-up rebuild, but no designer had been involved in this project at any point in the process? There's a few things that I think were at play here. One is that PKP and its software really emerged out of the software culture of the early 2000s, when people really didn't talk about usability, they didn't talk about software being intuitive. And users didn't really expect that either. You got on and you figured it out one way or the other. It's very different today, where if users don't understand it immediately, they get upset. But back then, it wasn't really that important. Another big problem is that within the team, which was much smaller than it is now, there was really a lack of understanding of what UX was, what led to UX problems, and particularly what kind of strategies led to resolving UX problems. And then finally, design itself, as you can see, was treated as kind of unimportant: it'd be nice if we could make this look nice and everything, but we just didn't believe it was that important. The most important thing, though, that contributed to this was feature creep. Feature creep is what happens when you say yes to everything. And that's a problem because... I think I just turned it off. There we go. With feature creep, the more you do, the worse it's going to be. That's just a basic fact of all software. All software has to anticipate and manage feature creep. And PKP just really struggled to do this. And there's a few reasons why that happened. First of all... Good morning. Software for PKP is really a means to an end, and that is to make knowledge public.
So from the very beginning, if OJS did not have a feature, and this feature was necessary to migrate from a closed publishing system into an open publishing system, then PKP in particular felt a real responsibility to make that happen, to make it happen as soon as possible. So it's not the kind of VC-funded startup where you can build the smallest possible app and then just plow money into customer acquisition. It really mattered to them that the software did what people needed it to do. So that led to pressure for new features. The other big thing that happened was that, early on, OJS became a really critical part of the publishing infrastructure of the Global South. And so, when we think about PKP as not just software, but as a mission-driven organization, it was really important to serve that need. And what that meant is that very early on, we had a massive base of users, and that means we had a whole lot of diverse needs and feature requests emerging. And then finally, and this is really important, frankly, PKP didn't have the resources to invest that much in all of these features. So it was able to sustain the product and keep it moving forward, but it didn't have the resources to refine it. PKP knew that they had problems with UX. It was a recurring problem, and so, in terms of time, I'll just go past this, but basically they did a UX review, and the UX review basically said: start over. The problem with that is, if you're a small team and you just invested a massive amount of time in the rebuild, a UX evaluation that tells you to start over is not helpful. In fact, it actually puts greater barriers on making improvements, because all of a sudden it just seems insurmountable. Not only do we not have resources to make iterative improvements, we definitely don't have resources to just start over again. So how did we turn things around? Culture, process, resources; I won't drag it out, I'll just go straight to them. So when I joined the team, I didn't have any professional training in design, I didn't have any academic training in user experience or anything like that, but because of my background, I knew that things were just going off the rails. And so I came to the team and I just said, look, I don't know exactly how we get there, but I do know that what we're doing is wrong, and I can do better than this. And so shortly before we hit beta, I got the team to let me just do a really quick visual redo of the editorial back end. And this really didn't address all of my usability issues; it was really just kind of saying, let's bring at least a basic design overhaul to this to really show what can be done. And this was really important for two reasons. One is that I think it overcame a certain inertia within the team that had set in, especially as a result of that UX review, where it just felt like doing UX was impossible, it was this huge hurdle. But making this change really demonstrated that actually, without an enormous amount of work, you can put something in place, you can actually make a pretty significant improvement. And the other important thing it did was it really established my bona fides within the team. And that allowed me to start advocating for more and more resources to spend time on this kind of stuff. It didn't, however, solve the problem with feature creep. We still had a lot of things coming in.
And it set up a situation where there was a lot of tension within the team, because the team would come and say, oh, we need to do this, we need to do that. And I would say, no, please, please let's not add another feature. We have enough problems, we're barely holding our heads above water, can we please not keep adding more problems? Unfortunately, it turned me into a bit of a droning gatekeeper. This wasn't really sustainable, mostly because I was new to scholarly publishing. So I wasn't really the person to be making those decisions about what features are important and what features aren't, and how do we balance that need to refine what we have as well as serve the population we have. So we went through a process where I felt like I was training the team on what usability and UX were, and the team was training me on what scholarly publishing was. So I tend to think of usability in a fairly straightforward way. The simpler your application is, the easier it is to make it intuitive. The more complex it is, the more confusing it's going to be. And usability, UX work, that kind of stuff: you could maybe shift the needle a bit on this, but the raw material of what you're working with is going to involve a certain amount of complexity or simplicity. So when we look at commercial actors, which is the market I came from, not Twitter or Instagram, but just generally consumer-facing products, they're furiously pushing the simplicity of their application. There's a very limited set of things that you can do on Twitter or Facebook or Instagram or whatever. They do that deliberately because they know the simpler they can make their application, the more intuitive it will be and the easier it will be for them to onboard new users. But when we talk about scholarly publishing, we're in a very different realm. Scholarly publishing is inherently very, very complex. And that means that in scholarly publishing, the applications built for it are, by their nature, going to be a lot more confusing. And scholarly publishing is getting even more complex than it is now. So as well as all the stuff we've been dealing with for years, like DOIs, URNs, and depositing in various places, there are new schemas coming up like CRediT, and we have new problems we're trying to solve, like how do we manage research software, how do we credit research software, how do we manage all these different kinds of review processes that people want to explore, all this kind of stuff. So, scholarly publishing is getting much more complex. And from a product or service design perspective, this is very much moving in the wrong direction. So for me, my perspective is: let's move up into the left. But much of the scholarly publishing community is basically kind of moving things down into the right. So this is very much how I felt for the last four years. And again, it created this tension and it wasn't very productive. I mean, we were in this process of shifting the culture from a yes culture to a no-first culture, and that's been really important, but at the end of the day, it's not enough just to say no to everything; we actually needed help to manage this stuff. So for that, we needed to institute a process. And we didn't go about it with a clear plan of exactly the steps we were going to take. We started out with one thing here, one thing there, and we ended up with what seems like a pretty good process for us. And it started with user testing.
And I won't play the video, but I will share the slides after. If you want to see it, it's a video of some live user testing that we did. The user testing was really important in kind of shifting things: instead of just hearing from other people, we want this, we want that, or whatever, we were actually going out and testing things that we had already done, features that the platform already had. And we were looking at what was already confusing and not very user friendly, and that allowed us to kind of start thinking about priorities. So we would take all of the notes that we gathered from the user testing, dump them into a big spreadsheet, and assign priorities. The top one, which thankfully we don't have any of now, was critical. And what that meant was: this is either broken, or it's so confusing that people aren't even able to use it. And this was actually quite common. We had lots of features in the system that we spent a lot of time and money building, but no one could actually use them because they were so confusing. So we just had everything kind of ranked from the most critical to the lowest priority stuff. This kind of became the starting point for negotiating how resources were allocated, how new development work was being parceled out, where resources were going. There were kind of three questions that we would ask. I mean, I say that like we had a really formal process; in practice it was mostly me asking these questions at first. And it was really: if someone comes to us with a new feature, we want to ask the question, is this new feature actually more critical than some UX priority we have? And when I talk about critical, what I mean is: are there people who actually cannot publish with our platform because this feature is missing? Because we did have UX priorities that were actually preventing people from using the platform. I mean, there are probably people in this room who have sat down with OJS and unsuccessfully tried to publish something. I know when I joined the team, I spent about a week trying to figure out how to publish a submission, and I couldn't, and actually had to go ask somebody. So the second question is: who's actually going to benefit from this new feature? Oftentimes we hear a lot about new feature requests because the 3 to 5% who really need that feature are the vocal ones. They're up on the forums, they're letting us know. But what we don't hear from people is: I do this thing every day, like assigning a reviewer, and it's really annoying that I have to do it this way. So the user testing and the priorities we set there really gave us a way to evaluate how many people are going to use this new feature, versus if we invested time in saving people time during the regular workflow, people would get a benefit from that. And the last question, which is the hardest one to answer, but really involves this kind of culture change within the organization, is thinking of features as having an inherently negative impact on the software. So again, when we think about that simplicity-complexity axis, every new thing that we provide is going to be a tax, like a burden, on everyone who doesn't use that feature. So when we're thinking about what features go into the core, we actually need to evaluate not just who's going to use it, but who's not going to use it, and how this is going to impact them. So, yeah, it's a completely different environment today, thankfully.
Once we had this process, we were able to see iterative gains in the usability work, and that helped make the argument for devoting more resources to the team. So today we've got Israel, who you'll hear from next, who is going to be coordinating UX research and evaluation. The community is heavily involved in the user testing, both running the testing and participating in it, as well as in this prioritization work, which is really important. Community partners have really helped us identify, not just the 20 things you want from OJS, but, if you can only have one or two, what are the one or two things you want us to work on first? And that's been really important. And as I said, the community's been really good about that. It's hard to be told we're not going to do something that you want us to do, but the community's been really understanding about us figuring out our priorities and then making sure that we're focusing on them. What we don't do very well is actually communicating those priorities back, but we can work on that. And today we've got about half a dozen people involved in research, design, implementation, and iteration of UX. And that's a huge improvement, because when we started it was really just me doing stuff. Thinking back to Tara's talk yesterday, it's really important that we have more voices involved in this process, and that it isn't just me desperately trying to hack things together. And most of that half dozen are in this room today, actually. And then the biggest thing is that nowadays, when we're thinking about new features, the design and UX perspective is actually involved fairly early in that process. It used to be that features would sort of arrive and then we would desperately try to make them usable in some way. But nowadays we're actually there while the feature is being designed, so we can say, maybe it shouldn't even be a feature in this form, let's think about it another way and be more iterative. And that's been a really important part of the process. I'll skip this slide for time, but I just want to say: if you're in an institution that maybe doesn't value UX, or doesn't feel it has the resources to do UX, I think one of the things you can learn from us is to start small and create some kind of proven impact, and that will give you leverage to start actually negotiating the resources to do this sort of stuff. So that's me.
Software that provides a good user experience (UX) is often easier to promise than deliver. For years, PKP's flagship journal publishing software, Open Journal Systems (OJS), received poor reviews for its dense user interface (UI) and low design quality. To turn things around, PKP had to say "no" more often, manage an overwhelming stream of feature requests, devise simple heuristics to set priorities, and seek out more effective ways to get user feedback. Learn about particular challenges faced by PKP as the providers of a distributed platform and how they're working to balance the needs of a diverse set of users. Includes advice for those who want to prioritize UX work on a budget but don't know where to start, or who want to advocate for better UX development processes in an organisation that doesn't yet prioritize it.
10.5446/51378 (DOI)
Thank you. Hello everyone. I'm Dennis. I'm from the University of Bern. I'm a subject librarian there, and I'm currently in the process of setting up a new journal where we use ConTeXt for typesetting XML. I will just give you a brief overview of how this works. Just to let you know, I have a couple of code examples, and I'm not sure if you can read them in the back, so feel free to come closer if there's need. Okay. The context is this: we have a print journal, Judaica, that we had before, and we are converting it to an e-only open access journal that we host on our Bern Open Publishing OJS platform. I talked about it briefly in the lightning talk yesterday. And the task is: we want a single-source workflow. We want different output formats, PDF, XML, HTML, with JATS as our production format. So we need to go from JATS to all these different output formats. We want a high-quality typeset PDF, but no manual typesetting. The requirement is also to have PDF/A and no costly software, which is supposed to be free as in beer and in speech. So it must be open source, reusable, so to say. For the HTML and XML we have the Lens viewer and XSLT. But the big question was: what do we do about the PDF? How do we get from XML to PDF, meeting all these requirements? So that's the workflow I showed you yesterday, and now we're concentrating on this last step here, going from JATS to PDF. So let's meet ConTeXt. ConTeXt is a macro package based on TeX. Some of you might know it: TeX is a markup-based, plain-text typesetting system initially developed by Donald Knuth in the late 70s and 80s. So it's kind of a dinosaur. Early on, some people decided that this system is not really usable by ordinary authors, or is really complicated, so they started developing macro packages to make life easier for authors. The most widely used of these is LaTeX, much used today. You will probably have heard of it or know it or have used it; it was initially developed by Leslie Lamport in the 80s and is still maintained today. And then there's another macro package. They started developing it in the 90s, and it's still developed today by a Dutch company called Pragma ADE, Advanced Document Engineering. It has a slightly different approach than LaTeX. Yeah, that's just to introduce it. So why do we use a TeX-based solution? First of all, starting from the bottom: it's open source. It's multi-platform; it works on everything from a Windows PC to a toaster. It's customizable: if you know how to program the system, you can do really a lot of things with it. And last but not least, it gives us high-quality typesetting. That means we have micro-typographic extensions, margin kerning, font expansion, tracking, all that typographic stuff that programmers usually don't care about, but readers do. And it has the Knuth-Plass line-breaking algorithm to distribute the white space in an optimal way across paragraphs. You will see these things in the output, or if you don't see them, that's actually a good thing. So ConTeXt sits on top of that. The good thing here, and it's always a bit of a comparison to LaTeX, because most people ask me, why don't you just use LaTeX for it? Well, this is why: ConTeXt gives us a consistent interface. It's not like LaTeX, where you use packages for all the tasks. If you Google for LaTeX-based solutions, it's always use this package, and then you have these commands, and the commands always differ between packages. It's not really consistent. ConTeXt is developed by one company, so it's more monolithic.
It's one interface, and you have commands that are actually predictable. If you know how to style one element, you will probably also know how to style another. Then PDF/A is possible, which is a big selling point, and we can process XML just out of the box. No XSLT required, no additional software. You just write your style sheets in the ConTeXt language, and then you go straight through. So this is what it looks like; it's a sample ConTeXt document. Those of you who know LaTeX will certainly recognize it. We have backslashes all over the place, braces, brackets, so it's more or less more of the same. The differences are subtle. For example, you have a \starttext command down there, on line 17, and on line 29 a \stoptext, instead of \begin{document} and \end{document}; small differences, the syntax is slightly different. But the other thing is, if you look at the preamble over here, you do everything with these setup commands, and they work more or less the same for each element. This is very nice. As I've said before, with LaTeX you download packages, and you adjust the output using commands that are used only in the context of one particular package. Here you can do everything with the same commands. That's what I want to say. So what do I need now? First we need an XML input file, obviously. Second, we need ConTeXt style setups, like those things up here, just many more of them. And then we need the mapping, to map XML elements to ConTeXt. This is actually quite similar to what you would do with an XSLT template, just done in one tool. So we have here a sample minimal JATS XML article. We have the front matter over here, which is collapsed; those of you in the front can actually see it. Then we have a section element, paragraphs with italics, list elements, a bullet list, a display quote down here, another section heading, and footnotes as well. So what do we do now? In our setup file, we have a minimal setup like this. It's just an excerpt, it's not everything. But what we do is we start a new setup that I call the xml:jats setups, where I first say: which elements do I want? None, so that nothing comes in that I don't want. And then I select all those elements that I really, really need. So I start with article, front, body, back, and I assign those to corresponding additional macros or setups that have the prefix xml:, so that, for example, xml:body will render the body. At the end, I have to register this setup, and then we can go on. So let's start with the body element. We have, again, a setup, xml:body, and the main thing here is that we just flush it through: we take the whole element and pass it through to ConTeXt to handle it. But before that, we have an additional macro to handle the front matter: title, author, title page, ISSN number, whatever you want, you can just pass it through here. Next thing is the paragraph. Again, pretty simple: xml:p, flush it through, quite simple. But the interesting thing is that here you can use a filter, for example, to check if there's a language attribute, in case we need to change hyphenation patterns. So we check that, and if it is there, we use another command to pipe that in somehow. I won't show it in detail, but it's just a small command that checks what is there and maps it into ConTeXt syntax. At the end, we manually add a paragraph break to start the next paragraph afterwards. We do the same thing with italics and bold.
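As a rough illustration of the mapping just described, a minimal set of ConTeXt MkIV setups for a JATS file might look like the sketch below. This is not the speaker's actual style sheet; the setup names and the element list are illustrative only.

```
% Ignore every element by default, then opt in the ones we handle.
\startxmlsetups xml:jats:setups
    \xmlsetsetup{#1}{*}{-}
    \xmlsetsetup{#1}{article|body|p|italic|bold}{xml:*}
    % a real style sheet would also map front, back, sec, title, list, disp-quote, fn, ...
\stopxmlsetups

\xmlregistersetup{xml:jats:setups}

\startxmlsetups xml:article
    \xmlflush{#1}
\stopxmlsetups

% The body: the speaker first calls an extra macro that typesets the front matter
% (title, authors, ISSN); here we simply flush the content through.
\startxmlsetups xml:body
    \xmlflush{#1}
\stopxmlsetups

% Paragraphs: flush the content, then force a paragraph break.
% A language attribute could be inspected here, e.g. with \xmlatt{#1}{xml:lang}.
\startxmlsetups xml:p
    \xmlflush{#1}\par
\stopxmlsetups
```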
Just take the element, flush it through, wrap it in a group with these braces, add the necessary commands, emphasis or bold, and you're actually done. You do this with every element you have in a usual JATS file, and you end up with this. In one file you define the article layout, this is up here, the front matter I've been talking about, and then we style the other elements, and it's actually a rather painless workflow once it's set up. So, your question: should I use this? Why should I use this? The answer is, obviously, well, maybe. It works, so that's a yes. It meets our requirements. So if your requirements are similar, if you want PDF/A, no additional tools, nothing to pay for, unlike Antenna House or those rather cost-intensive solutions, that's fine. But of course, it's another tool in the tool chain. Someone needs to master it. You need to have someone who is familiar with these kinds of things. I wrote my PhD thesis in LaTeX, so I'm actually coming from that world, and I had to adapt to the XML side and bring that over. But if you have someone who's used to processing XML files, it's maybe a different story. Yeah, these are the two drawbacks, I think. You need to have someone who really, really knows these kinds of things, who's familiar with them. And you have another tool which might break. I'm not saying it will break, but it's another dependency, so to say. Yeah, that's it from me.
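Under the same assumptions as the sketch above, the inline-element mappings and the final processing step described here could look roughly like this; the file name is a placeholder.

```
% Inline JATS elements: wrap the flushed content in a group and switch the style.
\startxmlsetups xml:italic
    {\em \xmlflush{#1}}
\stopxmlsetups

\startxmlsetups xml:bold
    {\bf \xmlflush{#1}}
\stopxmlsetups

% Finally, load and typeset the JATS file.
\starttext
    \xmlprocessfile{main}{article.xml}{}
\stoptext
```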
ConTeXt is a typesetting system based on TeX similar to the more widespread LaTeX, and just like LaTeX, ConTeXt produces documents that meet the highest typographical standards. Unlike LaTeX, however, ConTeXt can also be used to process XML files without prior conversion or external tools. This talk will provide a brief introduction into the ConTeXt typesetting system and show how ConTeXt can be used to typeset XML source files by mapping XML elements into its own format.
10.5446/51380 (DOI)
Hi, everybody. I hope everybody is as exhausted as I am. That will really help you get through this incredibly stimulating conversation about Crossref. Unfortunately, my co-presenter, Susan Collins, cannot make it today; she's not feeling well. So I'm going to talk for the next 20 minutes about where PKP and Crossref have been working together to make services for Crossref members better within OJS. Maybe just a quick show of hands: how many folks here are using Crossref with OJS and PKP? Amazing. Great. So I have some good news and some admissions of our chronic and constant failures. First of all, PKP and Crossref are BFFs. Oh, cool, a little square next to the emoji. Always good. We've been at this for a long time. I found out the other day that the first Crossref plugin for PKP and OJS was developed in 2009. That is some time ago. Crossref will be celebrating their 20th anniversary next year. So that's 10 years; that's a pretty long time. And through that time, they've been consistent supporters of PKP. That original plugin they developed was made from the ground up, from something that did not really exist at all, and was a huge boon to folks who wanted an easier way to submit their DOIs within OJS, which was not always easy. Since then, we've managed to move to a new deposit API. Those of you who are on OJS 3.1.2, and James looked mad at me there for a second, 3.1.2 have access to the new deposit API, which works a lot better than the old one. Actually, just to make myself feel bad: how many of you have tried to submit a DOI using OJS and had it stuck on submitted in the status? Yeah. Okay. Great. I'm surprised it's not more of you. So that's addressed in the new deposit API; that is no longer a problem. Just update to 3.1.2, and that won't be a thing. We also added reference linking, which is really great. And we added funding, so you can actually pull up a funder registry and add funding metadata to your articles. And, what database is that? Who provides the database, James? James doesn't remember. But it's provided. So it's actually like an autocomplete for the funder registry, and you can put in your funding amount, and those things get submitted to Crossref as well. And reference linking, if you're curious about how it works, please flag me down. It is not obvious at all, which is part of what I'm going to be talking about while I'm here. So in 2014, PKP became a sponsoring organization, which means that not only do we cover the deposit fees and registration fees for some of our hosted clients, we also provide free Crossref membership and free Crossref submission to journals in a list of low-income countries. So we have roughly 37 publishers, organizations, or clients. I say roughly because some of those are full organizations and some of them are individual journals, so it's probably closer to 50 or so journals that we're totally sponsoring, and 37 unique prefixes. And 17 of those have waived deposit fees, so we're offering Crossref services to 17 individual journals. And so far, I believe we're at about 15,000 DOIs. That's pretty good in a five-year span, I would say. But what I know is that many of them have probably tried to deposit DOIs and not been able to. And that's been a major issue for Crossref members on the OJS platform. So this year, we've really entered into what I've referred to as a bit of an emotional support partnership.
Dealing with the Crossref folks and dealing with PKP folks, we both have sort of an open support community. So if you go to Crossref or you email Crossref, whether or not you're a Crossref member, they will happily try to answer your questions. They will do so whether or not you're giving them money, because they want the metadata to be good and they really care about the quality of the service. And just this year, they actually opened up a forum at community.crossref.org, which looks a lot like our forum, and their support people are answering questions much like we do on our support forum. But there's also cross-pollination. So if there's a question about DOIs and OJS that they can't answer, they tag me in and I roll over and answer those questions. And then if somebody asks those questions on our forum, I get tagged in and I answer them. So we're trying to help each other in a big way here. But it's also really nice because now, when I get to talk to somebody from Crossref, I have a little bit of a shoulder to cry on. I no longer have to feel completely crazy on my own. And this is especially true for the folks at Crossref. Isaac Farley in particular, he's sort of their head of support; I know he really needed someone to talk to about problems. Mostly he tells me and I just smile and nod, but it's really good for him. So we're working together to spell things out in a clearer way. So just after the Library Publishing Forum in May, we drafted an MOU between Crossref and PKP to start up a Crossref and PKP working group. It's really funny: in the Google calendar, theirs always says PKP working group and ours always says Crossref working group. It's very clear that we've really thought hard about what we were going to call this thing. So what we've done is we've got this support system that I described, where I help Isaac and Isaac helps me and we communicate openly. But we've also started to have a real conversation about the development of Crossref features and functionality within PKP software, and where the stress points are for our users. We know from the Crossref side that probably the biggest issue is that people can't upgrade to 3.1.2, or they've had a hard time upgrading. And obviously we can't be there all the time when somebody has a heavily customized version of OJS that isn't easily migrated. But we do now have support documentation on our docs hub that addresses the process of migrating, so that's at least a start. But increasingly what we're doing is addressing the stress points within the plugin itself. One very clear example of this is that the DOI plugin exists in a completely different place than the Crossref plugin, and there's no mention of DOI registration agencies on the DOI plugin. So we know that lots of people start their journal and turn on the DOI plugin, and then they set their suffix creation pattern, and then they hit yes, and everything starts getting assigned DOIs, and they go: I did it, I have DOIs. But they're not registering them with anything. And we don't even indicate to them that they ought to, or should, or that it's important. So sometimes we get people who say, yeah, I've got something like 2,000 or 3,000 DOIs I've deposited. And then Crossref goes: you're not a registered user. None of this has ever been working.
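To make the distinction concrete: a DOI that has only been assigned inside OJS, but never deposited, is unknown to Crossref. A minimal sketch of how one might check a DOI against the public Crossref REST API is shown below; the DOI used is a placeholder, and this is not part of any PKP plugin.

```
import json
import urllib.error
import urllib.request

def is_registered_with_crossref(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False if it was never deposited."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            record = json.load(response)
            return record.get("status") == "ok"
    except urllib.error.HTTPError as err:
        if err.code == 404:  # Crossref has no record: assigned locally but never registered
            return False
        raise

# Placeholder DOI for illustration only
print(is_registered_with_crossref("10.1234/example.2019.01"))
```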
So instead of having those things be so opaque across the system, where we added these features over time without really thinking about it, we're going to start trying to roll this into the workflow a little more intentionally. Secondly, we've been talking about development and distribution of PKP-specific educational and support materials. Over the last year, Crossref has been working on this just humongous support curriculum. It's called the Crossref curriculum. When they shared it with me, it was so big it wouldn't fit in one Google doc; I think it was like 400 pages. It's just this monster document about all of the things you can do with Crossref services and the Crossref DOI. And in it they included specific references to OJS and PKP documentation. So we're mutually supporting each other. They don't have to maintain PKP or OJS information and we don't have to maintain their information; we can play off of one another and make sure we're getting the message correct. I'm not sure what collaborative research means; Susan wrote that, and since she's sick I'm going to assume it was related to a fever dream she was having. And then, development of future areas of cooperation. So obviously we have an investment in each other. I was just at the Crossref LIVE event in Amsterdam last week, where we talked about a lot of ways that we can help serve the OJS community and the PKP community in general, and we're trying to make sure that we're open to a broader conversation about any place where we're maybe missing something. A good example would be Crossmark. We've never had a Crossmark plugin that's really worked. And at this conference Crossref decided that they would no longer have fees for Crossmark. Crossmark is going to be free to use if you're already paying your deposit fees; you don't have to pay extra for Crossmark anymore. So obviously the pressure is now on us to make sure that our Crossmark services work, which means we have to build a plugin, and I'll get to how that might happen in a second. So, the Crossref curriculum came out, and I would point this out: it's probably going to be released in spring 2020. I said came out; that was past tense. It will be coming out in spring 2020. And right now, at the PKP documentation hub, we've added fresh documents for people who are running OJS 3.1.2. So if you're fully upgraded, it includes documentation on the Crossref deposit, Similarity Check, and, what else is there, reference linking and the funding metadata; all of these are covered in the Crossref documentation. We've also been talking about code development. This is relatively new over the course of the last few months. We've been having conversations with Crossref where they say to us: you know, you guys probably host more journals on your platform than Wiley has. There are many. And a lot of our members are having problems with submitting their content; how can we solve that problem? And when James and I were on that call, because we're very Canadian, what we heard was: is there a way that we could meet together to have a series of other meetings about a way that we could find a way to make this right? And what Geoffrey Bilder at Crossref was actually saying was: who do I give the money to so that this will happen? So over the last month or so we've basically started a conversation about what problems we're going to solve. And this partnership, again, is super important.
At this Crossref event, I was sitting across the table from a Wiley rep who was talking about how expensive DOIs were. And I remember being really mad. I thought: you have a lot of money, and this Crossref deposit fee is probably not breaking the bank for you. But for some journals, a membership to Crossref, it's like $275 a year American, and that's a big deal. That's a decision about whether or not they can even be part of the service. So we want to make sure that this is a good value proposition for these journals and that the service works really well. So, the next steps: we're going to throw money at the problem. We have a proposal for code development which more or less came out of the sprint. If you were here in the first two days, it's going to be part of the sprint announcement, and we'll have sort of a position open up, but we will probably be hiring a person to solve all these issues. And then secondly, we're going to have a conversation about the metadata in OJS and Crossref through a project called Coalition Publica in Canada. Some of the issues that you may have run into using DOIs in your own journal: for example, you know that the name fields in OJS are required, first name and last name. I guess in 3.2 you'll be able to enter one name. But what I've learned working on some journals, like those of the American Library Association, is that in order to get through a form, they would write first name "American Library", last name "Association". And I don't know how you feel about the status of corporations as people, but I don't think they are. So we need to add a corporate name field, and we need to make some more adjustments to make our metadata a little bit more responsible here. And that's a big part of this conversation about metadata review: making sure that we're capturing what Crossref needs to more accurately describe the work that you're putting in. Predominantly, we just want you to know that we're committed to making life easier for Crossref members. I know that a lot of people rely on this and you spend money on it, so you want to make sure that it works well, and we're hearing those things. So if you have feedback, feel free to flag me down. I was going to say I'm happy to hear it; I'm not always happy to hear it, but I will listen to you and I do think your comments are important. We want to make sure we catch those friction points. We want to know where things aren't working, and more importantly, Crossref wants to know where things aren't working, and we do have a really good friendship with them. So I think that's it. Great. Thank you.
Earlier this year, PKP and Crossref formed a working group to help address some of the issues OJS users encounter with their Crossref membership. Join us for a presentation on our findings to date - a report to the PKP community, of sorts - and a reflection on the cooperative effort and where we hope to go from here. We're keen to gather audience feedback too - what should be top of our agendas? Where can we add the most value for all? Our belief is that this working group will foster not just an on-going relationship but better support for both the PKP and Crossref communities.
10.5446/51384 (DOI)
I work in something that we call Septentrio Academic Publishing, which is a service for editors and journals connected to my university. We have been using OJS since 2003, and I like OJS, just to have that said. I'm also a member of the SPARC Europe board and a member of the advisory board of DOAJ, and none of them have any responsibility for what I'm saying here. It's me who's talking, not the organizations I belong to. I'm an economist; I came from the administration to open access, and actually started thinking about open access 23 years ago. This presentation is based on a study that I and a colleague at the University of Bergen made after the release of the Plan S guidance in November last year. These guidelines have later been revised and softened, we think maybe partly because of our study. Many requirements have become recommendations, but they might become the harsh reality after Plan S re-evaluates itself in 2024. That is why this presentation. Let's see if we can make this work. OJS is a very powerful publishing tool. It might not be the perfect one, but it is not in OJS that the problems lie. It's designed for electronic publishing. Something wrong? No? Okay. It has good workflow capabilities. It has functionality that supports open access publishing. It has plugins for communication with open access services and infrastructures, and a wide number of other plugins. It's much used by smaller and scholar-led publishing services: stand-alone, university-based, you name it. I think it's said that there are more than 10,000 installations, at least 10,000 journals using OJS. It's the most widespread software. It's free, which is one reason it's so widespread. But how does the open access landscape look? Here we get a very large number of numbers. You don't have to read them all. The gist of it is that open access publishers are many, but they are small, measured by the number of journals they publish. There are some problems with these kinds of statistics, which I won't go into, but this gives a rough picture of reality. We can summarize the table like this: there are a great many publishers publishing one or a few journals, and then there are a few publishers publishing a large number of journals each, and there's not very much in between. Can such small publishers be competent when it comes to publishing and technology? I have no doubt they are competent when it comes to the content they're publishing; that's quite another matter and not up for discussion here. Can they be efficient in an economic sense? Talk about economies of scale. Small publishers often also publish small journals, when you measure them by the number of articles they publish each year. My impression, after having worked with this for many years, is that the answer to both questions is no, they can't. I could stop there, of course, but let's look at some more things. What are the problems for editors? Well, they're used to the paper world, and they have a lot of thinking that needs to be re-learned. Most editors of small journals still think that they publish a journal issue. They don't understand that what they publish are individual PDFs that need all relevant information embedded in the PDF. There's no cover where you can put the author bios or things like that. They have a limited understanding of what open access is, and of the open access infrastructure. They think that open access is about putting content on the internet. No, it isn't.
It's about putting content into an infrastructure, making sure it's visible and available where the users look for content. And one thing you can be very sure about is that they're not going to your journal looking for content. So all the energy spent on the design of journal pages: forget it. Put the work into getting content into the mechanisms that ensure distribution. The home pages of the journals are rarely visited by people looking for content. Editors are not technology-savvy. I mean, there are exceptions, but they are exceptions. And they have no idea about economics, which is also why most of them don't have any funding either. And their understanding of economics, from when they met the university finance department for the first time, is that universities don't really understand economics. So what can we observe? Well, one thing we can observe is that there is a huge number of open access journals not listed in the Directory of Open Access Journals. Walt Crawford estimates nearly 6,000 of them. A listing in DOAJ is, for one thing, a sign of acceptable scholarly quality and open access quality. And it is also a tool for the distribution of metadata, making the content visible and read. So if you're not there, you're doing a disservice to your authors and your content. But those listed in DOAJ are still weak on a number of quality aspects, so a listing is not the end solution; it's a step on the way. If we look at the original Plan S requirements, and we look at non-APC publishers and APC publishers separately, a bit for technical reasons, we see that the small non-APC publishers meet 1.1 of the four technical criteria on average, which is truly bad. The large non-APC publishers meet 3.5 of the four technical criteria, which is good. If you are an APC publisher and small, you're still not good, but you're much better than the non-APC publisher of the same size: you meet 1.6 of the criteria. And the large APC-based publishers meet 3.8 out of the four technical criteria. There is a world of difference between the small and the large publishers, and there is a definite difference between the APC-based and the non-APC-based publishers. You'll find many more numbers in our article; go read that afterwards. If we look at the policy side of it, the picture is different. The small non-APC publishers meet 1.9 out of the three policy criteria, while the larger ones actually meet fewer criteria on average. For the APC publishers, the smaller ones meet 2.4 out of four, while the larger ones meet 3 out of four policy criteria. So when it comes to policy, the small publishers are not at the same disadvantage, at least not at the same scale as with the technical criteria. So what are the technical problems? There's a lack of DOIs, probably because many editors don't understand what a DOI is and why they should have one. The lack of DOIs reduces the dissemination of metadata. OJS helps you with assigning DOIs and submitting them to Crossref, but you need to understand why and how, and you have to have the money. The bill is not big, but many small journals don't have any finances at all: zero income and zero ability to pay bills, even small ones. Sorry, there's a hole in the floor here. Not big enough to escape through. Sorry. Then there's a lack of long-term preservation arrangements. That could look very difficult, but PKP has, as far as I know, offered a very cheap and simple solution integrated with OJS.
There's no machine-readable full-text format, and this is a problem I can really understand, because XML is not for amateurs. Making XML will be costly in some way or other, and it will need financing. That will be a huge problem for small journals. And the last technical criterion: there's a lack of embedded license info in the text files. Nearly 50% of journals lack this, and it is not difficult to do, but you have to know that you should do it and how to do it. So we have a competence problem. Publishing entails a number of important competencies, and scholar-led publishing is, as the word says, led by scholars. They are very competent, but probably not in publishing. And there's a huge cost associated with acquiring the necessary competence, not necessarily in money, but in time. These people are often overworked and don't have the time to invest, and because they have too little competence, they don't even know they should invest time in this. The average open access journal is APC-free, published alone, and has few articles. And that means that the cost of competence has very few articles to be divided between, which means that the model is very expensive per article. And there is no income to buy competence with. And you should remember that not being competent also has a cost: it will reduce the distribution of the content, and that is a cost to authors and to scholarship in general. The future: Plan S has relented on this part, and I know they have read our article. The final criteria were less demanding, but there are clear signs that the softened criteria will be toughened up from 2025. Most of them are already recommendations, which means that very few small scholar-led journals will be compliant six years from now, and six years is very soon on the time scale where we work. It's a geological time scale, I sometimes think. Plan S might also grow to become more important, because many editors can today say, well, Plan S doesn't mean anything to us, our authors don't have Plan S funding. Well, that can change the day your friendly neighbourhood research council decides to join Plan S. You can have a Plan S problem overnight. So we have to plan for it and do some thinking about it. This could be the demise of scholar-led publishing, unless something is done. One solution could be more APC-based scholar-led publishing. I mean, APC is not the same as profit-seeking; it's a way of getting income. That could allow outsourcing of competence-demanding activities, for instance typesetting. I mean, we have the world's most expensive typesetter: it's a professor of Spanish, a very productive one, too. He should be researching and writing more articles. We need more and better tools, especially when it comes to XML. For many of the other problems, we have the tools; we don't have the editors that understand how to use them. I think that one answer is to create larger publishing entities that are more resilient. A stand-alone journal will have problems if the editor dies or the tech support guy changes jobs. Larger publishing entities allow the competence cost to be spread over more articles; that is part of the economies of scale. You also get more experience, and you do things more efficiently. What scale is needed? Well, my guess is that you won't really have a future unless you publish at least 50 journals. You need some institutional willingness to provide better funding and to enter into inter-institutional publishing agreements to create large entities.
That is my solution, actually: more money, larger entities. I think that's the solution. Questions we'll save for later. Don't forget to look at the Munin Conference; this year's conference is next week, but there will probably be one coming up next year, too. We'll be there. You should come. Thanks for listening to me.
Small journals - published as stand-alone journals or by very small publishers - form a large part of open access publishing today. But small journals and publishers seem to have problems with their publishing competencies. OJS provides tools, but not the competences needed for exploiting the possibilities that lie in OJS. So what is needed for small journals to become better tools for their authors?
10.5446/51385 (DOI)
Hello. I'm Clinton Graham and I am a systems developer at the University of Pittsburgh. So I don't have a background in institutional advancement, and I don't have a background in public speaking, so I'm stepping out in both of those areas here. My talk is Sharing the Wealth: Opening Funding. And what I hope to do is to describe a way that, by sharing the wealth of the things that we are doing, we can fund, through grant writing and award nominations, the innovations that we want to do locally. So this presentation is a pitch. It's not a source of new funding. I'm not offering my own money, and I am not hoping that you will steal grants from PKP. But I'm suggesting that if we share metrics and stories of the cool things that we're doing together, we can together create easier ways to write grants. So the background for this talk is a sprint that we hosted in Pittsburgh. A PKP sprint is a face-to-face event where developers and editors and industry partners all come together to work to improve our products. We completed a sprint just prior to this conference here in Barcelona; we worked on documentation, on translation, on user experience, on architecture and coding. And I wanted to bring a sprint to Pittsburgh, which would have been the first in the U.S. I imagined up to 40 participants coming together and working together for three days. But these things cost money, and we don't charge anyone to participate in a sprint. So I dreamed big, and I wanted to provide funding even for those who didn't have an institution that would be able to send them to Pittsburgh to work with us. And so I made a budget for food, for housing, for travel and for logistics. My university offered a local micro grant where Pitt sought to enhance its global impact. The provost offered 50% matching funds for a project with goals of interdisciplinarity, collaboration, and institutional and community impact. So, can anyone think of a global organization that might be collaborating across disciplines to impact multiple institutions and communities? I wrote a grant for PKP and the sprint. It was rejected, unfortunately. Pitt also has an internal award which highlights and celebrates community-oriented partnerships where there's mutual benefit for everyone, including benefit to the broader public good. A cash award of $2,000 was available. So can you think of any international partnerships which are built on mutual benefit and community engagement? I nominated our relationship with PKP. It was also rejected. But despite being unsuccessful in both of those funding attempts, I got some time to think about the process of making these requests. And I was particularly interested in all of the numbers and examples that I had gathered as supporting evidence within the applications. So my applications both referenced the work we were doing locally at Pitt and referenced the work that PKP is doing at large. And I think I worked way too hard to find all of this. I want to make it easier to collect these statistics, these metrics, these stories. So for all the cool stuff we are doing locally and worldwide: how do we document it and how do we share it? One example of a category that I think would be interesting to quantify is interdisciplinarity. We work together as collaborators across disciplines. We're integrating the contributions of computer programmers, of academic and professional publishing staff, of industry partners, of nonprofit partners, and all of that goes into interdisciplinary work.
So for the PKP sprint at Pittsburgh, our event supporters included faculty from engineering, from health and rehabilitation sciences, and from sociology. And when you add in the perspectives of the library publishers, the librarians themselves, and the technologists, you get this robust story of collaboration. So what would this look like statistically? I want to be able to find counts of contributor disciplines. What were the interactions that built the software, that built the partnerships? I want to be able to find counts and metrics on the published disciplines. How were the publishers, or the works themselves, interdisciplinary? I want to be able to tell the story. So for example, at Pitt we have a journal called LESLI, which is an interdisciplinary journal for linguists, computer scientists, psychologists, psychiatrists, attorneys, law enforcement, security executives, and intelligence analysts: an intentionally interdisciplinary journal. Ledger is another journal we publish. It is an interdisciplinary journal about blockchain and cryptocurrency technology, asking questions at the intersections with mathematics, computer science, engineering, law, and economics. Another common funding category is impact. We can highlight the impact that PKP is having on academic publishing in general and on open access broadly. We have a broad install base around the world. We have PKP doing its own research and open access initiatives. There's also impact within the organization: in our sprint funding application, I cited PKP as a reason that the University of Pittsburgh is now considered a leader in the field of library publishing. And that interdisciplinary collaboration itself is an impact on the institution. But if you're a funder, maybe the most interesting thing that you're looking for is the impact within the community. And so for our award nomination, I described how our publishing programs amplify local and diverse and underrepresented voices within scholarly communications. At Pitt, we have journals that are at work in areas like cultural studies, health disparities, and underserved communities. So what would that look like in terms of statistics? How many journals have we collectively flipped from print or subscription-based publishing to electronic open access? I want to be able to easily collect metrics like that, or traditional metrics, bibliometrics, altmetrics. I want to be able to get access to them and describe the way that our articles are being shared, cited, and read. I can tell stories about the August Wilson Journal at the University of Pittsburgh, which is a new journal on the scholarship of a local African-American playwright and his legacy in Pittsburgh. Or the journal Anthropology & Aging, which is a cultural and historical study of the elderly across cultures and environments, and the health disparities that come with those environments. Another area that I had experience with was global reach. And this is easy to brag about with PKP, because we have a strong history of being multilingual, both in the translation of the user interface and in supporting the multilingual entry of metadata and disseminated content. Recent work, even at the sprint this week, will make that so much easier to quantify. The software has also been addressing challenges of internationalization when it considers cultural expectations for names, global identity disambiguation, legal requirements, and cultural conventions, and PKP is distributed worldwide. So for a statistic: was OJS 2 available in 34 or 36 languages?
Depending on where you look, there are different numbers, probably because of the completeness of the translation. And how complete is our coverage in OMP 3 and OJS 3? We'll be able to tell that much better based on the work of the sprint and the move to a new translation platform. But let's also refine our installation counts. PKP collects information about how many installations there are around the world, but that's not easy for any of us to look at and cite within our own work. At the University of Pittsburgh, the Bolivian Studies Journal publishes articles about Bolivian culture and literature in English, Spanish, and the native languages of Bolivia. My university is also known for its non-Western publications and collections, and so the Journal of Japanese Language and Literature provides a scholarly forum for Japanese literature, linguistics, teaching Japanese as a second language, and Japanese culture. All of these things can be easily cited in a grant application. Final category: partnership. This is, I think, the core of the Public Knowledge Project. We operate on principles of the open movement and free software. We're based in an ethical position. All the work is done in collaborative and inclusive ways. Both the software and the research that our software enables are oriented toward the public good. Our shared governance is also an example of partnership. The technical committee, the members committee, and the advisory committee are all made up of representatives of the partner organizations and of the community. So for statistics, we can gather the list of these development partners and strategic partners, and describe those partnerships. We can share numbers on participation in the sprints by region, by discipline, by organization, to illustrate the ways that we work together. I like to use the technical committee as a story: we have seven different institutions, plus community members, from six different countries, coordinating across nine time zones. Our sprint last year in Heidelberg featured 17 countries working together on eight technical and non-technical projects. I know we had more than that diversity in the sprint here this week, but I don't have the numbers, and I will have to pester Marissa to try to find them. I haven't covered all kinds of other categories that might be grant-worthy, such as return on investment or innovation or sustainability or accessibility, because I'm interested in your feedback on what might be helpful. I think that by deliberately sharing these metrics and these stories, we'll all be able to quickly pursue small grant opportunities as they become available. So what if we collected these stories and shared them? What if we collected these metrics and shared them? That's my pitch. I know what makes Pitt's relationship with PKP special and I can write that into a grant, but I also want to be able to cite all of the work that you all are doing as well. I want to be able to say: this is what we're supporting, or this is what we're a part of, or this is what we aspire to. And I think that if we have that available, collectively we can strengthen our storytelling, we can strengthen our bragging, and we can strengthen our grant writing. And maybe my next grant won't be rejected, and maybe yours will be accepted too. So if you think that pitch could work, I will be here for your feedback.
If you think that there are things that I'm overlooking, because I am not a grant writer by profession, although I have had successful grants, I swear I have, just not this summer, then I'm interested in what pitfalls you see in this idea. You can also reach me at any of these handles: ctgraham, or ctgraham at pitt.edu. Thank you.
Based on the story of organizing the PKP Pittsburgh Sprint, this is a pitch to share metrics and narratives on our community's interdisciplinary collaboration, community / social impact, international participation / global reach, partnership reciprocity / mutual benefit, and other fundable efforts. This presenter hopes these shared details can help individual community members to craft small grant proposals and award nominations in support of local publishing efforts.
10.5446/51387 (DOI)
I am here to talk to you about our work towards sustainability in our open access journal publishing. I used to work for the OsloMet University Library, where our publishing services are based; I have since moved on to a new role, but I am a librarian, and I have been working with open access for, well, not quite my entire career, but at least for the last 12 years. Just to give you a quick view of where I'm from: OsloMet, Oslo Metropolitan University, is a university in Norway with about 2,200 employees and 20,000 students. We are the third largest university in Norway. Our history is as a vocational college educating teachers, nurses, engineers, librarians and many more. OsloMet has four libraries around the campuses, with around 80 employees. We also have our own media section, which is where our publishing services are based. And how did we end up as a library publisher? Well, this was back in 2010. At that point in time we were already running an institutional repository, so that was kind of a way into publishing for the organization. We were familiar with the concept of open access, but we didn't have any experience in publishing journals. But there were actually three journals already at OsloMet that were using the Open Journal Systems software. They had been at it for a while; they were probably a lot like those journals that John Moody was talking about yesterday, running on their own for years and years. And we also had three journals that wanted to start up and asked how that could work. They wanted something simple to start with, something that didn't require a lot of technical know-how. And that is how the library came into the picture. We decided to run a publishing platform, and we chose to go with OJS because the existing journals were already familiar with that system, and everyone we talked to in Norway already used OJS as well. Our role was to make it easier for our departments to publish journals, and we wanted to offer technical support, because that was what they were asking for at that point. And we also decided to provide the service for free if an editor is an employee of OsloMet. We actually started out with the rule that if a member of the editorial team is an employee of OsloMet, then they would get the platform for free. But then we started seeing a suspicious number of journals that had exactly one editorial member from OsloMet, which meant that they got the platform for free. So we had to change that rule; I think some journals had spotted a loophole there. So, in 2011 the platform was launched. Our goal was just to be able to provide technical support, but that changed: something unexpected happened. Many of the editors came to the library and asked us about everything to do with publishing a journal. It makes sense: to them, the library had the perfect competencies for journal publishing. Our idea had just been to run a platform and offer user support and technical support, but we were now being asked about how to do peer review, how do I get indexed, how should the PDF look for others to be able to cite it, how can I acquire an ISSN number, how about copyright in the publishing system, and so on and so on.
The editors discovered that we either already knew, or were able to find out, the answers to the hard questions. And soon we were deeply involved with difficult issues such as ethics, copyright, and disputes between authors. We had quite a few difficult cases throughout the years. For instance, we had two authors who wanted to remove a third author from the paper, and who also had disagreements during the review process. How do you handle that as a very new editor, when it's your first experience of that kind? And we also had one of our journals complained about, because someone felt that they had been cited unfairly in an article. So these were the issues that we were facing. So we had to change the service level from the purely technical stuff to a bit more. Now, a lot of the work that we do is actually user support for the editors, giving them advice and guidance on everything about running a journal. We do things like provide DOIs for them, we do training, we make guides, and we attend editorial meetings if we are invited. We also try to have an annual meeting for editors, so that they are able to sit in the same room and we can discuss issues that come up, like how a difficult case was handled. We give advice and help with indexing and visibility, and we help with financing options. We remind them when there are grants they can apply for, and advise and help them towards stable financing. And we also help them with delivering and assessing statistics, annual reports and other material when they are applying for grants. We have also been working quite hard on changing their view of digital publishing. We find they have a mindset from the traditional way of publishing. They are still thinking that the PDF is great, but we think they are too focused on the old way of reading a journal. So we started a discussion and really challenged the PDF: is PDF always the format you should go for? There is a whole range of formats out there that are much better in lots of ways than PDF. So we have developed our own workflow for publishing; we have been doing that for quite a few years, so each of our editors can get something that is more accessible than the PDF. We also talk to them about how they do the publishing. Until now, they have collected their articles and published them as issues, six at once. So we try to get them to publish continuously instead, as articles become ready. That is something we build with them, and it makes more sense: a more stable workload, people just work through the articles as they get them, and then publish them one by one. And the author doesn't have to wait for the issue either. Do you really need continuous page numbering, do you really need to have issues at all? You can just use article identifiers. And we try to encourage them to, for example, film their abstracts: instead of just written abstracts, you can have a video on the front page of the journal as well. And then we will provide them with a platform, storage and the possibility to film the authors. So we try to inspire them towards a bit more of a digital publishing mindset. And then something happened. We had a good thing going; we were running our platform for the OsloMet journals.
And then we started getting a lot of requests for price quotes from journals that were not affiliated with OsloMet. We got inquiries and emails and formal letters asking: how much would we need to contribute to be on your platform? And that rather caught us off guard. Our focus had been on our own university and the journals we had at the moment. But now we were forced to look at the question: how should we handle all of these requests? Should we say no? How should we think about it? Would it be fair to offer the platform to them, and on what terms? So we decided that we would open up for these external journals. We were going to charge for the service, but only enough to cover our own costs of providing it. And we also decided that we should offer free training for libraries at other institutions, so that they would be able to publish journals themselves as well. So we gave them the option: either you pay us and we publish for you, or we train your library for free and help you do it yourselves. That way others also get the tools to do it themselves, not just a publishing service. We do no marketing at all; the requests just come in, still roughly one a month. Not all of them end up with us, but a couple have, and we will see how many more come. Right now we are publishing two journals from other universities or university colleges in Norway. And I just wanted to give you a quick view of open access publishing in Norway. We have a few commercial publishers: one of them publishes open access journals and also open access books and is fairly new to the OA market; another is a publisher with long traditional roots that has moved into open access, and with nine journals it is getting to be a substantial publisher. And then we have the old Scandinavian University Press, Universitetsforlaget. They have 26 journals; I think they are the largest publisher of open access journals in Norway. As you can see from the name, they were once owned by the Norwegian universities, but around 20 years ago, I think, they became a commercial company. And then we have the university publishers, and they have actually come to be the largest group of open access publishers in Norway. The University of Oslo has 19 journals, Oslo Metropolitan University has 16, the University of Tromsø has 12. I took out some student journals and research report journals. The University of Bergen also has a lot of journals. And then there are a lot of small publishers as well. And I think that's why I like these numbers: most of these journals are published with OJS, and it's quite a large amount. What's next? The commercial publishers will either charge the journal a fee for the publishing service or charge per publication, whereas with the university publishers, as a rule, you get it for free, though a few of the journals differ. But as you can see, the university publishers actually shape the publication landscape in Norway, because we have the majority of the journals. We had to find a way to set a price for our services, and our goal was fair pricing. We wanted to encourage competition, not distort the market, which we think the commercial publishers do in a way.
Because we think that some of the commercial publishers charge a lot of money for services that are not quite as good as what is already out there. So we made the pricing model fairly simple. We just take the labour costs, the social costs and all the resources we spend on running the journals, and divide by 15, and voilà, the price per journal is 7,200 euros. Fairly simple, right? But then we get journal number 16. So we have to recalculate: we take the same costs, plus whatever extra resources we spend on journal number 16, and divide by 16 journals, and we end up with a new price per journal of 6,750 euros. It was interesting to compare with a university in Germany, which came to around 6,000 euros with the same pricing strategy. The problem with this pricing model is that the price will never reach zero, because we are only charging the new journals their proportion of the cost; we still have to cover all the free journals in a way, and each new journal only pays its own part of the cost. So we really need to look at a new pricing model for this. Also, a few of our journals are looking into charging APCs, and that will change the equation. One of the issues we see is bureaucracy: because we are part of a large institution, we have to find a practical way of actually invoicing and handling the payments. We are moving towards sustainability, but we never quite reach it, so we definitely need to do something there. And we are also concerned with fairness here. It is a balancing act, because we want to compete a little bit with the commercial publishers, but we do not want to outcompete them either, because we both need a fair market in a way. The university is government funded, and it would not be right to use government funds to outcompete a commercial publisher. We cannot use government funds to sustain a publishing service that does not cover its own costs. But at the same time, we need to challenge the market: the commercial publishers take out profits, and they do it on other people's work, so that is not quite the same. So we are considering looking into a new way of doing this, maybe a national joint publishing platform that could sustain its own costs; we are considering a project that could look into that. And in a way we are already doing some collaboration on a national level, like the shared use and translation of OJS, to give one example, and also some international collaboration. One thing that is certain is that this is a constantly evolving publishing landscape. I do not think we can reach a conclusion for quite a while yet, and I am not sure where we are going to end up. That is something to think about. Thank you.
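To make the arithmetic of the pricing model described above concrete, here is a minimal sketch. The 15 free in-house journals and the 7,200-euro starting price are taken from the talk; treating the marginal cost of an extra journal as negligible is our own simplifying assumption (it is consistent with 108,000 / 16 = 6,750 euros for journal number 16), and the projection loop only illustrates why the per-journal price approaches, but never reaches, zero while the free journals remain uncovered.

```python
# Minimal sketch of the average-cost pricing model from the talk (assumptions noted above).
BASE_JOURNALS = 15          # OsloMet journals, hosted for free
TOTAL_COST = 15 * 7_200     # labour + social costs + resources = 108,000 EUR

def price_per_journal(external: int) -> float:
    """Total cost divided by all hosted journals (free + paying)."""
    return TOTAL_COST / (BASE_JOURNALS + external)

for external in (0, 1, 5, 20, 100):
    p = price_per_journal(external)
    covered = external * p          # what the paying journals contribute in total
    print(f"{external:3d} external journals: {p:8.2f} EUR each, "
          f"covering {covered / TOTAL_COST:5.1%} of total cost")
```

Even with 100 paying journals the institution still subsidises the 15 free ones, which is the "never reaches zero" problem the speaker points out.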
The University Library at OsloMet started running the OJS platform for publishing open access journals in 2011. The platform quickly grew from three start-up journals to today's 15 published journals. The free service expanded from focusing on running a platform to offering support on all levels of academic journal publishing. They provide advice on peer review, DOIs, indexing, publication formats and financing, and facilitate experience sharing between journals. At a cost, they also publish journals for other institutions in Norway, and regularly receive requests to include more journals. This presentation shares experiences from building a publishing platform, lessons learned along the way, and goals for the future of sustainable library publishing.
10.5446/51390 (DOI)
Alright, so, hello. Thank you for that introduction. We're going to do this sitting down, just to keep it easy. On the slides you'll see a little bit about what we get up to outside of our work; we just want to emphasize that people doing this work in scholarly communications are whole people outside of work as well. And I want to start, as is kind of standard at the start of presentations, with a territorial acknowledgement; I'd like to do that as well. We work, study and live in a region which overlaps with the unceded, traditional and ancestral lands of the Tsleil-Waututh, Kwikwetlem, Squamish, Musqueam, Semiahmoo, Tsawwassen and Katzie peoples, and of Kwantlen Polytechnic University, which takes its name from the Kwantlen First Nation. So we wanted to frame this discussion about course journals and the ways they can support social justice by explaining a little bit how we situate ourselves in this discussion and what we bring, or what we don't bring, to this conversation. Karen and I both wanted to acknowledge that we bring a fair amount of our own privilege to this work. And while working with, and in some ways representing, marginalized communities and individuals as we work with course journals in this context, we don't have the lived experience of the history of marginalization and oppression which we'll be discussing today. So we wanted to make sure we framed our conversation that way. We're very much still learning about this topic, and we feel grateful to have had the opportunity to work with faculty and students at our institutions and to learn from them as we explore this work. So I'll just give you a quick overview of what we're planning to discuss in the next 20 minutes. We wanted to start by situating ourselves as a community in the conversations that are happening around social justice in scholarly communications and in libraries. Then we are going to go through two case studies, one from each of our institutions, of journals that we work with in classrooms with course instructors. We'll talk about some of the lessons that we've learned along the way working on these projects, and some recommendations that we have for OJS and the ways that OJS can continue to support this work. And we wanted to acknowledge as well that the course journals that we will be presenting on today were designed and managed by the course instructors and their students. We worked on them as librarians, we supported these projects and we used OJS, but we want to make sure we're giving credit to the instructors and the students for the work that they did on this. Okay, so again, to situate us in this conversation that's happening around equity, diversity and inclusion, we took a look through some of the literature and some of the conversations that are happening in libraries and in higher education, and we're seeing this coming up more and more as a discussion point. So for example, last year the Library Publishing Coalition published An Ethical Framework for Library Publishing, and that includes sections on diversity, equity and inclusion as well as accessibility in scholarly publishing. And we were fortunate as well to hear from Tara Robertson during the keynote yesterday around accessibility, diversity and inclusion in our practices.
So we're seeing more and more of this discussion happening in the library community, at conferences and on listservs, and the topics are really live and relevant in scholarly communications today. So what does this then look like in a practical sense, and how can this perspective on scholarly publishing actually inform our teaching practices? What we want to do is guide students to an understanding of scholarly publishing and of their role, as students and emerging scholars, in contributing to the whole scholarly conversation. And if we want to really be inclusive, this involves rejecting long-held notions of how scholarship is recognized and defined, and letting go of the notion of scholarly activity that's unconnected to the rest of our lives and all the aspects of our existence as human beings. As April Hathcock says in a blog post on a feminist framework for radical knowledge collaboration, scholarship is not just an intellectual exercise; it involves human beings doing work with other human beings and subjects related to the lives of human beings. We bring our full embodied and intellectual selves to this work as we engage in different ways of knowing and unknowing. I have that quote up on the screen for you. And so at the same time as we're having this conversation with students in the classroom, we're encouraging them to think critically about the types of information that tend to be given a voice and recognition in our institutions and in scholarship. So we ask them to really think about whose voices are missing or underrepresented, who acts as the gatekeepers to decide what and who is worthy to participate in academic scholarship, and why so much value is placed on scholarship written in the English language. Again we have a quote from April Hathcock where she encourages us to use language as a tool for inclusion rather than a barrier to participation. And with these questions in mind, we can encourage students to challenge current publishing models and the inequities they can reproduce, and to work on building a radically new and empowering system of knowledge creation and sharing. So then the role of librarians in all of this is that we often provide instruction around scholarly publishing practices and concepts such as copyright, author rights, open access and peer review. But we can also work with course instructors to introduce these ethical considerations and inequities in traditional publishing models, and how students can address these in their course journals. Of course we also help with the technical aspects of OJS, which, as we all know, has a learning curve that can be a barrier for people to start doing these kinds of assignments. Okay, so we're going to jump into the first case study that we have for you, which is a course journal that was run at Simon Fraser University in 2018. In the interest of time I'll just show you some screenshots of what the journal looks like, but we share the link on the slide so you can take a look around if you're curious about it. So this course journal, called Intersectional Apocalypse, was produced by a third-year class in SFU's Gender, Sexuality, and Women's Studies Department, and this was led by Dr. Ela Przybylo. Dr. Przybylo works on intersectional approaches to digital publishing studies, so she brings a fair amount of knowledge and expertise to this area and to the class that she created.
So I'm presenting this case study as an example of a course journal, again supported by the library, but of course it is a journal produced by the class. Students in the class collectively designed and built this journal, making decisions around several important aspects. They paid special attention to imbalances of power in traditional scholarly publishing, and they were able to work out ways to reject, avoid, or mitigate many of these imbalances by making certain choices. So for example they recruited submissions from the wider community beyond the class, so students at their institution and also the community outside of SFU. And they were looking in particular for different types and formats of works — visual art, poetic prose, zines — advocating for different formats of scholarship and trying to hear from voices underrepresented in traditional English-language scholarship, moving beyond the traditional written text or article. The class chose an open form of peer review in order to generate open dialogue and discussion between the folks who submitted the content and the students in the class who conducted the peer review. They wanted to promote open, fair, and collaborative discussions between the author and the reviewer, and they also made peer review optional. So anyone who wished to submit to the journal without going through that process was welcome to do that; it was simply noted whether the work had been peer reviewed. They paid special attention to accessibility. As you can see on the slide here, they had all of the submissions in HTML, which is screen-readable, as well as accessible PDFs. They also took the time to make MP3 audio recordings of all of the submissions. The class chose a Creative Commons Attribution Non-Commercial licence for their work, after a discussion about which licence would work best for them, and they also had the option of traditional knowledge labels for anyone who was a member of an Indigenous community submitting work that had different access requirements. So if they wanted the work to only be accessible to certain communities or in certain locations, they provided that as an option, which gives more flexibility than the Creative Commons licences that are commonly used. And they talked a lot about the sustainability of their journal and what would become of it, how it would financially sustain itself. They were very keen on a diamond open access model, where there are no fees to subscribe and no fees to publish, and they did consider placing a donation button on the homepage of the journal as well, for the sustainability aspect. Yeah, so here I just have a couple of examples of the types of content that were published in the journal. You can see some evidence of the class moving beyond traditionally text-based academic articles, using a mixed media approach and different types of storytelling to challenge the notion of what is considered academic content. So now, at Kwantlen, we have the Logan Creek Decolonization Project journal, and this is run by Dr. Kathy Dunster and her students in horticulture. They document the ongoing botanical decolonization and re-indigenization of Logan Creek, which is an area on the KPU Langley campus. Their intention with this class is to develop biographies for significant food, medicinal and technology plants that are used in the decolonization, which can then be developed into signage for the site as well. So it will become an interpretive area there.
And they use names in the Hunquminum dialect, which is spoken in the Lower Mainland by the Musqueam at the mouth of the Fraser River, and which is appropriate to the decolonization work. So the way this class works is that students each write a paper, but the papers are then collectively and openly peer-reviewed and edited. They literally pass them around in hard copy and collectively edit them, and the idea is that a uniform quality of language is achieved and that no one is left behind, even the international students for whom English might not be their first language. So it's a very collaborative class in that way. Each paper includes alternative forms of scholarship as well, in the form, for example, of recipes. There are decolonial recipes in here, and if you can't read it on the screen, this one has a recipe for blue elderberry syrup. There are also plans to include audio recordings of the Hunquminum names, both online and potentially also out at the site, spoken by local speakers of the language. All works are openly available online under CC licences, to share the students' work widely with the world and also so that the Kwantlen First Nation itself has access to the materials. Kathy works to develop that area of Logan Creek because it was taken from the Kwantlen First Nation and destroyed by construction, and she does this work until such time as there is a more formal agreement in place, so that the land can be tended to or given back to them. The next issue that's in the works will focus on documenting weeds, and those will also include recipes, because the class has a lot of international students, and what's considered a weed in Canada might not actually be considered a weed where they come from. They've got some future plans for development, where they increase the accessibility of the materials themselves, which are not very accessible yet, and also the potential to add traditional knowledge labels, so that people have to agree to particular conditions before they access the materials. So we're going to jump in and talk a little bit about some of the lessons that we've learned in supporting these projects. Karen and I have both worked on a few different course journal projects aside from the two that we've discussed today. And one thing that we're always talking about is the sustainability of the journal: what will happen beyond the end of the course? So for example, SFU was fortunate to have Dr. Przybylo as a Ruth Wynn Woodward Postdoctoral Fellow from 2017 to 2019, but she's just moved on to another position as an assistant professor at Illinois State University. So the future of the Intersectional Apocalypse journal is currently uncertain; it remains to be seen whether it will be taken up by another instructor, perhaps in the same department or team at SFU. For now it will be preserved by SFU Digital Publishing in its current format, because it is a great showcase of the work that was done. Other course journals might continue to publish in subsequent iterations of the course if the instructor continues to teach, or they could transition to student journals, so they might be taken up by a student association or society who can continue to publish outside of a structured classroom environment.
We also find that the OJS learning curve that we mentioned a little bit earlier can be a challenge in these courses, especially when we're running journals like this in a four-month period with an instructor who hasn't used the software before. So we do work on some of the instructional aspects, supporting the instructor and supporting the students in the class, taking them through the workflow of submitting, managing the peer review, the editing and so on. And you'll notice in the SFU example the main portion of the course was really about taking students through the production of the journal, setting it up in OJS as well as going through the publishing process. We do find in a four-month term that that's a huge learning curve. It's much more straightforward for the students to be publishing their own work in a journal that's already established and set up for them, rather than having both the creation of the journal and the publishing process. So reusing course journals in subsequent courses is one way to reduce this workload and time pinch. And finally, the time that instructors put in is key to this, again because we're running these in four-month terms. Many of the instructors are very dedicated to the quality of the final product, and they might take on the large task of editing all of the submissions, all of the work that goes into the journal. One of the instructors in our examples estimated that they spent about 200 hours of work on producing the final output. So for this reason we recommend that the journal publish just one submission per student in the term; trying to take on too many projects at once can be too much in the short time frame. And we recommend that the instructor focus the class either around writing and publishing the articles, or around designing and setting up the journal itself. And then we'd like to end with some recommendations for OJS, and we don't actually know how feasible all of them are; they are recommendations from feedback that we've had. Of course, the examples that we've given already show the ability of OJS to bring about changes in scholarly publishing and to give people options to engage in alternative forms of scholarship. But we also think that there are some opportunities to adapt to changes in scholarly publishing that reflect efforts to bring in more equitable and inclusive processes. So one of the requests that we've had was for more options for open peer review, to allow for public discussion around a work instead of anonymous or double anonymous models — more of an open, comment-style discussion that we'd like to see built in. The other recommendation we have would be to allow some kind of triple anonymous review, where the author is unknown to any of the journal editors, which could decrease bias in the initial decision-making process before submissions are sent to peer review. I have no idea what this was about. The other thing is an option for a donate button to be added to the homepage, to provide an alternative business model for diamond open access journals. And I know that there are some options for that, but it would be nice if that was standard, just one of the options available out of the box. And then lastly, more nuanced access options for traditional knowledge.
For example, that the user must indicate that they belong to a community or demographic and acknowledge that the content has specific access limitations before accessing a particular piece of content. And of course that works on the honour system, the same as clicking that you are over 18 to look at a website or something. So to close, we have a few resources here about course journals, as well as the student journal resource that was recently added to the documentation. And I would just like to say thank you to the instructors and the students that have been working on this. Thank you.
How can OJS and OMP be used in classes to engage students in discussions around social justice in scholarly publishing? This presentation will discuss examples of course journals and book projects at Simon Fraser University (SFU) and Kwantlen Polytechnic University (KPU) which attempt to involve students in anti-colonial, anti-racist, and anti-oppressive forms of scholarship. These projects aim to be inclusive in a variety of ways: in terms of accessibility, language, content formats, and sustainability strategies. The presenters discuss the ways that OJS and OMP can be used in the classroom to develop students' awareness of, and ability to address, social justice concerns in traditional publishing. Finally, they will explore how lessons learned from these case studies can be implemented in other courses.
10.5446/51287 (DOI)
Okay, so welcome everyone to the afternoon session. Our first speaker, Gérard Duchamp, will report on Kleene stars and shuffle algebras. Thank you, dear Gleb. I must first of all thank Maxime and the IHES team and my co-organizers Gleb Koshevoy and Hoang Ngoc Minh, and of course all of you for attending this talk. And I will speak about Kleene stars and shuffle algebras — it is a bit of a tangled tale, because you will see that when you begin to calculate within noncommutative series you have to manage superpositions, and this is a notion we made clear with Gleb Koshevoy and Christophe Tollu in our Lotharingien publication on twistings and perturbations; we will speak mainly about perturbations and their combinatorics. So now, what is the goal of this talk? We have three goals today. First I will take some time for motivation, explain how this is motivated and what is the route from the very first motivation, which was to extend the polylogarithms, to a MathOverflow post which made the collaboration with Darij Grinberg possible. Then we will state the main theorem, which is a general theorem; we have two proofs, but maybe due to lack of time I will not go into detail. This result takes its place among several variations on a general theme, which is the linear independence of characters on a coalgebra, and this is the subject of our publication. So we will see — there is some generality, but we will not go into every detail; part of it is done on enveloping algebras, while the main result is in full generality, and the conditions have been discussed. Afterwards we will also see examples, and the examples are made mainly of algebras formed from the free algebra by adding some comultiplication. And we will end with some remarks about the structure of Hausdorff groups. This term is inherited from Bourbaki: it is the group of group-like elements, which Bourbaki obtains as exponentials of primitive series. Here it will be the group of characters, which is more general than the group-like elements in this context. And we will see how we can devise a local system of coordinates and obtain some identities that cannot, in my opinion, be proved otherwise. Of course, part of this work rests on iterated integrals, so it is linked with Dyson series. Now we go to the initial motivation. The initial motivation comes from hyperlogarithms. In the complex plane you take distinct points, which will be seen as singularities, a1, a2, ..., an. The functions to be integrated, 1/(z − aj), are all holomorphic in the plane without these points. Of course, as you have a monodromy phenomenon, you must base yourself on a path. So you take a list of singularities, possibly with repetitions — all the singularities are different, but your list can contain repetitions — and a path from z0 to z avoiding all the singularities, and you form this iterated integral. Of course you see that this quantity does depend on the path, but it depends only on its homotopy class: if you take two paths that are homotopic, the result is the same. Now, as the result depends on the homotopy class, it can be seen as a function on the universal covering, or you can work — sometimes it is easier — on the cut plane, which gives a section of this universal covering. You take the singularities, you take a direction from which the singularities are seen as distinct, and you remove half-rays in order to avoid the logarithmic phenomenon, which consists in turning around the singularities.
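As a reading aid — the talk refers to formulas on slides which are not reproduced in the transcript — here is one standard way of writing the hyperlogarithm attached to a word of singularities; the notation is ours and normalisations may differ from the speaker's slides.

```latex
% Hyperlogarithm of the word a_{i_1} a_{i_2} \cdots a_{i_r} along a path
% z_0 \rightsquigarrow z avoiding the singularities:
\mathrm{HL}_{a_{i_1}\cdots a_{i_r}}(z_0 \rightsquigarrow z)
  = \int_{z_0}^{z} \frac{ds_1}{s_1-a_{i_1}}
    \int_{z_0}^{s_1} \frac{ds_2}{s_2-a_{i_2}}
    \cdots
    \int_{z_0}^{s_{r-1}} \frac{ds_r}{s_r-a_{i_r}} ,
% a holomorphic function of z on the universal covering (or on the cut plane),
% depending only on the homotopy class of the path.
```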
So now, how do we encode this for our concern today? As I said, these functions take a list of singularities and a path. As we have said, we work on a simply connected domain, so only the two extremal points of the path matter — the initial (base) point and the end point — and we shift from a list of singularities to words, because, as we said, to each singularity we have attributed a letter, so now you have a word. So you have a mapping which takes an end point (the base point being given once and for all) and a word. It depends only on a word, with variable coefficients, and this gives a series which is the generating series of all your computations: you take each word as a monomial, and you get a noncommutative generating series of your hyperlogarithms. Now, why do we take this generating series? Because it has a lot of nice features. It is a solution of a noncommutative differential equation with a left multiplier — I will not go back to this because I have presented it many times, here and elsewhere, since 2017, at CAP 17, CAP 18 and CAP 19, in the study of this series — and the left multiplier can be proved to be primitive; as you know, if you have a group which is a sort of Lie group, its Lie algebra is given by the primitive series. So, since this left multiplier belongs to the Lie algebra of our group and since at one point, namely z0, the series is group-like, it is group-like for all z, and it is a shuffle morphism. We have also seen the second feature, which is the linear independence of the coordinates over C — it is not difficult to prove this with monodromy — but it can be shown over larger sets of scalars, larger rings or even larger fields of functions, which will not be the subject of today; if you go to the slides of CAP 19 you will see this study. And, which will be our concern today, at the end of this talk there will be the factorization of this character into elementary characters. Once this series is factorized, you can renormalize — because when you approach the singularities you can have divergences — by deleting some factors of this factorization, which is this point. And we will see at once what the problem is of extending this to series, because here your quantity depends on a word, so you can linearize this mapping and say, oh, it depends on noncommutative polynomials; but what will it be if, instead of a word or a polynomial, you put inside a series, an infinite sum? You cannot put in every infinite sum: to avoid divergences, you have to put in infinite sums that are in its domain. We will see at once what the domain contains, and in particular this domain contains series that are easy to compute with, expressed in a very useful language, very easy to compute in, which is linked to automata theory: the Kleene star. If you take a series without constant term, you can form the series 1 plus S plus S squared and so on to infinity, and it is (1 minus S) inverse with respect to concatenation in the algebra of noncommutative series. Now, it is a detail, but Lappo-Danilevsky was reading from left to right and we read from right to left, of course to match the more recent bibliography. Now what do we have?
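Here, as a hedged reconstruction (the exact normalisation of the one-forms is not given in the transcript), are the two formulas being referred to: the Kleene star of a series without constant term, and the noncommutative differential equation with primitive left multiplier satisfied by the generating series of the hyperlogarithms over the alphabet X = {x_1, ..., x_n}, one letter per singularity.

```latex
% Kleene star (concatenation algebra of noncommutative series; S has no
% constant term):
S^{*} \;=\; \sum_{k \ge 0} S^{k} \;=\; (1 - S)^{-1} .
% Generating series of the hyperlogarithms and its differential equation:
\mathrm{H}(z) \;=\; \sum_{w \in X^{*}} \mathrm{HL}_{w}(z)\, w ,
\qquad
\frac{d}{dz}\,\mathrm{H}(z) \;=\; \Bigl(\sum_{j=1}^{n} \frac{x_j}{z - a_j}\Bigr)\,\mathrm{H}(z),
\qquad \mathrm{H}(z_0) = 1 .
```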
We have the hyperlogarithm — I recall that the hyperlogarithm takes a word, or a noncommutative polynomial, and gives you a holomorphic function. So we have evoked the argument of the proof that this hyperlogarithm map, which takes a word and hence a noncommutative polynomial, is a shuffle morphism: if you put in as an argument the shuffle of two polynomials, you get the product of the two hyperlogarithms. And how to extend this? We have here a space which is the space of formal power series. For these spaces we like to decompose a series into homogeneous slices: S_n is the sum of the coefficients times the words, over all words of the same length n. And as our alphabet is finite — the number of singularities is finite — this S_n is a polynomial, so you can substitute this S_n into the hyperlogarithm. Now, as in the source your series is nothing other than the sum of its homogeneous components, you have to consider the corresponding series in the target. This series in the target may not converge, so we ask that it converges unconditionally — in French they say convergence commutative, and sometimes the Anglo-Saxon world says commutatively convergent — and unconditionally means it converges whatever the order of the terms; and we ask this for the compact convergence, which is uniform convergence on every compact set, the standard topology on H(Ω), chosen in order to preserve analyticity. We say that a series is admissible as an argument of HL if and only if this image series is unconditionally convergent. And the reason we ask this is that H(Ω) shares with finite-dimensional topological vector spaces — we are over C here — the nice property that if a series is unconditionally convergent for this compact convergence, then it is absolutely convergent; and we need this to prove that the shuffle property is preserved on the new domain. So now you have your polynomials, you have your holomorphic functions — a subspace of all holomorphic functions on Ω, of course — and you can extend the map to the series which are members of the domain. The domain is, as we said, the set of series such that, if you decompose the series into its homogeneous components, the image series converges commutatively. The main result is: if you take two series in the domain, their shuffle is in the domain, and of course 1 is in the domain. So you have extended the map to the domain in such a way that the shuffle property is preserved, and it is a character with values in H(Ω), which is a commutative algebra. So now, as we want to give examples, we take the polylogarithm. The polylogarithm is a special case of the hyperlogarithm with some asymptotic condition — I do not go into detail; please admit that it is a special case of the hyperlogarithm. So it is the same as for the hyperlogarithm: the domain is the set of series such that, if you decompose the series, the image series converges unconditionally. And now, what can you substitute? You can substitute — this is the motivation of our title — Kleene stars. Each time you take an expression or a series, and you of course check that it is without constant term, you can take its Kleene star. So what will it be here? You take x0 — you have two singularities, which are 0 and 1 in this example, instead of a1, a2, ..., an — and you have x0 and x1, the letters indicating around which singularity you integrate. And x0 star is just 1 plus x0 plus x0 squared plus x0 cubed and so on and so on.
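To make these Kleene-star computations concrete, here is a small sketch in Python; the dictionary encoding of noncommutative series (words as tuples of letters) is ours, not the speaker's. It expands S* = 1 + S + S² + ... degree by degree, which is enough to read off the "star of the plane" (αx₀ + βx₁)* used below.

```python
# Truncated Kleene star of a noncommutative series without constant term.
from itertools import product
from fractions import Fraction

def concat(S, T, maxdeg):
    """Concatenation (Cauchy) product of two truncated series."""
    out = {}
    for (u, cu), (v, cv) in product(S.items(), T.items()):
        w = u + v
        if len(w) <= maxdeg:
            out[w] = out.get(w, 0) + cu * cv
    return out

def kleene_star(S, maxdeg):
    """1 + S + S^2 + ... kept up to words of length maxdeg."""
    assert S.get((), 0) == 0, "Kleene star needs a series without constant term"
    result, power = {(): 1}, {(): 1}
    for _ in range(maxdeg):
        power = concat(power, S, maxdeg)   # next power of S
        for w, c in power.items():
            result[w] = result.get(w, 0) + c
    return result

# (alpha*x0 + beta*x1)^*  -- the "star of the plane" for two singularities
alpha, beta = Fraction(1, 2), Fraction(3)
plane = {("x0",): alpha, ("x1",): beta}
for w, c in sorted(kleene_star(plane, 2).items()):
    print(w, c)     # degree-n part is the expansion of (alpha*x0 + beta*x1)^n
```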
And this, you can prove — it is an easy exercise — is in the domain; x1 star is in the domain, and you get these values when you compute, which will be seen here. And it is why it is called the star of the plane: you have x0 and x1, which are two independent letters, so a two-dimensional space; αx0 + βx1 is a plane, and the star of the plane is exactly these elements, and we will see how they can be seen as characters. And it can be proved — yes, it can be proved — that if you consider αx0 and βx1 as in the previous slide, the general formula is this one: it is the star of αx0 + βx1. As such it can be decomposed, and as it can be seen as (αx0)* shuffled with (βx1)*, the Li of this is nothing other than the product of the Li's — and one can prove, with a lot of easy computations, that if you put α here you get z to the α, and if you put β here you get (1 − z) to the minus β. So it is the product formula which was seen here. Now we go to the next part; it will be more and more algebraic — the first motivation was analytic, but it will be more and more algebraic. Every conc character is of this form: of course, because we are in the free algebra, you have exactly one character sending each letter x to a given coefficient αx, and it is an easy — though not immediate — exercise to prove that it is the star of this linear combination. And now you have shuffles with perturbations, as we called them afterwards with Gleb Koshevoy and Christophe Tollu: perturbations of the shuffle product. The shuffle product is based only on this recursion, with the same unit, and you can perturb it with a φ: it takes two letters, you bilinearize it, and from the properties of this φ you can prove that the operation defined is associative, commutative and so on. So you have the shuffle with no perturbation, the shuffle with the perturbation given by the addition of indices, and a third perturbation, which gives a bialgebra which is not an enveloping algebra — these first two give enveloping algebras, but that one gives a bialgebra which is not an enveloping algebra. Due to the lack of time I will not go into detail about the Hadamard product. And this is a table of the shuffle products that we see in the literature, beginning with Ree, and Hoffman, and Costermans, who is a student of Minh — they considered these shuffles for their computations. These were considered in order to handle the superposition terms of the shuffle, and as was evoked, we considered twistings and perturbations: here is the untwisted expression, you have this perturbation, and this is the term which is added. I pass over the others, except maybe the one of Manchon, who considered the general case where the perturbation is an associative algebra law with unit; it can be noncommutative, and then the φ-shuffle associated with it is not commutative. So the common pattern is the following: you have the beginning of the recursion of the shuffle, plus a perturbation. Now, what was the initial motivation? It is a post I cast on MathOverflow, and this post is what is written in clear here; it is the following. You take a Lie algebra over a ring without zero divisors — it has been discussed that this condition is important for the conclusion — you take the enveloping algebra, and you consider the standard decreasing filtration, which is just given by the products of terms in the augmentation ideal.
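Since the recursion itself is only gestured at in the transcript ("the beginning is the same as the shuffle, plus the perturbation term"), here is a minimal runnable sketch of it; the encoding of the letters y_i as integers, and the choice φ(y_i, y_j) = y_{i+j} (the stuffle) as the example perturbation, are ours.

```python
# phi-shuffle recursion:  u * 1 = 1 * u = u   and
#   (a u) * (b v) = a (u * b v) + b (a u * v) + phi(a,b) (u * v),
# where * stands for the phi-shuffle and words are tuples of integer indices.

def phi_shuffle(u, v, phi=None):
    """Return the phi-shuffle of two words as a dict {word: coefficient}."""
    if not u or not v:                       # one factor is the empty word
        return {u + v: 1}
    a, uu = u[0], u[1:]
    b, vv = v[0], v[1:]
    out = {}

    def add(letter, rest):
        for w, c in rest.items():
            key = (letter,) + w
            out[key] = out.get(key, 0) + c

    add(a, phi_shuffle(uu, v, phi))          # first letter taken from u
    add(b, phi_shuffle(u, vv, phi))          # first letter taken from v
    if phi is not None:                      # perturbation term
        add(phi(a, b), phi_shuffle(uu, vv, phi))
    return out

shuffle = lambda u, v: phi_shuffle(u, v)                      # no perturbation
stuffle = lambda u, v: phi_shuffle(u, v, phi=lambda i, j: i + j)

print(shuffle((1,), (2, 3)))   # {(1, 2, 3): 1, (2, 1, 3): 1, (2, 3, 1): 1}
print(stuffle((1,), (2,)))     # {(1, 2): 1, (2, 1): 1, (3,): 1}  i.e. y1y2 + y2y1 + y3
```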
Now you have this filtration, which is decreasing. You take the orthogonal filtration, with a little shift of index; this lives in the dual of the enveloping algebra, and you can show that this filtration is compatible with the convolution product. So, as it is increasing and compatible with convolution, the union of all these subspaces is a convolution subalgebra of the dual — it is not, in general (in the polynomial examples it is not), the whole dual. So I pass over this — or maybe not: here is a way, shown to us by Gleb Koshevoy, which is a very nice way to compute the φ-shuffle. You take your two words: the first word you put vertically, the second word you put horizontally, from left to right, and from the leftmost lower corner to the rightmost upper corner you allow paths with east steps, north steps and diagonal steps, and you evaluate each path as a term. If I have a diagonal step, I take φ of the two letters; if I have a horizontal step, I put the letter of the horizontal word; if I have a north step, I put the letter of the vertical word. This is another example; I pass over this, which is Radford's theorem, and I pass over — well, the dualizability here. So we have considered a φ-shuffle, and we can say that k⟨X⟩ with this φ-shuffle, provided that φ is associative, is an associative algebra; and we can prove, thanks to the recursion, that if you endow this algebra with the coproduct given by deconcatenation — dual to concatenation — and with the counit ε which takes the constant term of every polynomial, you have a bialgebra. In fact, because the reduced coproduct Δ⁺ is locally nilpotent — you take Δ⁺ and iterate the procedure — this bialgebra is in fact a Hopf algebra, because you can compute the convolution inverse of the identity: you write the identity as ε plus something locally nilpotent and then compute the inverse with the usual series. Now you want to go to the dual. If you go to the dual, the concatenation takes the place of the deconcatenation and a coproduct dual to the φ-shuffle appears; you need a technical condition on the structure constants in order to guarantee this. I will not develop it, but the slides are available on the site. What is interesting is that, if you have a φ-shuffle, the characters of this bialgebra — the characters of the concatenation product — are stars of the plane, which means that you take a linear combination of the letters (if your alphabet is infinite this linear combination can be infinite; it is just a linear form) and you put a star on it: this is a star of the plane. And, as you know, in every bialgebra the characters compose with respect to the law dual to the coproduct; here the coproduct is the one dual to the φ-shuffle, so on the level of series you compose the stars of the plane with the φ-shuffle. This is not so easy to prove combinatorially, I mean in a pedestrian way, but if you take into account the fact that the φ-shuffle of two characters is again a character, you just have to test the formula on letters, and you can prove that it follows the recursion.
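Here is a small sketch of the grid computation of the φ-shuffle described above (the integer encoding of letters and the stuffle choice of φ are ours): each monotone path from the lower-left to the upper-right corner, built from East, North and Diagonal steps, contributes one word, and summing over all paths reproduces the recursive φ-shuffle sketched after the previous paragraph.

```python
# Lattice-path (grid) computation of the phi-shuffle: first word on the
# vertical axis, second word on the horizontal axis; N reads a letter of u,
# E reads a letter of v, D reads phi of the two current letters.

def grid_phi_shuffle(u, v, phi=lambda i, j: i + j):
    """phi-shuffle of words u, v via lattice paths; returns {word: coefficient}."""
    out = {}

    def walk(i, j, word):
        if i == len(u) and j == len(v):          # reached the opposite corner
            out[word] = out.get(word, 0) + 1
            return
        if i < len(u):                           # North step: letter of u
            walk(i + 1, j, word + (u[i],))
        if j < len(v):                           # East step: letter of v
            walk(i, j + 1, word + (v[j],))
        if i < len(u) and j < len(v):            # Diagonal step: phi(u_i, v_j)
            walk(i + 1, j + 1, word + (phi(u[i], v[j]),))

    walk(0, 0, ())
    return out

print(grid_phi_shuffle((1,), (2,)))     # {(1, 2): 1, (2, 1): 1, (3,): 1}
print(grid_phi_shuffle((1, 2), (1,)))   # {(1, 2, 1): 1, (1, 1, 2): 2, (1, 3): 1, (2, 2): 1}
```

The same dictionaries are produced by the recursive definition, which is the equivalence the speaker alludes to.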
Notice that here you have firstly the first term, then the second term, and then the perturbation, and you get an identity which can serve to analyze groups of shuffle characters, for example, which are not so easy. And this is now our main result, with Darij Grinberg and Hoang Ngoc Minh: this property holds for every bialgebra — but, you know, in an enveloping algebra the monoid of characters is a group, while for a general bialgebra you have only a monoid of characters, so we have to take this into account. So we start with a bialgebra and we take the standard decreasing filtration, as was the case for the enveloping algebra. I take the orthogonal filtration, which is increasing and — no surprise — compatible with the convolution product, and the conclusion is the following: this can be done over whatever ring of coefficients, but for the conclusion you have to take an integral domain, and over an integral domain this monoid of characters is linearly independent inside this convolution algebra. So as an example I can give one with only a univariate polynomial algebra: you take the bialgebra whose underlying algebra is the algebra of polynomials in one variable, Δ(x) = x ⊗ 1 + 1 ⊗ x, which means that x is primitive, and ε takes the constant term of polynomials. So, as with the Kleene stars of the plane, here you have Kleene stars of the line, because you have only one dimension, and the characters are of this form. It is easily checked — it is a particular case of the formula that was given previously — and the monoid is isomorphic to the additive abelian group of the scalars; in particular it is a group, but that is the case for every enveloping algebra, and this is an enveloping algebra. Now let us give a non-trivial example. You take the algebra over the rationals — in order to have the roots you could also take the extension of Q by the roots — you take all the prime numbers and the square roots of these prime numbers, and each gives a character of k[x]; this is not difficult. Now, what the preceding study shows is that this set of characters is algebraically independent over the polynomials, and you can double-check this by noting that the star is the inverse of 1 minus the square root of p times x, this time with respect to the concatenation product, and by using the fact that shuffle powers translate into powers of the character. So you can double-check that it is algebraically independent; and of course, if you use many variables, you get non-trivial results. So what we consider now is the group of characters, and in the group of characters you have exponentials and logarithms — you have a very easy exp–log correspondence. Now I will dive into the example of the stuffle product. The stuffle, as was explained here, uses the addition of indices as perturbation: the beginning is the same as the shuffle, and you have this extra term with y indexed by i + j. Now, as an application of our formula, which is general, for the stuffle you have: this star, stuffled with this star, equals this plus this plus this — I could have written it with only one index, α_i plus β_i; this part is the bilinear product and this is the φ. And if you see this formula, you can immediately imagine that, if you take as a mental image your y_i with the index i as a power, like in umbral style, you can code such a linear combination by a series: you take the index and you put it as a power.
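The identity being described — the formula the speaker keeps pointing to on the slide — can plausibly be written as follows (our notation; the symbol below denotes the φ-shuffle and φ is extended bilinearly): the φ-shuffle of two "stars of the plane" is again a star of the plane.

```latex
\Bigl(\sum_{x\in X}\alpha_x\,x\Bigr)^{*}
 \;\mathbin{\sqcup\!\sqcup}_{\varphi}\;
\Bigl(\sum_{y\in X}\beta_y\,y\Bigr)^{*}
 \;=\;
\Bigl(\sum_{x\in X}(\alpha_x+\beta_x)\,x
      \;+\;\sum_{x,y\in X}\alpha_x\beta_y\,\varphi(x,y)\Bigr)^{*} .
% For the stuffle over \{y_i\}_{i\ge 1}, \varphi(y_i,y_j)=y_{i+j}, this reads
% (\sum_i \alpha_i y_i)^* stuffled with (\sum_j \beta_j y_j)^*
%   = (\sum_i (\alpha_i+\beta_i)\,y_i + \sum_{i,j} \alpha_i\beta_j\, y_{i+j})^* .
```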
From this coding you get the following formula — we will call Φ_Y this coding — and then you have Φ_Y(s) star, stuffled with Φ_Y(t) star, equal to Φ_Y((1 + s)(1 + t) − 1) star. This results in the fact that this is a one-parameter group, and as it is a one-parameter group we can get identities which can be double-checked by the Newton–Girard formulas. For example, this is a star of the plane, so it is a character, and a character is also an exponential of a primitive element — so the question is: the exponential of what? Taking our coding, we can use this, take one letter for example, renormalize by t, and say that this is of course not a one-parameter group but a one-parameter path drawn on the Hausdorff group; and one can take the log of this one-parameter path and obtain this formula, which can perhaps be double-checked by other means. So let me conclude. The star-of-the-plane property — which is not difficult, but is worth stating — means that in these bialgebras, which are of combinatorial use for many operations on formal power series coding different things, a character of the concatenation algebra is exactly of this form, and the formula above is the computation in the monoid of characters. For conc characters you can even have matrix-valued or noncommutative-valued characters; it does not matter, because we are in the free algebra of noncommutative polynomials. Now, if you consider the composition of these characters, you get an evolution equation — that is why this is so linked to the project on evolution equations in physics led by Karol Penson. Now, if you have an A-valued character — this time A is an associative algebra with unit over k, and contrary to the previous case you need not require it to be commutative; well, here it is not important, because as the source is commutative the image is commutative, but you will see that if you want to deform this it is not necessarily commutative, so we must start from this setting — this character will be taken to be the identity: you give me a word, I give you back the same word. It is the identity, and of course it has the shuffle property, it is a shuffle morphism. So now you have what we call the diagonal series: the sum over all words of w ⊗ w, and you see that it is an expression of the identity. You can change the terms from w ⊗ w to a pair of bases in duality — bases orthogonal to each other — and it can be proved (it is the MRS factorization, from Mélançon, Reutenauer and Schützenberger) that this is a product of exponentials. Now, if you change the character, you can keep the identity between the diagonal series, which is a formal identity, and this product, and then you obtain a system of local coordinates. I said that the Hausdorff group was a sort of Lie group, so you know that if you take a basis of the Lie algebra of the group and take exponentials, you get a system of local coordinates in a neighborhood of the unit. So you see that this gives you a system of local coordinates on your Hausdorff group, by decomposing every character as a product of elementary characters. And to end with, you have a different version of this factorization; of course it is rather straightforward when this φ is associative and commutative, but I don't know yet whether it is generalizable.
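A hedged reconstruction of the coding and of the one-parameter-group identity just mentioned (the exact statement on the slides is not in the transcript): code a linear form on the letters y_i by the power series obtained by turning indices into exponents; the stuffle of the corresponding Kleene stars then corresponds to the product (1 + p)(1 + q) on the series side.

```latex
% Coding: to p(t) = \sum_{i\ge 1} p_i\, t^i associate \Phi_Y(p) = \sum_{i\ge 1} p_i\, y_i .
% Stuffle of the corresponding stars (a consequence of the composition formula
% above, since \varphi(y_i,y_j)=y_{i+j} corresponds to t^i\, t^j = t^{i+j}):
\Phi_Y(p)^{*} \;\mathbin{\sqcup\!\sqcup}_{\varphi}\; \Phi_Y(q)^{*}
  \;=\; \Phi_Y\bigl(p + q + pq\bigr)^{*}
  \;=\; \Phi_Y\bigl((1+p)(1+q)-1\bigr)^{*} ,
% so p \mapsto \Phi_Y(p)^{*} sends the multiplicative group 1 + t\,k[[t]]
% into the group of characters, giving the one-parameter families mentioned.
```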
Under which conditions is it generalizable to noncommutative perturbed shuffles — we have to explore this. So this factorization, which is easy to set up and to state in the case of shuffles, in fact works for all enveloping algebras which are free as k-modules, because of the Poincaré–Birkhoff–Witt theorem: you can derive a Poincaré–Birkhoff–Witt basis, and under a certain technical condition you can construct the orthogonal basis, which lives in this convolution algebra B*. So this can be set up in every enveloping algebra. We have evoked two or three times, in our conversations with Gleb Koshevoy, that we could use this to factorize some characters on other enveloping algebras, like the Knizhnik–Zamolodchikov Lie algebra — pardon me for the pronunciation — and other ones; we were also evoking another Lie algebra, which was commutative and coming from Maxim. So now I have finished with my presentation, and thank you of course for your attention — many thanks. Thank you, sir. So we have one minute for questions before the next talk. Yes? Any other questions or remarks or comments? No? Oh, thank you, Gleb, and thanks, thanks for your attention, to the public.
We present some bialgebras and their monoid of characters. We extend, to the case of some rings, the well-known theorem (in the case when the scalars form a field) about linear independence of characters. Examples of algebraic independence of subfamilies and identities derived from their groups (or monoids) of characters are provided. In this framework, we detail the study of one-parameter groups of characters.
10.5446/51288 (DOI)
My talk will be essentially about some homology calculations, but it involves a lot of combinatorics, in the sense that, first of all, the complexes we consider — or rather certain small subcomplexes of them — at the same time carry a Lie structure, a graded Lie algebra, and this Lie structure can be described very nicely in combinatorial terms. And the other point where combinatorics is involved is our proof: in the proof of the fact that the homology sits in only one place in certain complexes, we use, in some places, arguments from noncommutative Gröbner basis theory, for ideals defined by the differential in these complexes. So this is not computing homology wholesale by Gröbner basis computations; it is just that elements of the proof involve Gröbner basis theory. The complexes we calculate appear from so-called pre-Calabi-Yau structures, which in turn appeared in open string theory; this structure is also present in many other places, in algebraic geometry and in projective geometry, and in connection with the mirror symmetry conjecture. But I will not talk about this; I will just give the definition of pre-Calabi-Yau algebras as they appeared initially, and translate it into the definition of a higher cyclic Hochschild complex. Then I will describe the small subcomplex which allows us to calculate, essentially, the homology of this higher cyclic Hochschild complex, show the combinatorial description of that complex, and then — since I probably will not have time to show the whole proof of the fact that the higher Hochschild complex has pure homology, that is, homology sitting in only one place — I will give you the structure of the proof and show maybe some elements of the proof where Gröbner bases get involved. And as a result of this purity for the dg Lie algebra of the higher cyclic Hochschild complex, we have a formality result for this complex, so the deformation theory it describes is controlled by its homology; this is a direct consequence of the fact that the homology is pure. One can argue in a more or less standard way, using the infinity structure obtained by homotopy transfer on homology: the fact that the homology sits in one place does not give you too much freedom, so the complex should be formal as a result. I probably will not talk about this in detail, but the point is that purity of the complex is a very good property which gives you formality and all its consequences for deformations and so on. So now I will start with the definition of pre-Calabi-Yau structures as it was given by Kontsevich, Vlassopoulos and Seidel some time ago; this is the initial definition, which in a way has its origin in string theory. It is defined as an A-infinity structure on the direct sum of an algebra and its dual shifted by 1 − d; this is what I mean when I talk about a pre-Calabi-Yau algebra, but I will not actually pay too much attention to the shifts in this talk, although they are sometimes essential. Then this A-infinity structure on A ⊕ A* should be cyclically invariant with respect to the natural evaluation pairing on A ⊕ A*, which means that if you take the form applied to the n-th multiplication of your A-infinity structure and cyclically permute the arguments, moving α1 to the end, then the form is preserved, up to the sign which is minus one to the power of the degree of the element you pull times the sum of the degrees of all the elements through which you pull it. Here I use the usual shift conventions.
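A hedged reconstruction of the pairing and of the cyclicity condition being described (signs and shift conventions vary between papers; this is only one common way to write them):

```latex
% Evaluation pairing on A \oplus A^{*}[1-d]:
\langle\, a + f ,\; b + g \,\rangle \;=\; f(b) \;+\; (-1)^{|f|\,|b|}\, g(a),
\qquad a,b \in A,\; f,g \in A^{*},
% Cyclic invariance of the A_\infty operations m_n with respect to this
% pairing (degrees are the shifted ones):
\langle\, m_n(\alpha_1,\dots,\alpha_n),\, \alpha_{n+1} \rangle
 \;=\; (-1)^{\,|\alpha_1|\,(|\alpha_2|+\cdots+|\alpha_{n+1}|)}\,
 \langle\, m_n(\alpha_2,\dots,\alpha_{n+1}),\, \alpha_1 \rangle .
```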
And the form itself is just the evaluation form: if you have elements of A ⊕ A*, say a + f and b + g with a, b from A and f, g from A*, then the form is just the evaluation of f on b and of g on a, with an appropriate sign. The next requirement in the definition is that A itself is an A-infinity subalgebra of A ⊕ A* for this A-infinity structure. It can be defined for an associative algebra but also for an A-infinity algebra; it doesn't matter. Then the notion can be reformulated in terms of the higher Hochschild complex, and this reformulation is useful in many respects: it allows applications of homological tools, and it also allows one to extend the notion, by translating everything into the terms of this complex, so that it works categorically and you can extend the range of application of the structure. One important point: here, when you take A*, it is better to suppose that A is finite dimensional, or at least has finite-dimensional graded components, because otherwise A* can give you problems — it is not the same as A, and so on, in the infinite-dimensional case. But in the formulation in terms of the higher Hochschild complex this finiteness requirement is not needed. Now, this complex has many gradings, so let me give you the slice of the complex of degree N, and the component of degree n in that slice. This amounts to Hom from N groups of tensor powers of A, mapped to the N-th tensor power of A, with a certain additional structure which I will explain in a few minutes. This complex actually comes from dualization of a power of the bar complex: you dualize it by taking Hom, as bimodules over A to the power N, from the N-th power of the bar complex to the bimodule A^{⊗N}, with the following bimodule structure on it. This is a key point of the whole formalization: we endow these N copies of A with the following bimodule structure. We do not multiply an element of A^{⊗N} in the obvious way, multiplying each tensor factor on the left and on the right by the corresponding factors; instead, the element by which you multiply from the right is twisted cyclically, so the first factor of your element is multiplied on the right by b2, and so on around the circle. The left–right multiplication is twisted: this is the cyclic structure on the N-th power of A. Then the differential of the bar complex, through this Hom, gives you the differential of our complex. An element of the complex can be pictured as a multi-operation: for each group of inputs you have one output in this bimodule, so you have a cyclic set of operations with their inputs; the number of outputs is N, and they are situated cyclically on a circle. The sum of the numbers of inputs is the small n, which gives the other grading of the slice of the complex. Of course, for N equal to 1 you have Hom into A, where A appears in only one copy, there is no special cyclic structure, and you get just the usual Hom, the Hochschild complex. So now, the complex on which things will be proven: what I described so far is the higher Hochschild complex, and now we pass to the higher cyclic Hochschild complex. We consider not all elements of the Hom but only some of them: if you picture an element as such a wheel of operations, then there is a natural action of the cyclic group Z/N on this multi-operation — you can rotate the wheel so that outputs go to outputs — with some sign attached to this turning.
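In symbols, the pieces being described can plausibly be written as follows (our notation): the degree-N slice collects the Homs from N groups of inputs to N outputs, and the cyclic version keeps only the wheels fixed, up to sign, by rotating inputs and outputs together.

```latex
C^{(N)}_{n}(A) \;=\; \prod_{n_1+\cdots+n_N = n}
 \operatorname{Hom}\bigl(A^{\otimes n_1}\otimes\cdots\otimes A^{\otimes n_N},\; A^{\otimes N}\bigr),
% an element is a "wheel" with N outputs and groups of n_1,\dots,n_N inputs
% between them; the higher cyclic Hochschild complex is the subcomplex of
% invariants under the cyclic group \mathbb{Z}/N rotating the wheel (with signs).
```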
This is how Z/N acts on the Homs, in the Hochschild complex and in the higher Hochschild complex, and when we consider only the operations which are invariant under this rotation, we get the elements of the higher cyclic Hochschild complex. So now let us take the bigger picture: N is not restricted, we take any N, so the number of outputs can be arbitrary as well as the number of inputs, and all these subcomplexes together give the whole higher Hochschild complex. And the Lie structure on this higher Hochschild complex is defined quite naturally in this interpretation of elements as multi-operations. You have a graded pre-Lie structure on such wheels, defined by inserting one output of one operation into all inputs of another operation, with appropriate signs; this is a graded pre-Lie algebra, and then if you pass to the associated Lie algebra — a∘b minus, with sign, b∘a — you get a graded Lie algebra structure on the higher Hochschild complex, and on the higher cyclic Hochschild complex as well. So then, what is the definition of a pre-Calabi-Yau structure in terms of the higher Hochschild complex? It is an element of the shifted higher cyclic Hochschild complex which is a solution of the Maurer–Cartan equation in this Lie algebra. So now let us consider the small subcomplex of the higher Hochschild complex which will be a Lie subalgebra — not only a Lie subalgebra, but one on which the Lie structure can be described explicitly. This small subcomplex comes from a well-known quotient of the bar complex, the small resolution by the kernel of the multiplication of the algebra A, which is also the starting point for defining noncommutative differential forms. Then we define this subcomplex, over the algebra A, as the Hom of bimodules over A to the power N from the corresponding power of this resolution to the same bimodule as I described before, and we again take invariants under the cyclic action. This is our small complex — the N-slice of this complex inside the higher Hochschild complex — and it is a quasi-isomorphic subcomplex in case the algebra is smooth, in the sense that the kernel of the multiplication of the algebra is projective as a bimodule; then it is a quasi-isomorphic subcomplex, so it can be used for the homology calculations.
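In formulas, and hedged in the same way (sign conventions, and whether the multiplication of A is absorbed into the element, differ between sources), the bracket and the Maurer–Cartan condition just mentioned read:

```latex
% Graded Lie bracket obtained from the insertion (pre-Lie) product \circ on wheels:
[f,g] \;=\; f \circ g \;-\; (-1)^{|f|\,|g|}\, g \circ f ,
% and a pre-Calabi-Yau structure is a Maurer-Cartan element m of the shifted
% higher cyclic Hochschild complex:
d\,m \;+\; \tfrac{1}{2}\,[m,m] \;=\; 0
\qquad\text{(or simply } [m,m]=0 \text{ if the product of } A \text{ is absorbed into } m\text{).}
```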
So now we need to choose a basis in this small subcomplex. The basis involves the x's, whose number equals the number of generators of the algebra A, together with dual elements to them, which we denote for now, say, with a star; we need these to describe the basis in this subcomplex. We also pass from an arbitrary smooth algebra to, let's say for now, a free algebra, but it can also be done for path algebras of quivers, more on that later, more precisely. So now A is the free algebra on n generators x_1, ..., x_n. The basis will consist of the following monomials, which are again written on a circle: monomials with labels. You can have labels delta, where the deltas carry the same indices as the x's, the generators of the free algebra, and there can also be some xi's, another type of label; in this picture there is one, but there can be any number of labels xi. In between there are just monomials from the free algebra, u_1, u_2, whatever monomials from the free algebra. These are our noncommutative xi-delta words, which form a basis of the small subcomplex. Now we want to fix a precise embedding of this subcomplex into the higher Hochschild complex. For that we should specify how an element of the basis, a xi-delta word, produces an operation, an element of the higher Hochschild complex, which is a multi-operation. We do it in the following way. In this picture we take a xi-delta word which contains three deltas and two xi's, and this word will define for us a map from the cube of A to the fifth power of A, in the following way. You have three inputs, three monomials from the free algebra, say, and we glue them to the deltas via the x's in these words: we glue them to deltas with the same index; first we find an x and a delta with the same index and glue them here, and we do the same with all three words. Then, how do we read the outputs of the operation? We suppose there is a positive orientation on the plane, so all the arcs here are oriented clockwise. The outputs will be the following: starting from the nearest xi, we read a part of the initial xi-delta monomial and then the part of the input word; the word this gives is the first output, this word here. Then here you read this word and this word, up to the nearest xi; this gives you the second output. Here is the next output, and here, if there is no xi, you read the word from the input, the monomial itself; and then another input gives you the next output. In this way you get five outputs. So the number of outputs is the number of deltas in your word plus the number of xi's, and the number of inputs is the number of deltas. Then you do this gluing in all possible ways and take the linear combination of the results; this defines the operation. So this word is an input here, these two green words are outputs, here this word is an input, this is an output, and so on. You get an element of the higher Hochschild complex, which is a map from tensor powers of A, sometimes a zeroth power, sometimes a first power, with five outputs, so a map to the fifth power of A. This is the realization of a noncommutative xi-delta monomial as an operation. It is a kind of highly noncommutative monomial, because what this operation does is multiply this monomial, not like a usual linear monomial, which we can multiply from the right and from the left, but from a certain number of sides, which correspond to the number of deltas in this monomial; and then you get an output which is not one plain word either, but a tuple of monomials. I will give you some more examples showing how it works. So now we have embedded this small subcomplex in a particular way into our higher Hochschild complex, and note that in this embedding we get, as elements of the small subcomplex, only those Homs where the powers of A in the partition of the incoming element can be only zero or one, as you can see here; this is another characterization of the subcomplex. To make it a little clearer, let me give one example of what can be obtained with the help of these noncommutative words, these elements of the small subcomplex. Let's explain in this language the double Poisson bracket, which is historically known as a structure which allows a noncommutative kind of Poisson bracket, a Poisson bracket on the representation spaces of an algebra: it is a map from A tensor A to A tensor A on the algebra which satisfies the double Leibniz identity, the double Jacobi identity and antisymmetry. So let us obtain this map from A tensor A to A tensor A from a xi-delta word, from a noncommutative monomial. For that we need to take a word which contains two deltas and no xi's; then you have two inputs and they give you two outputs, so this is my map from A tensor A to A tensor A. And you can see that with this definition, if the map is obtained from a xi-delta word, then it automatically satisfies the Leibniz identity. Let me say right away that the symmetry identity comes from the fact that we take the Z_2-invariants, and the Jacobi identity is the additional condition which comes from the Maurer-Cartan equation. But why is the Leibniz identity satisfied? The double Leibniz identity says that if you plug into an input a product of two elements, then you should multiply one part of this element onto this element of the tensor product here, and the other part onto the multiplication there. For the bimodule structure on A tensor A let's consider the outer structure: it means that if you multiply by c from the left you multiply the left component, and if you multiply from the right you multiply the right component from the right. With this notation, the Leibniz identity can be seen exactly from our interpretation of the operation given by a xi-delta word as an operation from the higher Hochschild complex that I just described. Indeed, when you put here a product of two words u_1 u_2, then, when you plug it in in all possible ways, either the division border between the words u_1 and u_2 is above the point where you glue, or it is below it. In the first case you see that the result of your operation is that you just perform the operation on v and u_2, and u_1 is glued to the beginning of the tensor factor; that shows that from the left you multiply the left component of the tensor product. And in the other case, from this sum you get another sum here, where you just multiply from the right by the result of the breaking between the two. So this is the simplest example of how these pre-Calabi-Yau structures, in the form of elements of the higher Hochschild complex or of the subcomplex, produce noncommutative Poisson structures; analogous things happen for a bigger number of inputs.
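For reference, here is the double Leibniz rule just used, written in van den Bergh's conventions for the outer bimodule structure on A tensor A; the precise signs are convention dependent and this rendering is added, not read off the slides.
\[
\{\!\{a,\,bc\}\!\}\;=\;b\,\{\!\{a,c\}\!\}\;+\;\{\!\{a,b\}\!\}\,c,
\qquad\text{where}\quad
b\cdot(x\otimes y)\cdot c\;=\;bx\otimes yc\ \ \text{(outer structure)} .
\]
Antisymmetry reads $\{\!\{b,a\}\!\}=-\{\!\{a,b\}\!\}^{\sigma}$ with $\sigma$ swapping the tensor factors (this is what the cyclic $\mathbb{Z}_2$-invariance gives), and the double Jacobi identity is the extra condition coming from the Maurer-Cartan equation.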
So now, what is the Lie bracket on these noncommutative monomials? Let's take two monomials and compose them: we interpret them as multi-operations and compose them as multi-operations are composed; as I said before, this composition is by insertion of outputs into inputs, the composition of elements of the higher Hochschild complex we just described. So what does this operation mean for elements of the small subcomplex? If we want to take the bracket of two xi-delta words, then, surprisingly enough in a way, the bracket of these two words, obtained through the composition of the corresponding elements of the higher Hochschild complex, of the wheels, gives you an element which is again expressed via xi-delta monomials. More precisely, we do the following: if we want to take the bracket of, say, monomial a and monomial b, we glue an x and a delta with the same index; at this place the x and the delta disappear, we open it up, and we get a new xi-delta monomial. We do this for all x's and deltas and sum up, then do the insertion the other way round, b into a, and sum up again. This linear combination of new xi-delta monomials obtained in this way expresses the same operation, on the level of the higher Hochschild complex, as the one given by the composition. Here there is a picture which explains it, in a way: it shows that when you consider the operations defined by these xi-delta monomials, whenever you insert an element which comes from outside the monomial itself, some x_i, say, sitting in the output, which is what you would do for genuine operations, then in the difference between the compositions of the two operations all such insertions of outside elements cancel with some term from the composition the other way round. So only the insertions I showed before, the insertions of the elements of the xi-delta monomials themselves into each other, contribute to the bracket. This is how one can see combinatorially that the bracket is closed on the small subcomplex. Now let me say something about the homology calculations themselves. We have this small subcomplex and the differential which comes from the differential of the higher Hochschild complex; of course we have it both with and without taking invariants. What we will prove is that for a free algebra on a number of variables greater than or equal to 2 (so it is not true for the polynomial algebra), or for path algebras of quivers with at least two vertices, this complex is pure, so the homologies sit in one place. The differential on this complex, which comes from the higher Hochschild complex, looks like this. An element of the complex is a word in the noncommutative algebra on the variables x_i, on the variables delta_i, in the same quantity as the x_i, and on the xi's, and on such a word our differential is just the substitution of each xi, apart from the one sitting in the first place: when u_1 is empty it will be something a little different, a little twisted, but if the xi sits inside the word it just gets substituted by this expression, a sum of commutators of x's and deltas. If the xi sits in the first place, if there is no u_1 before it, then this xi is also substituted by a kind of commutator of deltas and x's, but the x goes to the end of the word, so you get this expression after substituting the first xi; and then the rest, the other xi's, are substituted as before, and you take the sum of these expressions. So, as you see, this differential substitutes xi's by deltas and x's. Hence we have gradings on this complex: a cohomological grading by the degree in xi, which gets smaller when you apply the differential, and a homological grading by the degree in delta, since you can get new deltas. In this proof we consider the slice of the complex with fixed total degree m in xi and delta. What we essentially prove is that all homologies of the complex sit here, in degree zero with respect to this cohomological grading by xi. The proof goes by induction on m, going through these slices; the basis of the induction is m equal to 2. For m equal to 1 there is a homology sitting here, but if you add an arrow from k, from the field, then this homology disappears, so this is a trivial step. I should say that in the whole proof the most subtle place, where we have to ensure that the homologies vanish, is the place closest to where the homology sits, which is of course not surprising; there we need more careful considerations, using Gröbner basis arguments, mainly for xi-degree equal to 1. So, again, about the structure of how we proceed. The first step is a reduction to a complex with a different differential, a differential which is straightened, so to speak: we consider another differential which does not act differently on a xi depending on whether it is first in the word or not, but just substitutes all xi's in the same way, and we take the sum over all substitutions. The first step, which is not difficult, is to prove that we can reduce the statement for our complex, the main result which we want to prove for our curly differential, to this one. More precisely, I should also say a few words about the following: once we prove the result for the small complex with the curly differential, it follows for the complex where we take invariants under the Z_N action as well, because the Z_N action commutes with the differential: if you turn the wheel, you can differentiate before or after, it doesn't matter. And then purity of the whole complex follows from purity of this small subcomplex, because it is quasi-isomorphic, as I already said. So the main work is done in the complex with the straightened differential. The reduction from the complex we need, with the twisted differential, to the complex with the straightened differential is not difficult, but already here we have some appearances of Gröbner basis considerations, namely for xi-degree of the element bigger than 2, so not in the difficult place, so to speak, but lower down in the picture, where it is easy to see that we can reduce our differential to the straightened one; maybe I do not have time to say how, but it is really not difficult. Then we go to this place, this red line in the grid of the complex, and there the considerations become a bit more subtle: we work modulo the ideal generated by our deltas, which define our differential. Here we need to do something in this ideal, which is quite a small one, so to speak, close to free polynomials, but still Gröbner bases in this ideal allow us to proceed and, again, to get rid of the first xi in the expression and move a xi to the left modulo the image: if the element was in the kernel, then modulo the image we move the xi, and by this we are getting closer to showing that the image coincides with the kernel. Or rather, in the case of the reduction to the straightened differential, we do not always go through moving the xi: we just show that the differential is actually the same, that we can get rid of the first xi modulo the image, so the curly and the straightened differentials coincide. So the reduction does not use any induction on the position of the xi.
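Schematically, the bookkeeping behind this reduction can be summarized as follows; this is an added condensed rendering of what is described above, with signs and the precise cyclic convention for the first position omitted.
\[
d_{\mathrm{straight}}\colon\quad u_0\,\xi\,u_1\,\xi\,u_2\cdots\;\longmapsto\;\sum_{\text{occurrences of }\xi}\ \sum_i\ u_0\,[x_i,\delta_i]\,u_1\,\xi\,u_2\cdots\;+\;\cdots,
\]
that is, every $\xi$ is replaced by $\sum_i[x_i,\delta_i]$, while the original (curly) differential treats the $\xi$ standing in the first position differently: there the commutator is opened up cyclically, with the $x_i$ moved to the end of the word. Both differentials lower the degree in $\xi$ by one and raise the degree in $\delta$ by one, so the total degree $m=\deg_\xi+\deg_\delta$ is preserved on each slice, and purity means that the homology of each slice is concentrated in $\deg_\xi=0$.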
I wanted to show this reduction more precisely, but I think I do not have time any more. Anyway, after we reduce, we proceed with the main proof for the straightened differential, which uses some facts about the generators of the Gröbner basis for the ideal generated by, again, these deltas; so we work again in the quotient of the free algebra by the ideal generated by the deltas. It was done mainly for the free algebra, but then, with some modifications, it can be substituted by the path algebra of a quiver, because this whole theory of Gröbner bases was developed by Ed Green for arbitrary algebras with a multiplicative basis, and path algebras are a particular case of this. There are only a few subtle places here; for example, we consider free polynomials on two types of variables, which is just a free product of two free algebras, and in the case of a quiver it will be the free product of the path algebra of the quiver with the free algebra on the deltas, and this should somehow be presented as a quotient of the path algebra of another quiver, which is possible to do; then we can proceed in this quotient and it works again. So, thank you, I think I should stop here, I am already over time. Just two minutes over, so it's okay. Thank you for your great, interesting talk. Questions, comments? Yes, I have a small question: you just mentioned the free product of these two free algebras; is this in the spirit of Kurosh? Yes, maybe; I mean that you will not have relations between these two sets of generators, so if you take the free product of two free algebras you get a free algebra on the union. Okay, I understand, you partition the alphabet in two. Yes. I also have a question. You consider this Poisson structure, you have two circles, you multiply, you take the bracket, and it becomes one circle. I remember that we once discussed a Hopf algebra on positroids with Gérard and Christophe, and it was also similar, because positroids are characterized as necklaces, that is, some permutation written on a circle, and when we multiply two such circular permutations we find some point where we have to glue, and then it becomes one common circle. So the question is whether there is any relation to that. Your algebra is abstract, but if we consider some concrete algebra, for example functions on the Grassmannian or something like this, and do the same, what will we get? Do you have some examples which are not abstract, where you take some algebra A, for example functions on some manifold, and apply your construction? For example, this double Poisson bracket is an example, but it appeared because this operation was already known to be the important one; so for algebras there are some examples, namely those on this list. Yes, this is interesting to know. And for these positroids, do you glue at a particular place, or at all possible places? Unfortunately it was many years ago; we only used this picture of gluing, I say it because I saw it somewhere, probably in many instances, and I don't remember exactly how it was set up, so we would need to investigate. Can you give me some references? I would have to find them; I don't think we have a paper, but there are some notes, I will try to find them. And in cluster algebras, I think, there are these Poisson structures, maybe also double ones, I think the work by Shapiro and Weinstein: they consider different brackets, and I don't remember whether they double these brackets, but maybe there could be some geometric implementation of your theory. I may have a kind of general question. You mentioned at the beginning a formality result, and there is a Poisson-type structure coming into the picture; of course these are double Poisson brackets, so it's quite different. Does it have a meaning to speak about quantization here, or not at all? Actually, what is this formality supposed to say? That we have a description of the A-infinity structure on the object, so you have everything you can know about the deformation functor as a result; this is what it says about deformations. And this is not only about double Poisson brackets, it is a kind of poly-Poisson structure; this is one of the points. But it also says something about the double bracket: after passing to homotopy we know the picture, we know how it works. Did you want to ask a question? Yes, maybe you are right; I said later that everything is quasi-isomorphic to the initial one: for a smooth algebra, an algebra where the kernel of multiplication is a projective bimodule, these two resolutions, when we take Homs, have the same homologies, so this gives a quasi-isomorphism of the complexes. Again, I understood that they are quasi-isomorphic, but more than that, whether they are homotopy equivalent, here I'm not sure. We use a kind of homotopy argument which is better suited to getting formality, but it goes through the A-infinity structures on homology, this homotopy transfer structure, and I'm not sure; I will think about it. Okay, thank you.
I will talk on homology calculations for the higher cyclic Hochschild complex and on a combinatorial description of the Lie structure on highly noncommutative words. It is based on the texts: Pre-Calabi-Yau algebras and double Poisson brackets (J. Algebra, 2020), and the preprint IHES M/19/14.
10.5446/51289 (DOI)
Well, I'm very grateful for this invitation to speak a little bit about this connection between proofs and knots. I think there is a lot to explore still, so I will just explain the basic ideas. And I love this picture that summarizes the talk: it's about how a proof can be seen as some kind of knotted structure which has to do with the structure of dialogue. It is a long-standing question what a logical proof is, how we should represent it and describe it, and how we could have a properly mathematical notation for proofs. My starting point in this talk is the idea of game semantics. The idea is that every proof of a formula A initiates a dialogue where the proponent tries to convince the opponent and the opponent tries to refute the proponent. This is really a nice interactive understanding of proofs. So here is a typical proof in, let's say, some kind of Gentzen notation, traditional notation. It's a little bit cryptic, a bit difficult to understand what's behind it. What it proves is the so-called drinker formula, which says that in any open café in Paris (we need at least one customer in the café) there exists a specific customer y, a very sober customer y, such that if A(y), which means that y is drinking, then all the other customers in the café are also drinking. Clearly this is counter-intuitive, but it is proved by this short proof in classical logic. This property is not valid in intuitionistic or constructive logic, but it is valid in classical logic, and here is the proof. I said that a proof gives a strategy, some interactive strategy to convince you, so I could play it with you. Imagine I want to convince you of this property, which you think is a little bit strange. We enter the café together, and I say: okay, I know a customer who is very sober, so I can pick someone. For instance, Gérard, if you allow me, I would say: okay, Gérard, you're very sober, so I know that if ever you drink, everybody else will be drinking in the café. And then the point is, well, I may be wrong: you may be drinking, Gérard, but then someone else in the café is not drinking. So my opponent will refute me and say: come on, Paul, you're just wrong; this customer, say Nicolas, you're allowed to be the counter-witness, look, Nicolas is not drinking, so he's a counter-example to your claim. And what the proof does interactively is to allow me to backtrack. This is a little bit hidden here in the fact that there are two existential introductions in the rules: it allows me, as the prover, to backtrack and say: oh, sorry, I was wrong, I shouldn't have picked Gérard at the beginning, I should have picked Nicolas. So the reason why the existential here is not constructive has to do with the fact that I used the witness from the interaction I had with the opponent. In a way, I cheat, but this is exactly the way classical logic works interactively.
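For reference, the drinker formula discussed here, written out in standard first-order notation (this transcription of the slide is added, not verbatim from the talk):
\[
\exists y\,\bigl(A(y)\;\Rightarrow\;\forall x\,A(x)\bigr),
\]
which is provable in classical logic (it follows from the excluded middle applied to $\forall x\,A(x)$) but not in intuitionistic logic, since no single witness $y$ can be produced in advance.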
Instead of speaking about the drinker formula, you can say that this formula says that every proposition is either true or has a counter-example; you can think of the y here as a counter-example. And clearly, before a property is proved, this property still holds, because if someone finds a counter-example, then we have the counter-example; this is the same story I've just told you. So in a way this syntax is a little bit difficult to understand, while the game semantics gives a much more intuitive understanding of it. So let me speak briefly about the way game semantics and algebra are connected. This is through linear logic and a number of connectives which are in fact really adapted to, really consistent with, linear algebra. Here we have negation, which is really like a dual (and I will come back to that in linear algebra) between the game A played by the player and the same game, but now seen from the point of view of the opponent. So negation permutes the roles of opponent and proponent. The sum, here is what it does. It says: let's start with two games, a game A and a game B; imagine this is chess and poker, for instance. The sum of the games is the game where I, as the player, decide whether I want to play chess or poker, and once I've decided, we carry on; we never come back to the other board. This is understood as a disjunction: it's a choice I make as the player. There is a dual connective, which is just the same but where the opponent makes the choice, and this is understood as a form of conjunction, because if I let the opponent, the environment, choose, then clearly I should be a master in chess but also a master in poker if I want to win the game, since I don't know what the opponent will choose. So this is really a notion of constructive conjunction, an "and"; this is the little symbol used for it. But there is also a tensor product, where the two games are played in parallel but only the opponent is allowed to switch boards, so the player always answers on the board where the opponent has just played; this is understood as a classical conjunction, classical in the sense of classical logic. And there is a dual which is the same, but now the player is allowed to switch boards, and the nice thing is that this can be understood as a form of classical disjunction.
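Here is a compact summary of the game reading of these connectives; the symbols used are the standard linear-logic ones, which are assumed here to be the ones on the slides.
$A^{\perp}$ : the same game with the roles of proponent and opponent exchanged (negation).
$A \oplus B$ : the player chooses which of the two games is played (constructive disjunction).
$A \,\&\, B$ : the opponent chooses which game is played (constructive conjunction).
$A \otimes B$ : both games in parallel; only the opponent may switch boards (classical conjunction).
$A ⅋ B$ : both games in parallel; the player may switch boards (classical disjunction).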
So I will show you how to establish, for every formula, for every game (take for instance a determined version of chess), the property A or not-A, where chess, say, is the property A, and not-A, remember, is just the same game with the board swapped. I will give you an interactive strategy which wins in that game. The idea is that I am playing two boards in parallel; in front of me there is a tensor product, and a counter-strategy of tensor type can be seen as a pair of strategies. So let's say I'm playing against two famous Russian chess masters, and I will show you how I can win by playing white here and black here. Of course I'm not crazy: I cannot win against both, or if I wanted to win against both I would need to be a very strong master myself. But here it is by pure logic: it's just by some kind of logical manipulation, logical truth. So I want to prove A or not-A, and the strategy to do that is very simple; it's like cheating, really. It is to let Kasparov start here. Kasparov plays a move, like this, and then I copy what Kasparov has done onto the other board. I can do that because this connective enables me to switch boards: whenever a move has been played on one board, I'm allowed to move to the other board. So I've copied Kasparov's move; Karpov answers; then I move to the other board and I play like Karpov. Then Kasparov answers, and I just move as Kasparov has just moved. In that way, just by a kind of copycat strategy (Karpov really believes he is playing against Kasparov, and Kasparov really believes he is playing against Karpov), in the end one of the two wins. That means I will lose on one board, maybe Karpov wins, so I lose on this board, but I win on the other board. And remember, I want to prove A or not-A, which means I only need to win on one of the two boards. So the idea is that we can understand the fact that the property is true or its negation holds as something purely interactive and purely linguistic, which doesn't have to do with the outside world. It's not about whether the weather today is cold or hot; it's really about pure linguistic phenomena. I mentioned that there is also this exponential modality, which is very nice. I will not speak about it any more in this talk, but it's nice to know that there is something which enables the opponent to reopen boards whenever he is embarrassed. This is what happened in the drinker formula: at some point I was embarrassed as a player, so I reopened a new board and I won on the second board, using information I knew from the first one, the fact that Nicolas was the good witness and not Gérard. This I learned on the first board, and then I used this information on the second board. By the way (maybe I will have time to mention it), this has to do with cofree constructions of coalgebras, of commutative coalgebras, in vector spaces. I will try to come back to this, but now what I will try to show you is that there are connections between these ideas of game semantics and ideas coming from linear algebra and representation theory. But before I do that, there is an important tool coming from categories, and from ideas of Lambek in particular, which is to give a functorial approach to proof invariants, in the same way that, as we will see, there is a functorial approach to knot invariants. In particular, I will start from this idea coming from Brouwer, Heyting and Kolmogorov that a proof of a conjunction really is a pair consisting of a proof of A and a proof of B. The idea, from the game semantics point of view, is that if I claim A and B, my environment, my opponent, could attack me on A or on B, so I need to be able to prove A and to prove B, that is, I should have a proof of A and a proof of B. So this is fine.
And we will see that this will be interpreted by the existence of a cartesian product in categories. But then there is this more mysterious description of a proof of a formula A implies B as an algorithm which is able to turn any proof phi of A into a proof psi(phi) of B. The question is what this means: what is an algorithm? This was, and still is, a question; it's not so clear, whereas here it is quite clear what it means, a pair is a pair. An algorithm is something like saying: okay, I have a notion of algorithm somewhere in the air, but I don't know exactly what it means. The notion of cartesian closed category is an attempt to answer that question by saying: come on, an algorithm will be a map in a specific category. This category should be cartesian, so it should have a cartesian product, and it should be closed in the sense that we should have a family of adjunctions between the functor A times and the functor A implies. What this means is that we have a natural bijection between the set of maps from A times B to C and the set of maps from B to A implies C, and we can think of this as some kind of implication. Clearly a basic example is the category of sets and functions, but there are many, many other examples, and we spend a lot of time, when we study proofs, constructing cartesian closed categories of many shapes. A typical example: every topos is such, every category of sheaves is such. There are many other examples that we can analyze; it's a very rich and interesting topic. But here I want to focus on the free construction, and you will see it's very symbolic. If you want to construct the free cartesian closed category starting from a category, you think about it and say: I should start with the objects, and the objects should be constructed from the objects of the original category, and then the products and the implications. So there is a grammar of objects: the objects of the free cartesian closed category are constructed by this very simple grammar with cartesian product and implication. You can think of them as formulas, or as types, constructed by this grammar; and here, again, the cartesian product can be understood as a kind of conjunction, and the implication as an implication. Now, the morphisms in this category are a bit subtle to define, to describe, and I will just say a word about them: they are lambda terms. So just a word about lambda terms: lambda terms are the terms of a calculus which is a pure calculus of functions (I will say more on that in a moment), and these terms should be considered modulo some notion of beta-eta conversion. What I claim here is that the situation is very similar to what we have in knot theory, where we have these tangle diagrams, considered up to deformation of diagrams, typically the Reidemeister moves.
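Written out, the defining adjunction of a cartesian closed category, and the grammar of objects of the free one, look as follows (standard notation; the grammar is a reading of the slide, and the unit object may or may not be included depending on the variant):
\[
\mathrm{Hom}(A\times B,\;C)\;\cong\;\mathrm{Hom}(B,\;A\Rightarrow C)\quad\text{naturally in }B\text{ and }C,
\]
with, in the category of sets, $A\Rightarrow C$ the set of functions from $A$ to $C$; and, for a generating category $\mathcal{C}$,
\[
T\;::=\;X\;\mid\;T\times T\;\mid\;T\Rightarrow T\qquad(X\ \text{an object of}\ \mathcal{C}).
\]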
So how is the lambda calculus defined? It is a calculus where you describe functions in a given context. Typically, you say there is, for instance, a variable x: this is just a variable in a context where x is declared of type A, and then this term x will be of type A. The two most important rules are the abstraction rule, which says that if you have a term P of type B in a context where the variable x has been declared of type A, then you can construct the function written lambda x dot P. You can think of it as the function which to x associates P of x, and this is the notation here; its type is the type of functions from A to B. I said implication, but you can also think of it as describing a function space, all the functions from A to B: since the variable is of type A and the output is of type B, the function lambda x dot P is of type A implies B. And once we have constructed such a function from A to B, we can apply it to an argument; this is the notation: P applied to the argument gives something of type B, the argument being of type A and the function of type A implies B, or A to B. Then there are three basic rules that deal with the context. So what are the beta and eta rules? They are very cute rules, very beautiful and powerful. The first one says: if I have constructed the function which to x associates P of x, and then I apply it to an argument Q, I can rewrite it into P, but now it's P of Q; this is the way it's written here. Because it's a pure calculus of functions, we have the variable x appearing in P, and we can substitute it by Q. Similarly, there is a rule which says that every term of this calculus can be seen as a function lambda x which to x associates the term applied to the argument x. This is completely formal, completely symbolic, but at the same time it's known to be deeply connected to a language of proofs: these lambda terms, the lambda terms in red, can be seen as proofs of propositions written with implication and conjunction. So there is a little logical calculus here, which you can also think of as a calculus of formulas, and here is the description of the proofs of these formulas. I don't ask you to understand all the details about proofs and formulas; what I want to stress is that there is a completely algebraic construction of the free cartesian closed category. What this means is that whenever I take a category C and a functor into a cartesian closed category D, then, because D is cartesian closed, I can lift this functor from C to D to a functor from the free cartesian closed category into D which preserves, up to coherent isomorphism, the cartesian product and the implication arrow. This construction is extremely important in the construction of what we call proof invariants. I work in a computer science lab, and the reason is that this lambda calculus can also be seen as a language of programs: there is a nice correspondence between proofs and programs, so you can think of it as a very simple programming language.
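A tiny worked instance of the beta and eta rules just described, in standard lambda-calculus notation (added here, not taken from the slides):
\[
(\lambda x.\,f\,x\,x)\;q\;\longrightarrow_{\beta}\;f\,q\,q,
\qquad
\lambda x.\,(p\,x)\;=_{\eta}\;p\quad(x\ \text{not free in}\ p),
\]
so beta-conversion substitutes the argument for the bound variable, and eta-conversion says that a term of function type is determined by how it behaves when applied.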
If you like, you transport, you interpret, all of this: the morphisms are programs and you interpret them into some category. The category typically could be the category of sets and functions, a presheaf or sheaf category, a topos, whatever; as long as it is a cartesian closed category, we have this beautiful little functor. This is the story of what I do all day, constructing this kind of thing. But then people studying knot invariants (and this is the connection with knots) do something extremely similar. They start from the functors, let's say; it's one way to think about the construction of knot invariants. There are many ways to construct knot invariants, but here I will describe the functorial approach, which is to say: if I am able to associate to every colour an interpretation in a ribbon category (and I will explain briefly how to construct such ribbon categories using the representation theory of quantum groups), then, when we have such a ribbon category and a functor into it, we can lift the functor to a functor which preserves the structure of ribbon categories. And this free ribbon category is a category where the morphisms are framed tangles. In particular we can study ribbon knots, and we can associate an invariant to each ribbon knot, because in this category the morphisms from the unit object to itself are ribbon diagrams which are closed, so they are really ribbon knots. So you see there is this fascinating analogy between what we do with proofs in the functorial semantics of proofs and the functorial invariants of knots, and I will try to explain the connection. Let me go very briefly over ribbon categories and string diagrams. String diagrams are a notation for monoidal categories: the idea is that a morphism from A tensor B tensor C to D tensor E will be described as a diagram with three inputs and two outputs, A, B, C as inputs and D, E as outputs, and the flow goes from bottom to top; this is the arrow here. Composition in the category is described by vertical composition, and the tensor product by horizontal composition, putting F and G side by side. Here is a typical example: F tensor identity, so F here, identity here, and then F tensor identity composed with identity tensor G. If you interpret this morphism in string diagram notation you get this picture, and if you interpret this other morphism you get this picture, where you see that we have played with the order in which F and G appear. The point is that in a monoidal category these two morphisms are equal, so we can really trust our eyes up to deformation of diagrams: indeed, this diagram and this diagram describe the same morphism. This is the beginning of a beautiful story where you try to make this topological intuition valid, up to the point where you can say: I have a knot, I describe it as a morphism in a specific category, and the morphism is invariant up to deformation.
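The equation illustrated by the two pictures is the interchange law of a monoidal category, which in symbols reads as follows (a standard fact; this transcription is added):
for $F\colon A\to B$ and $G\colon C\to D$,
\[
(\mathrm{id}_B\otimes G)\circ(F\otimes\mathrm{id}_C)\;=\;F\otimes G\;=\;(F\otimes\mathrm{id}_D)\circ(\mathrm{id}_A\otimes G),
\]
so sliding the boxes $F$ and $G$ past each other along their strands does not change the morphism.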
So this is what I will explain now. A braided monoidal category is just a monoidal category equipped with a braiding: a family of isomorphisms from A tensor B to B tensor A, and this is the way I will draw them. I think many people here know this whole story, but I felt it was good to tell it again anyway. So this is how the braiding is depicted; of course there is an inverse, depicted as a negative braiding, whereas here it's a positive braiding. And there are coherence diagrams. This one, for instance, says that these two sequences of arrows are equal in a braided monoidal category, and diagrammatically it says this: permuting A with B tensor C is the same as permuting A with B and then permuting A with C. There is another diagram which says essentially the same thing for the other configuration. So this is a braided monoidal category, but then we can define, and I will be very much interested in, the notion of balanced monoidal category. It is just a braided monoidal category with a twist, which is defined as a family of isomorphisms that I will depict as a twist like this: we twist the ribbon, and this is why I work with ribbons rather than just strings; you can see this little action on the ribbon. It should satisfy that theta on the unit I is the identity (when we twist the unit, we just do nothing) and this very nice equation which says that when we twist a tensor product, it is the same as braiding, twisting, braiding. This is the way to see it: A tensor B is really A and B in parallel, so if you twist A tensor B, you need to twist A and B independently but also braid them twice. You see, this is a typical example where a purely algebraic coherence property, saying that this map should be equal to this sequence of three maps, coincides with a very topological intuition about how we twist ribbons. Now I carry on; what I'm trying to build is what I will call a ribbon category, so we need a notion of duality. A dual pair between an object A and an object B, where we say that A is left dual to B, is defined as a pair of morphisms: one morphism from the unit object to A tensor B, and another morphism from B tensor A to the unit. You can think of the first as some kind of identity that we are building, and of the second as an evaluation; these are sometimes called the co-evaluation map and the evaluation map. Typically, A could be a finite dimensional vector space and B the dual vector space of forms on it, like V and V*. We ask that these equations, these exact equalities, are satisfied, which are represented like this, and in that case we say that A and B form a dual pair, with A left dual to B. So, anyway, a ribbon category is simply defined as a balanced category (so, if you remember, braiding and twist) in which, moreover, every object A has a right dual, with the further requirement that when I take A* tensor A, twist A and then evaluate, so I get this map from A* tensor A to I, or twist A* and then evaluate, I should get the same.
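For reference, the axioms just sketched, in one standard set of conventions (an added transcription; the orientation of the zigzags depends on the choice of left versus right dual):
\[
c_{A,\,B\otimes C}=(\mathrm{id}_B\otimes c_{A,C})\circ(c_{A,B}\otimes\mathrm{id}_C),\qquad
\theta_I=\mathrm{id}_I,\qquad
\theta_{A\otimes B}=c_{B,A}\circ c_{A,B}\circ(\theta_A\otimes\theta_B),
\]
and, for a dual pair $\eta\colon I\to A\otimes B$, $\varepsilon\colon B\otimes A\to I$, the zigzag identities
\[
(\mathrm{id}_A\otimes\varepsilon)\circ(\eta\otimes\mathrm{id}_A)=\mathrm{id}_A,\qquad
(\varepsilon\otimes\mathrm{id}_B)\circ(\mathrm{id}_B\otimes\eta)=\mathrm{id}_B .
\]
A ribbon category then adds the compatibility $\theta_{A^{*}}=(\theta_A)^{*}$ between the twist and duality.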
And the nice thing is that in any ribbon category the object A* is also left dual to A: you can use the twist and the braiding to build a unit, a co-evaluation map, and an evaluation map, but where A* is now on the right. So A* is not just a right dual any more, it's also a left dual, thanks to the twist structure and this equation. In particular, we have this nice equation satisfied in every ribbon category: twisting is the same as braiding and then playing between evaluation and co-evaluation, where this is the original evaluation and this is the one we deduce from the twist. But we can also define the twist from the fact that A* is at the same time a right dual and a left dual of A. I will come back to that later, because we see similar phenomena appearing in logic, and I claim (this is of course topology, but this is really the purpose of my work and also of this talk) that it is interesting to look at these phenomena from a logical point of view: they are perhaps projections of more, let's say, purely logical structures about negations. Of course, when I say duality I have in mind some kind of negation, and we will see that these dualities can be seen as particular instances, extremely interesting and rich, but instances, of a more general pattern where negation is not involutive any more. Something important in a ribbon category is that when we dualize an object twice, we come back to the original object, which is true for instance for finite dimensional vector spaces, or finite dimensional representations, but is not true any more for general vector spaces, because typically the map from V to its bidual, its double negation, is not an isomorphism. So what we see here, this very nice reconstruction of the twist from the dualities: these equations can also be played at a logical level, and I will come back to that and explain how to do it. Anyway, we have this free ribbon category: there is this beautiful theorem by Shum which says that the free ribbon category generated by a given category C can be constructed. The objects are signed sequences of objects of C, where signed means that epsilon_1, ..., epsilon_k are plus or minus, to indicate the direction of the strands, and the morphisms are framed tangles. Framed tangles means they are ribbons, you can draw them with ribbons, with strands labelled by maps in C. Here is a typical example: a map from A+ to B+ C- D+. You see, A+ is the input, so the map goes in that direction, and the output is B+ tensor C- tensor D+; the minus on the C- means that the flow of the computation goes in that direction. So this is a typical map in the free ribbon category. And clearly, remember that in the free cartesian closed category the maps were proofs, they were lambda terms, very symbolic objects, whereas here everything is purely topological.
And so my purpose in the next fifteen minutes is to show you that there is a way to think about lambda terms, at least in good situations where the lambda terms are linear. I will explain that, and you can think of this in connection with the next talk by Noam Zeilberger. When the lambda terms are linear, we get a slightly mysterious but also very natural connection between proofs and knots. As I was saying, this free ribbon category has the beautiful property that defines it: every time we have a category with braiding, twists and good dualities, we can take any functor from C to D (you should really think of this functor as giving an interpretation to each of the strands) and then, just from the fact that the target category is ribbon, we can lift this functor to a functor where the tangles, framed tangles, and this is very important, in the topological sense, so really modulo deformation, are interpreted as morphisms of this category. It's a way to construct many invariants of knots in topology. I will try to explain how this can be adapted, but before that I think it's nice to spend maybe five minutes explaining how we construct such ribbon categories, before I move back to proofs, because I also want to show that the fact that I look at proofs has to do with finite dimensional versus possibly infinite dimensional representations of quantum groups. The idea is that one way to construct these ribbon categories is to define them as categories of modules over Hopf algebras. Suppose given a symmetric monoidal category V; you can for instance take the category of vector spaces over a field. A bialgebra is an object H of the category V equipped with a multiplication and a comultiplication. I use this blue diagram for multiplication (remember, I always draw from bottom to top), so this is the multiplication of H, and the unit, and the comultiplication, and the counit. We ask these equations: typically this is the bialgebra equation, which says that multiplication and comultiplication are compatible in this way, and similarly for unit and comultiplication, multiplication and counit, and unit and counit. Then an antipode is defined as a morphism from H to H which satisfies these two equations. Whenever we have such a Hopf algebra, a bialgebra equipped with an antipode, we can construct a monoidal closed category of left modules, where the action on the internal hom is defined by this formula. I wrote it in the Sweedler style, and it generalizes the usual construction for groups: you can think of your Hopf algebra as a group, and then what this says is that you should multiply the input by the inverse of h, apply the function, and then multiply by h. This is just the quantum group version, and there is the diagrammatic representation of it, which I can show you here. So this object, the right negation, is an H-module. Similarly, when the antipode is invertible, there is also a way to define a closure on the left.
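In Sweedler notation, the standard formula behind this internal hom is the following; this is the usual convention for the H-module structure on linear maps, assumed here to be the one on the slide.
\[
(h\cdot f)(v)\;=\;h_{(1)}\cdot f\bigl(S(h_{(2)})\cdot v\bigr),
\qquad\Delta(h)=h_{(1)}\otimes h_{(2)},
\]
which for a group algebra ($\Delta(g)=g\otimes g$, $S(g)=g^{-1}$) reduces to $(g\cdot f)(v)=g\,f(g^{-1}v)$, that is, multiply the input by the inverse, apply the function, multiply by the element. The closure on the other side uses $S^{-1}$ in place of $S$.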
And similarly on the other side, except that we need to use the inverse of the antipode, and we get the required properties. So this is a way to get this implication here: what we get is a monoidal closed category on the two sides, left and right. But now maybe we also want a braided monoidal category. For that purpose we introduce a notion of braiding on the Hopf algebra, which is in fact an element of H tensor H satisfying a number of properties that can be represented diagrammatically like this. Now the important point is that every braiding on the Hopf algebra induces a braiding on the category of left H-modules: the idea is that you take a vector of V tensor W, you swap the two factors, and you multiply by the braiding element of your Hopf algebra at the same time as you permute the vectors v and w. What we get from that is a way to relate the right negation with the left negation, and this is extremely important from my logical angle, because it says that this braiding induces a map from the right negation to the left negation, and this can be understood in a very logical way, as I will explain. This map, if we compute it, associates to any form the form obtained by precomposing with the action of u, where u is equal to this element here and can be represented in this way. This is an extremely important element in the theory of quantum groups. The thing is that it has the bad property of not being a group-like element. So, in order to obtain a group-like element in the Hopf algebra, the very natural way is to introduce a twist, and this is where we get back to ribbons: it is just an element of H satisfying the equations that I drew here diagrammatically. And then, when you multiply u by this twist, or by its inverse, you suddenly get a group-like element. There is an element of magic in this, something I try to understand, let's say from the outside, looking at braided monoidal categories and so on. The reason why it's very important is that all the work I'm describing here was really developed by Reshetikhin and Turaev in the nineties, and their fundamental observation is that if we take the finite dimensional modules, this defines a ribbon category. So if I go back to my little picture before, here I have this category of finite dimensional representations of my Hopf algebra with this structure, I can interpret ribbons as morphisms between such representations, and using that we can construct invariants of ribbons. It's a beautiful recipe, and I try to think about its logical meaning.
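For reference, the standard formulas behind this recipe, in one common set of conventions (an added transcription; conventions for R vary between sources): for $R=\sum a_i\otimes b_i\in H\otimes H$ the braiding on modules is
\[
c_{V,W}(v\otimes w)\;=\;\tau\bigl(R\cdot(v\otimes w)\bigr)\;=\;\sum b_i\,w\otimes a_i\,v ,
\]
the Drinfeld element is $u=\sum S(b_i)\,a_i$ (not group-like in general), and a ribbon element is a central invertible $\nu$ with $\nu^{2}=u\,S(u)$; the element $g=u\,\nu^{-1}$ is then group-like, and the twist on a module is given by the action of $\nu$ (or $\nu^{-1}$, depending on conventions).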
And to that purpose I will introduce the notion of dialogue category. Ribbon categories are about ribbons, and since I want to speak about dialogue games, I found it nice to call my categories dialogue categories. You will see they are extremely simple categories; the way they are defined is absolutely obvious. The important thing is the connection with game semantics, the thing I was telling you about before, and the idea that proofs are based on interactions. So a dialogue category is just defined as a category with an object bottom and a natural bijection: we ask that there is a way to turn any map from A tensor B to this object, which I will call bottom and which you can think of, for instance, as the base field, into a map from B into A implies bottom. This is a very familiar situation in linear algebra, and we can do it on the left or on the right. That is the definition of a dialogue category, so it's very simple and primitive. The important thing is that we can then introduce the notion of pivotal dialogue category, which is a category where we can play with the inputs of forms: whenever I have a map from A tensor B to bottom, I can turn it into a map from B tensor A to bottom, and this is the way I like to represent it, it goes like this. Then we ask a coherence diagram which says that turning A and then turning B should be the same as turning A tensor B, and this can be understood as a coherence property of a map between the two negations; the coherence property is here. But what is important is that, beyond pivotal dialogue categories, we can also define a balanced dialogue category, just as a dialogue category with these two negations, a braiding and a twist. The important property is that every balanced dialogue category is pivotal, and an important observation is that we really need the twist to do that: the idea is that whenever I have a map from A tensor B to bottom, I can precompose it with the braiding, but also with a twist on A. The intuition is that the turning operation I was describing can be expressed using a twist and a braiding; the braiding alone would not be sufficient. This is really related to what happens with Hopf algebras: in the Hopf algebra case, if you remember, we needed the little theta to construct a group-like element in H. And here it's the same story: when we have a good ribbon Hopf algebra, the category of general representations, finite dimensional and infinite dimensional, defines such a balanced dialogue category. In particular it satisfies this pivotal coherence property, and from that what we get is that when bottom is the unit object of this dialogue category, the finite dimensional H-modules define a ribbon category. So what before was checked by hand can here be deduced in a purely categorical and formal way. I don't have too much time, so I will just show you a little connection between proofs and knots that makes it even more meaningful. The observation is that in a ribbon category every object bottom defines a left and a right negation, where, whenever you dualize an object, you tensor it with this bottom. And now, in the same way as we constructed the free cartesian closed category, we can construct the free balanced dialogue category, and I will show you that the way it is constructed is very similar to the construction of the free cartesian closed category.
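Spelled out, the defining bijections of a dialogue category, and the extra structure just mentioned, read as follows (the notation for the two negations is added here):
\[
\mathcal{C}(A\otimes B,\;\bot)\;\cong\;\mathcal{C}(B,\;\neg_R A)
\qquad\text{and}\qquad
\mathcal{C}(A\otimes B,\;\bot)\;\cong\;\mathcal{C}(A,\;\neg_L B),
\]
natural in all variables, giving a right negation $\neg_R A=A\multimap\bot$ and a left negation $\neg_L A$. A pivotal structure is a natural way of turning maps $A\otimes B\to\bot$ into maps $B\otimes A\to\bot$, and in a balanced dialogue category this turn is obtained by precomposing with the braiding $c_{B,A}$ and the twist $\theta_A$. In a ribbon category with a chosen object $\bot$, one recovers these negations as $\neg A\cong A^{*}\otimes\bot$.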
So the objects are, let's say, formulas constructed with a tensor product, a left negation and a right negation, and the maps are proofs — proofs of a logic that I call tensorial logic because it has to do with tensorial algebra. The proofs are constructed in exactly the same way, using the same Gentzen-style constructions of proofs. There is of course a little care to be taken about the so-called exchange rule, because here we are really manipulating the hypotheses of a proof, and we manipulate them with knots and ribbons, so we need to keep a little bit of information about that. But this can be done very nicely; it is just a basic adaptation of traditional proof theory. And from that we get the free dialogue category with a ribbon structure. Now, we know that every time we have a ribbon category and we fix an object bottom, this defines a dialogue category, with two negations, on the left and on the right. And just because ours is the free dialogue category, we can construct this functor, and the main theorem — and I will stop here — is that the functor is faithful. What this means is that two proofs in this logic — this is the world of logic, this is the world of topology — are equal in this category if and only if the underlying ribbon tangles are the same. Let me just show you an application of that, because this can be seen as a coherence theorem for dialogue categories. Imagine I take an object in the dialogue category and I map it to its double negation; this map is not invertible — think for instance of the category of general representations of a quantum group. Then we can also take the other double negation, where we apply the left and right negations in the other order. The two turns here enable us to connect these two negations to these two negations. And the point is that if we do that, it is not equal to this one: we need to twist the output as well. To see why, suppose we want to prove that this diagram is commutative in any dialogue category. It is enough to prove that it is commutative in the free dialogue category. How do we do that? We view the two maps as proofs — here are the two maps, this one is here and this composition is here — and then we look at their images through this functor here. The images are just tangles. What the tangles do is track the manipulations we perform on the formulas, and in particular on this bottom: they track the bottoms. Then the two tangles here are equal, and we conclude that the proofs are equal. So I could speak more, but I think I am finished with my time. Yes. Maybe you could leave some time for questions. Maybe I should just say one word about what these tangles represent, and maybe I will just show you this picture. What they represent is the flow of negations in the dialogue between Opponent and Player. If you remember, at the very beginning we had this interaction between the prover and the refuter, and these tangles can also be understood as little strategies — though my drawings are not very good.
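A compact statement of the faithfulness result just described, in notation of my own choosing (grounded in the talk and its abstract, not a transcription of the slides):

```latex
% Free balanced (ribbon) dialogue category generated by a set of atoms:
% objects are the formulas   A, B ::= X \mid A \otimes B \mid \neg_L A \mid \neg_R A,
% morphisms are tensorial proofs, taken up to the equations of balanced dialogue categories.

% There is a functor to the category of (framed, ribbon) tangles
[\![ - ]\!] \;:\; \mathcal{D}_{\mathrm{free}} \;\longrightarrow\; \mathbf{Tangles}.

% Coherence theorem: this functor is faithful, i.e. for tensorial proofs \pi_1, \pi_2 : A \to B,
\pi_1 = \pi_2 \ \text{in}\ \mathcal{D}_{\mathrm{free}}
\quad\Longleftrightarrow\quad
[\![ \pi_1 ]\!] \ \text{and}\ [\![ \pi_2 ]\!] \ \text{are equivalent ribbon tangles modulo deformation.}
```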
So we have these two strategies, where Opponent asks a question — or maybe I could play here — Opponent asks a question and then Player answers here. And so there is this interesting relationship, which I think would be worth exploring more, between proofs and topology. Thank you very much. Yeah. Thank you very much, Paul-André. I'm not sure we can hear you. At least I cannot. Yes. Cannot. Do you hear me, Paul-André? I can hear you. Okay. Good. Does anybody hear me? Yes. Sorry. It was my fault. No, no, it is because now you are on the dialogue mode. Yeah. But I realized that during all that talk I had the sound off. I didn't realize that. So if you wanted to stop me, I was like a raging bull. I don't know. Okay. So, are there questions? Actually, I have a question. It's Maxim. Can you hear me? Yes. I got lost a bit when we discussed all this braiding. Do you introduce it because there is so much literature that is kind of polluted by braidings, or is it really that in this dialogue setting you do not introduce the braiding by hand and it appears by itself? Ah, well, the thing is, there is this dream we have to understand the topology of proofs. Yes. In traditional logic we have this so-called exchange rule — this comes from Gentzen, it is an old tradition — where we need to permute the hypotheses in a proof. And the thing is that usually we don't track the permutations: we say it is a symmetry. But then it makes sense to say that whenever we have such a permutation, we remember it. So it's a kind of generalization, in a natural way. Yes, it is a generalization, but I would say it is a more refined picture, because this can be done, but now we track, we remember. So in the proofs — typically, just to show you a proof — we will remember the permutations. Traditionally we don't, because we don't care. But because we track them, we get this free construction, which is some kind of grammar, a categorical grammar for infinite-dimensional vector spaces with braiding. And the braiding is produced by the Hopf algebra action. That's the idea. Yeah. Maybe I just want to give you a comment, because braiding seems to be a bit artificial, like a fashion, in a sense. Yeah. But if you consider just the notion of bialgebra, without antipode, unit or counit, just a very simple product and coproduct, and you look at how many — at a basis of the maps from the n-th tensor power to the m-th tensor power — yes, of course there is some kind of normal form, but intrinsically there are also some three-dimensional manifolds, some colourings. So even without braiding one can see a three-dimensional picture. Yeah. So that's — indeed. And maybe — okay, you know, really all this work started from this more, let's say, unbraided setting, where I manipulate trees. So I agree. But what I wanted to show is that these phenomena — and I agree with you there, they appear and they are a bit questionable, what are they doing there — can in fact be understood at a logical level. That's what I meant before. Because the thing is, when we look at ribbon diagrams, we see a lot of phenomena, and it is difficult to understand what is really coming from the topology and what is coming from the fact that we can negate and we can turn things.
And so the story that I want to tell — for instance, just to say it very briefly, this thing would have emerged anyway in proof theory, even if we did not braid. This operation I was describing here, of permuting — if we think of a proof as something very concrete, not something in the air, but something that people manipulate in space and time, as we discussed — then we see that this operation is very natural, and then we see that the twist appears here. So there is this surprising connection that we want to understand. But I agree, and I think we need more thought; at some point maybe we should avoid these braidings. My suggestion to avoid braiding would be to make something simpler. And another thing: all these games also relate to logic in a different way, when you consider statements with quantifiers — for all, there exists — it's like a game. Yes, with the exists — yes, so maybe I should mention this last point very briefly, because there is something that I wanted to say: there is an interesting open problem. I mentioned the exponential modality, which you can think of as a cofree commutative comonoid generated by an object. Not very long ago, people like Daniel Murfet studied this construction in the category of vector spaces, and it is in fact connected to Sweedler's finite dual construction. This construction is far from obvious for me, and I would be interested to see how this construction, which we now know on vector spaces, could be lifted to representations. Maybe it exists, but I don't know it. And this is the kind of thing I like to discuss with Gérard, together with the connection with automata theory. All the story I have told was about linear arguments, where we cannot repeat hypotheses. In the case of the drinker formula, the point is that we can construct a cofree commutative comonoid above the existential. Algebraically, the story is explained like this: we can construct the cofree commutative comonoid on top of a formula which contains an existential, and this is what enables us to backtrack and change the witness. If you remember, the first time I was taking Gérard as the existential witness and then I moved to Nicola; algebraically, this corresponds to the fact that I can construct this bank of copies. And honestly, I don't fully understand how this could be expressed in the language of Hopf algebras, but I would love to have elements of an answer, because that helps me to understand the material nature of proofs. That is really the point, in a way. Dear Paul-André, I take the opportunity to speak to you directly regarding this — maybe questions, but I will ask for questions afterwards.
Paul-André, you know that the Sweedler dual construction you described works when the scalars form a field, and, as you mentioned, for vector spaces it encounters problems even when the scalars form a domain, a ring without zero divisors. But I have become aware of a paper by Porst — oh, yeah — and Street. You know it, maybe. Yes. So I don't have to send it to you. Nobody — so, okay, I tried to adapt this construction to a situation with presheaves, a construction where I change the base commutative ring, and so I needed to generalize this. Yes. So I use it, but it is very categorical. Yes. And the question is, we would like to understand better the combinatorics behind it. That's what I mean. Yes. So for very general reasons there exists such a cofree commutative comonoid over, let's say, any commutative ring, but it is not clear why — I mean, what I find fascinating is that you tied all this to the connection with your work on differentiation. I think I would need to understand that better, and so I just wanted to mention this. We can interact in a way. Yeah, of course, I would love to. Other questions? Please. Sorry if I missed it, but why exactly do you need the finite dual here? Do you need it to define the exclamation mark A, or do you need it to dualize? It's about this — okay, I will explain, because it has to do with linear logic. What happens in traditional models of linear logic is that, using topological vector spaces or this kind of thing, we are able to define the cofree commutative comonoid by some kind of variation on the symmetric algebra construction: we get something like tensor powers of A to the n, we symmetrize, we do some clever limit construction — it is not a sum any more — and when we dualize it, this gives the cofree commutative comonoid. In the case of plain vector spaces, with no topology, there is this construction which I find extremely mysterious, where you use the fact that every coalgebra structure induces an algebra structure on the dual, and then you do some kind of clever restriction on the bidual, so that you get the cofree commutative comonoid. I'm not sure this answers your question, but I just wanted to say that I kind of understand it — honestly, I understand it from the outside; I don't have a completely clear combinatorial picture. And this is what I hope I will get from this kind of work by Farah and Christian. Sorry, sorry — we could discuss offline and I will be happy to. But thank you very much. Thank you very much; see you at 13:30. Thank you.
After a short introduction to the functorial approach to logical proofs and programs initiated by Lambek in the late 1960s, based on the notion of free cartesian closed category, we will describe a recent convergence with the notion of ribbon category introduced in 1990 by Reshetikhin and Turaev in their functorial study of quantum groups and knot invariants. The connection between proof theory and knot theory relies on the notion of ribbon dialogue category, defined by relaxing the traditional assumption that duality is involutive in a ribbon category. We will explain first how to construct the free such dialogue category using a logic of tensor and negation inspired by the work by Girard on linear logic. A coherence theorem for ribbon dialogue categories will be then established, which ensures that two tensorial proofs are equal precisely when their underlying ribbon tangles are equivalent modulo deformation. At the end of the talk, we will show how to understand these ribbon tangles as interactive Opponent/Player strategies tracking the flow of negation functors in dialogue games. The resulting diagrammatic description of tensorial proofs as interactive strategies is performed in the 3-dimensional language of string diagrams for monoidal 2-categories (or more generally weak 3-categories) initiated in the mid 1990s by Street and Verity, McIntyre and Trimble.
10.5446/51290 (DOI)
Okay, so I'll try to do my best to pitch it at the right level. It is a privilege to be the last speaker, so I'll try not to take too much time, to keep it short. The idea is to present a generalization of Dirac's equation incorporating colours. There is an essential difference: quarks are endowed not only with half-integer spin, but apparently they have another variable, which takes on not two but three values, and these values are exclusive. We will try to include this colour as a three-valued variable in the equation, so we will need to generalize the idea of half-integer spinors; they will have many more components. And then we will find the spinorial representation of the Z3-graded Lorentz algebra, because everything will become Z3-graded. Z3-graded means that instead of Z2, which is based on two — one generator, which is minus one, whose square is one — here the generator is a cubic root of unity: its square is also a cubic root of unity, but a different one, and only the cube is equal to one. I should tell you that what I will be presenting here is based on joint work with Jerzy Lukierski, from Poland, from the University of Wrocław. There are publications on the arXiv; if anybody is interested, you can look at them. And now let's see. This is just an illustration to tell you what it is about: it is about quarks, which are, in a way, not really hypothetical, because experimentally they give signs of existence, but unlike electrons or protons or other particles they cannot be observed free. You see, they sit inside the nucleons. They are as small as electrons, if not smaller: the nucleon is thousands of times smaller than any atom, and quarks or electrons are even a thousand times smaller than the protons or neutrons. But what is also strange is that quarks cannot propagate freely: they can propagate freely inside the proton, but we cannot extract them. So there is something very strange. If they were just obeying the Dirac equation like any fermions, like electrons, there would be no reason not to observe them freely, so probably there is something different about them. This is the confinement mystery. In deep inelastic scattering, when physicists send very energetic electrons, these penetrate inside the nucleon, inside the proton or the neutron, and then they scatter, and the scattering pattern proves that there are some very small point-like particles inside — but they cannot be extracted. They are free only inside, not outside, so they cannot be directly observed. This is what I said, so I pass to the next slide. Okay. This is just to remind you that there are three different kinds of elementary forces. The electromagnetic force, which we know well; the strong force — quarks interact through the strong force and they carry this new degree of freedom, which is called colour, with three different possible states; and the weak interactions. So quarks interact with everything: they interact electromagnetically, weakly and strongly, while the electrons and other leptons, like the mu mesons, do not see the strong interaction — they do not have colours; they see only the weak and electromagnetic interactions. Here is the picture. And this is what I would like to underline.
According to present knowledge, we have six different types of quarks: there are three families, with two different quark states in each. The family that is really well known is up and down — these quarks constitute neutrons and protons, they are the most common ones. Then there are strange and charm, and top and bottom; these quarks are so heavy that they are observed only in very energetic collisions, so usually we see only the first family. But there are three different families, and each family has two quark states. What is also interesting is that we have these three colours. In quantum chromodynamics, quarks are considered as ordinary fermions, but endowed with — sorry — endowed with these colours, and this is how they compose the particles that are observable: the proton, the neutron, the hyperons, and so on. And identical colours cannot coexist. It is exactly like the half-integer spin of the electron: in any atom, if you have two electrons with the same energy, the same magnetic number, the same orbital momentum, they cannot have the same spin — they must have two opposite spins. Here, if you have quarks inside the proton or the neutron, whichever you choose, they must have different colours. This is the new variable: it is like spin, but it takes on three values and not two. Remember that to describe the Z2 symmetry — that there are only two states of spin, which cannot coexist, plus or minus — well, here, to describe three exclusive states, the natural thing is of course a Z3 grading and not a Z2 grading. Now I come to the Dirac equation, and how it could have been discovered by Pauli — it was not, but this shows you how new degrees of freedom can impose a new symmetry. After the discovery of the spin of the electron, Pauli understood that one Schrödinger equation for one wave function is not enough to describe these two different states. This is why he proposed to describe the dichotomic spin variable by introducing a two-component wave function — two functions, which are now called Pauli spinors. And of course, on these two-component functions you must have Hermitian matrices acting, because all quantum operators acting on states should be Hermitian in order to have real expectation values. Very well; these are the three — this is the basis of the three traceless Hermitian two-by-two matrices. They must be traceless because, if you want to exponentiate them, if you take the algebra of such things, you get a unitary representation; there is a fourth Hermitian two-by-two matrix, which is just the unit matrix, but it is not traceless, so its exponential would not have determinant equal to one. So the Pauli matrices span a three-dimensional Lie algebra, which is the algebra of rotations, and they also span the Clifford algebra of three-dimensional Euclidean space. Now, here is what Pauli proposed first: he wrote the simplest Schrödinger-like equation — in the Schrödinger equation the energy is replaced by i h-bar times the time derivative and the momentum by minus i h-bar times the gradient. The simplest Schrödinger-like equation acting on this two-component wave function would take the energy proportional to the unit matrix, the mass proportional to the unit matrix, and then the momentum — it is a vector, but it has to act on the two-component column.
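For reference, the standard basis being described here is the following (standard conventions; the slides may use a different normalization):

```latex
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
% traceless and Hermitian; they satisfy
\sigma_i \sigma_j \;=\; \delta_{ij}\,\mathbf{1} \;+\; i\,\epsilon_{ijk}\,\sigma_k ,
% so the commutators span the rotation (su(2)) algebra,
[\sigma_i, \sigma_j] \;=\; 2\,i\,\epsilon_{ijk}\,\sigma_k ,
% and the anticommutators give the Clifford algebra of Euclidean 3-space,
\{\sigma_i, \sigma_j\} \;=\; 2\,\delta_{ij}\,\mathbf{1}.
```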
So we multiply it scalarly by the sigma matrices, and this is another two-by-two operator, which is Hermitian. Fantastic — we have a linear equation that looks like a Schrödinger equation for these two components, but unfortunately it is not Lorentz invariant. It does not obey Lorentz invariance, because if we square such an equation — if you iterate it — it becomes diagonal, but then we have the following relationship: there will be an E squared, there will be a momentum squared multiplied by c squared, the mass will give mass squared — fantastic — but there will also be a double product. And this double product destroys the Lorentz covariance, because the relativistic invariant is like this: the pseudo-Euclidean scalar square of a four-vector, the four-momentum — energy and momentum; in relativity, the four-momentum squared is constant, it is the mass squared. But this equation, although it is very simple, does not obey the relativity requirement; it is not relativistically invariant. In order to — you see, when you have something that should be a difference of squares, the natural thing is to produce it as a product of a difference by a sum: you can multiply E plus p by E minus p, and then you have E squared minus p squared. But how to do it? Introduce another Pauli spinor and mix them up. We have psi plus, which is a Pauli spinor with two wave functions, but now the momentum acts on psi minus, and then E acting on psi minus has to come with a minus sign in front of the mass; then by iteration we get rid of this double product — there is no double product anymore once you put it on the other side. And as a matter of fact, here is what happens if we iterate: both psi plus and psi minus now obey the Klein–Gordon equation, which is of course Lorentz invariant. These two equations can be written in a more concise form. We introduce gamma zero, which is sigma three tensored with the unit matrix — this is Dirac's gamma zero — and the gamma k, the three remaining space components, are obtained by tensoring with i sigma two. Why must we put the i here? Because this matrix is Hermitian and that one should be anti-Hermitian, since their squares should reproduce the Minkowski metric: the square of this one is plus one, because sigma three squared is one, and sigma k squared gives one, but this i will give minus one — so we have the proper signature of Minkowski space. So we have now created the Clifford algebra related to the Minkowski metric tensor: the sigma matrices spanned the Clifford algebra of Euclidean three-dimensional space, and these gamma matrices of Dirac span the Clifford algebra of Minkowski space. And of course their commutators give the generators of the Lorentz algebra. As you certainly noticed, the price to pay for this was the introduction of minus the mass — of negative mass, or of negative energy, depending on how you see it. This was the problem Pauli was scared of, but Dirac accepted it, and of course he predicted the positron: the electron has positive mass, but the positron can be regarded as an electron with negative energy, or negative mass — it is just the same. And positrons have been discovered, of course.
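A worked version of the derivation just described, written out in standard notation (a sketch in the usual Dirac representation; signs and factors of c follow common conventions rather than the slides):

```latex
% Naive Pauli-type equation (single two-component spinor \psi):
E\,\psi \;=\; m c^{2}\,\psi \;+\; c\,(\vec{\sigma}\cdot\vec{p})\,\psi .
% Iterating it produces the unwanted cross term,
E^{2}\,\psi \;=\; \bigl(m^{2}c^{4} + c^{2}\vec{p}^{\,2} + 2\,m c^{3}\,\vec{\sigma}\cdot\vec{p}\bigr)\psi ,
% which is not of the Lorentz-invariant form E^{2} - c^{2}\vec{p}^{\,2} = m^{2}c^{4}.

% Coupled pair of Pauli spinors \psi_{+}, \psi_{-} (the mass term flips sign):
E\,\psi_{+} =  m c^{2}\,\psi_{+} + c\,(\vec{\sigma}\cdot\vec{p})\,\psi_{-}, \qquad
E\,\psi_{-} = -m c^{2}\,\psi_{-} + c\,(\vec{\sigma}\cdot\vec{p})\,\psi_{+}.
% Iteration now gives the Klein--Gordon relation for both components:
E^{2}\,\psi_{\pm} \;=\; \bigl(m^{2}c^{4} + c^{2}\vec{p}^{\,2}\bigr)\,\psi_{\pm}.

% Dirac matrices in this representation:
\gamma^{0} = \sigma_{3}\otimes\mathbf{1}_{2} = \begin{pmatrix}\mathbf{1} & 0\\ 0 & -\mathbf{1}\end{pmatrix},
\qquad
\gamma^{k} = i\sigma_{2}\otimes\sigma^{k} = \begin{pmatrix}0 & \sigma^{k}\\ -\sigma^{k} & 0\end{pmatrix},
\qquad
\{\gamma^{\mu},\gamma^{\nu}\} = 2\,\eta^{\mu\nu}\,\mathbf{1}_{4}.
```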
So, relativistic invariance: the Pauli spinors that compose the Dirac spinor transform differently under Lorentz transformations, because they are two different representations of the SL(2,C) group, which is the covering group of the Lorentz group — and everything becomes Lorentz invariant. Now, these two coupled Pauli equations can be written like this, and that is what is called Dirac's equation. The Dirac spinor is four-component, because psi plus and psi minus are Pauli spinors, which have two components each. Now let us see how the Z2 symmetries act on these equations, on these states. If the spin changes sign and the momentum changes sign, the equation remains the same; and if the mass changes sign while psi plus goes to psi minus and psi minus to psi plus, again it is invariant — you get the same equation. So you have a Z2 cross Z2 group. One Z2 describes the half-integer spin, spin up or spin down, the two exclusive states of an electron. The other Z2 symmetry, which was produced because we wanted to make the equation Lorentz invariant, is called charge conjugation; it is a symmetry between particles and antiparticles. So now let us see how the same thing can be done with colours and with Z3. We want to describe not only half-integer spin, but also a new variable that takes on three values. Of course, what we could do — what is done currently in quantum chromodynamics — is to consider just three Dirac particles satisfying the Dirac equation, and attribute colours to them. But then they have to interact through a potential in order to understand why they cannot propagate, and a very strange potential is introduced: instead of decreasing with distance, it increases linearly with distance — the farther you go, the stronger the forces that pull you back together. That theory works; it gives good predictions. But there is another possibility that we want to propose, which is to attribute the colours not to Dirac spinors but to Pauli spinors. So phi plus is a Pauli spinor — we will call it red; chi plus is blue; and psi plus is green. But remember that all particles, even Dirac particles, have to have partners which are antiparticles, so we must also have anti-colours. So there are six other functions: phi minus is a Pauli spinor corresponding to an anti-colour — the anti-colour of red is called cyan; the anti-colour of chi, which was blue, is yellow; and the anti-colour of psi, green, is called magenta. These three colours, by the way — the first three, red, blue and green — are the colours of the pixels you see on your screen or on your TV, because they are additive: these colours add up when you look at them, so red plus green gives you the impression of yellow. But you have probably observed that the other three colours, cyan, yellow and magenta, are used not in TV but in your printers, because they are subtractive: the page is white, and when you print something you subtract one of the primary colours and then you get the anti-colours. So if you subtract cyan, you get red out of white. Okay, so now, how will we do it?
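In the notation of the two-component system recalled above, the two discrete symmetries just mentioned can be summarized as follows (a sketch of the substitutions described in the talk):

```latex
% Z_2 (spin flip):        \vec{\sigma} \to -\vec{\sigma}, \qquad \vec{p} \to -\vec{p}
%   leaves  E\,\psi_{\pm} = \pm\, m c^{2}\,\psi_{\pm} + c\,(\vec{\sigma}\cdot\vec{p})\,\psi_{\mp}
%   unchanged, since \vec{\sigma}\cdot\vec{p} is invariant.
% Z_2 (charge conjugation):   m \to -m, \qquad \psi_{+} \leftrightarrow \psi_{-}
%   maps the two equations into each other, exchanging particle and antiparticle.
% Together they generate the Z_2 \times Z_2 group acting on the Dirac system.
```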
We will follow the same logic that produced the Dirac equation out of the Pauli equations, but now we have to incorporate not only Z2 cross Z2 but also Z3: one Z2 for the half-integer spin, spin up or spin down; one Z2 for the fact that there are particles and antiparticles; and finally the Z3 symmetry, which describes the fact that we have three different colours. So, all in all, the wave function will now have 12 components: three times two times two is twelve. The Dirac particle had a four-component Dirac spinor; now we will have 12 components. Let us see what kind of equation we can write; we follow the same logic as going from Pauli to Dirac. Remember that when we passed from particle to antiparticle, from psi plus to psi minus, the mass parameter m was changed to minus m. Now we have not only the minus sign; we also have the generator of Z3, which we call j: j is just the cubic root of unity, e to the power two pi i over three. Each time we pass to another colour, we have to multiply the mass by j; one step further, by j squared; and only after the third step do we come back. So this is the generalization of the Dirac equation that takes into account not only the particle–antiparticle symmetry but also the colour symmetry. You see, we start with phi plus, red; the mass is the same, positive, okay; but then you have to go to the next colour and to the antiparticle, so the momentum acts on chi minus. Now we apply the energy to chi minus: we have to change the sign, because chi minus is an antiparticle, but we also change the colour, so we must also apply the generator of Z3 — here the mass is multiplied by minus j. Then you pass to another colour and back to a particle, and so on. We must do six such steps in order to come back to phi plus: from phi plus to chi minus, from chi minus to psi plus, from psi plus to phi minus, from phi minus to chi plus, and so on. So we exhaust all six possibilities — Z2 cross Z3, and the direct product of Z2 by Z3 is Z6 — so these are all six sixth-order roots of unity: the first ones are the square and cubic roots of unity, but if you multiply them by minus one you get the full set of sixth roots of unity. Okay, so this is the system we have to investigate. And I remind you that phi plus, phi minus and so on are Pauli spinors, so each of them has two components, and these big objects are 12-component. This is just to remind you what the coefficients multiplying the mass are. Then we can write down the whole thing with six-by-six matrices — in fact they are 12 by 12, because they act on a column vector of 12 complex functions, but behind each of these entries there is a two-by-two unit matrix, so it is better to view them as six-by-six block matrices, and these matrices can of course be obtained as tensor products. Now — and this is another important feature — in order to diagonalize it: remember that the Dirac equation, once you squared it, gave you the proper Klein–Gordon equation. Here that is not possible, because we have this entanglement of six different wave functions with three colours and three anti-colours.
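A hedged reconstruction of the coupled chain as it is described verbally here (the assignment of colours and the placement of the phase factors is my reading of the talk, and may differ from the conventions of the published papers):

```latex
% j = e^{2\pi i/3},  j^{3} = 1,  1 + j + j^{2} = 0.
% Six Pauli spinors: \varphi_\pm (red / cyan), \chi_\pm (blue / yellow), \psi_\pm (green / magenta).
% Each step multiplies the mass prefactor by (-j); one consistent way to write the six-step chain:
E\,\varphi_{+} \;=\;        m c^{2}\,\varphi_{+} + c\,(\vec\sigma\cdot\vec p)\,\chi_{-}, \qquad
E\,\chi_{-}   \;=\; -\,j\,  m c^{2}\,\chi_{-}   + c\,(\vec\sigma\cdot\vec p)\,\psi_{+},
E\,\psi_{+}   \;=\;  j^{2}\,m c^{2}\,\psi_{+}   + c\,(\vec\sigma\cdot\vec p)\,\varphi_{-}, \qquad
E\,\varphi_{-} \;=\; -\,     m c^{2}\,\varphi_{-} + c\,(\vec\sigma\cdot\vec p)\,\chi_{+},
E\,\chi_{+}   \;=\;  j\,    m c^{2}\,\chi_{+}   + c\,(\vec\sigma\cdot\vec p)\,\psi_{-}, \qquad
E\,\psi_{-}   \;=\; -\,j^{2}\,m c^{2}\,\psi_{-}  + c\,(\vec\sigma\cdot\vec p)\,\varphi_{+}.
% The six prefactors \{1, -j, j^{2}, -1, j, -j^{2}\} = \{(-j)^{k}\}_{k=0,\dots,5} are exactly the
% six sixth roots of unity; only the sixth iterate of the system becomes diagonal.
```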
So, in order to get rid of all the mixed double products, we have to go to the sixth power. And it is very interesting, because the sixth power gives you something that looks exactly like a Lorentz invariant — remember E squared, m squared, p squared; with the squares, that was the Klein–Gordon relation. It looks like that, but it is of sixth order, so it is not Lorentz invariant. But if one writes it in this manner, which looks Lorentz invariant but of course is not, then you see that it can be decomposed: it is a product of three different factors. These factors look like Lorentz invariants — look, this one is a genuine Lorentz invariant, E squared minus p squared; fantastic, it is as if you wrote that this is m squared, a Lorentz-invariant quantity — but it is multiplied by two other quantities, which are complex conjugates of each other. They look like Lorentz invariants, but they are not, because they carry the two possible cubic roots of unity; however, they are conjugate, and the whole product gives a real expression. So the idea is that probably behind this there is a Z3-graded Lorentz group: one factor of grade zero, this one of grade one and that one of grade two, and all three together become Lorentz invariant. Now, how much time do I have? — You have until ten past. — So now let us write all this in terms of — you remember there were these two matrices, one for the mass and one for the momentum. If we introduce these two traceless three-by-three matrices, one called B and the other Q3, then the 12-by-12 mass matrix can be written as B tensor sigma three — because there is one and minus one — tensor the two-by-two unit matrix; and the momentum matrix was Q3 tensor sigma one, because the blocks are off-diagonal, tensor the little two-by-two momentum operator built with the Pauli matrices. It is interesting that these two matrices, since they are traceless, generate a Lie algebra if you take the enveloping algebra. This is how the equation looks now, with these tensor products: this is the unit matrix, this is the matrix that you saw, with 1, minus 1, j, minus j, j squared, minus j squared on the diagonal, and this is the off-diagonal matrix. Now, in order to make it look like the Dirac equation again, we put this on the left-hand side and the mass on the right-hand side. There is still something that is not very pleasant: the mass term is not a unit operator, and we would like to make it one. But this is simple — we just have to multiply everything from the left by the conjugate matrices, B dagger and sigma three; then we get the identity here, and this is what we obtain. Now it looks exactly like the Dirac equation, because this can be called gamma zero, this can be called gamma i, and this is just the mass operator. Fantastic. So it is like the standard Dirac operator; the only difference is that only its sixth power is proportional to the 12-by-12 identity. This gives the diagonalization of the system, because now each component satisfies the same sixth-order equation. Unfortunately this equation is not Lorentz invariant, but we will show that it is invariant under a generalization of the Lorentz group, the Z3-graded Lorentz group. You see, this is exactly a Dirac-like equation, but the problem is — sorry — one can say there are many different choices.
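Purely as an algebraic illustration of the factorization pattern described here — one real, Lorentz-invariant factor times two complex-conjugate factors carrying j and j squared — one can note the following identity; the exact dispersion relation of the model may differ in its conventions:

```latex
% With  a = E^{2},  b = c^{2}|\vec p|^{2},  d = m^{2}c^{4}  and  j = e^{2\pi i/3}:
\bigl(a - b - d\bigr)\,
\bigl(a - j\,b - j^{2} d\bigr)\,
\bigl(a - j^{2} b - j\,d\bigr)
\;=\; a^{3} - b^{3} - d^{3} - 3\,a\,b\,d .
% The first factor is the usual mass-shell expression  E^{2} - c^{2}|\vec p|^{2} - m^{2}c^{4};
% the other two are complex conjugates of each other and are not separately Lorentz invariant,
% but the full product is real and of sixth order in E.
```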
The problem is: why do we choose this one? It depends, because we chose one of the generators, namely j; we could have chosen j squared, and then different matrices would appear and we would have a different representation of the same colour Dirac equation. Now the question is how many such Dirac equations are possible, because there are eight different generators. You see these six traceless Hermitian matrices — but that is not the complete space: you still have two other traceless matrices, which are diagonal. We shall give them grades: these three, which have the same shape, will be given grade one, and their Hermitian conjugates will be given grade two; grade zero will be the two diagonal traceless matrices, one called B and the other its Hermitian conjugate, B dagger. And they span a very interesting ternary algebra — I will not go into detail: the skew ternary commutators of these combinations vanish, and the ternary anticommutators, taken over the three cyclic permutations, are all proportional to the unit matrix, with coefficients 1, j, j squared. This is called a ternary Clifford algebra. And of course you have the same for the Hermitian conjugates, built with the two matrices that were not in this set. So we have eight different generators, which generate this algebra; this is a basis of the SU(3) algebra, and this basis was already studied by Victor Kac some twenty-five years ago. So we have this SU(3) symmetry. This is interesting, because we started with Z3, we produced an equation, this equation naturally introduced these two matrices, these two matrices generated the algebra, and then we find that the Z3-generated symmetry is SU(3), which is fine. The problem is that we cannot produce a Clifford algebra with these gamma matrices: we have this gamma zero and these gamma k, but they do not anticommute like the Dirac matrices. No good. The problem is how to implement the action of the Lorentz group on these matrices. There are only two of them so far, but there are many others — how many, we do not know yet. Now, this is the equation as written with these gamma matrices, and we must try to introduce the generators of the Lorentz group that will act on them. Of course these matrices are, as you remember, 12-by-12 matrices, so the generators of the Lorentz algebra have to be 12-by-12 as well, because we will commute them with the gammas; so we must take them from among the 12-by-12 matrices. So — I'll speed up a little. Let us start with this kind of commutator, gamma k with gamma zero; these are new matrices, which should be interpreted as follows: these are the generators of ordinary space rotations, and these are the generators of the Lorentz boosts, which mix time and space. You see that the generators we have produced satisfy exactly what they should: this is the ordinary Lorentz algebra. But the problem is that if you take further commutators, you get something more: in fact, by commuting more and more we get new generators, which we called Q2, Q1 and so forth. Finally, we get the following graded group — graded Lorentz algebra. I'll skip the construction and show you the result. The result is that you have the same commutation relations as with the ordinary Lorentz algebra, but they are graded.
So the grades add up: for example, grade zero with grade zero gives grade zero; something of Z3-grade one with Z3-grade one gives Z3-grade two; two and two give one; zero and one give one, and so forth — the grades are taken modulo three. This is the full set of generators of the graded Lorentz algebra: these are the generators of the ordinary algebra, and these two families are the generators of the grade-one part and the grade-two part. And now the problem is how they act on the gammas, on our colour Dirac matrices. Here comes the most important point. To organize the problem, we use the same method: all possible gamma matrices will be constructed like this — one of these three-by-three matrices, which are the generators, one of the Pauli matrices, and a sigma, which can be sigma zero, the unit matrix, or one of the three Pauli matrices. Now we start to expand: of course there are many, many different commutators to be taken. I will not show you all of it, but the result is this. We start with these two — remember the matrices gamma zero and gamma i, which are what we got when we constructed the colour Dirac equation; these were the two 12-by-12 matrices of our single equation. But when we commute them with the various generators, we create more and more similar matrices. What is amazing is what remains after all these commutations. These are the rules — this is with K, but we have to commute with K one, J one, and so forth, in order to produce more and more. They come in Lorentz doublets, because pairs of matrices transform into each other; and you see, we already have a doublet, because we have a matrix and the matrix obtained by interchanging the colour entries of the first one — but they all represent the same equation. The final result is the following. Of course the generators Q3 and Q3 bar were employed in the construction of the Lorentz generators, and the generators B also appear in the first matrix. So, simple — we are at a combinatorics meeting, so here at least is one piece of combinatorics: all the gamma matrices that can be produced are as follows. We have to choose this one and that one, with a not equal to b, chosen from this set and from that set... three and — no — four, five, six, seven, eight... which one is missing? This one. Okay. Anyway, first of all we see that there could be as many as 42 different realizations of this construction, but after completing all the commutators, out of these 42 we get only six possibilities: six gammas and six gamma tildes, which are their conjugates. This means that there are only six possible different quarks and six possible antiquarks. This is very interesting, because we get exactly what was predicted — of course, it is a prediction into the past and not into the future, because it is already known — but we somehow constructed it by imposing the colours on the Dirac equation, generalizing it, making it Z3-graded. We see that it can be done.
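Schematically, the grading structure just described can be summarized as follows (a sketch; the explicit generators are on the slides):

```latex
% Z_3-graded Lorentz algebra:  \mathcal{L} = \mathcal{L}^{(0)} \oplus \mathcal{L}^{(1)} \oplus \mathcal{L}^{(2)},
% with \mathcal{L}^{(0)} the ordinary Lorentz algebra (rotations J_k and boosts K_k), and
\bigl[\,\mathcal{L}^{(a)},\, \mathcal{L}^{(b)}\,\bigr] \;\subseteq\; \mathcal{L}^{(a+b \bmod 3)},
\qquad a, b \in \{0, 1, 2\}.
% The commutation relations have the same shape as in the ordinary Lorentz algebra,
% but the grades of the generators add modulo 3.
```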
This Lorentz invariance imposes new degrees of freedom, and these new degrees of freedom are exactly six — exactly what is observed: you have not only coloured quarks, but six different coloured quarks. You have three families, and in each family two flavours, as I said: up and down, charm and strange, top and bottom. So you have six, and of course you also get six antiquarks. So this is the result that came from the imposition of Lorentz invariance, and of course this Lorentz group is interesting in itself, because it is a Z3-graded covering of the Lorentz group, with these three graded sectors. But I think I will stop here, because the time is over. So thank you for your patience. Thank you, dear Richard. I take the turn after Gleb, to be the chairman, as asked. Are there questions? I don't know how — the presentation — I don't know how — can you see me better? Yeah, now I can see you. Okay, are there questions? I have a small question. Your sectors are three in number, because you grade by Z3. Yeah. Is it related to your previous work on ternary algebras? Yes, yes, yes, of course, it is inspired by it. Yeah. Okay. Yeah — because we started with these ternary algebras. But finally, here it is simpler, because the variables are not Z3-graded: the variables are just complex functions and complex matrices. What is graded — the grading comes in because the matrices are different; you have different gradings. Any other question, remark or comment? People are tired. People are tired. Okay — yes, you can show the arXiv references, if you like. Yeah, yeah, there is this, of course. Anyway, your slides are accessible, and there is much more in the papers than in the programme. Of course, there are published papers. So I take the opportunity for my closing: please send your slides as soon as possible, so that we can put them on the programme page at IHES. I thank you for giving your talks, and I thank you all for attending.
A generalization of Dirac's equation is presented, incorporating the three-valued colour variable in a way which makes it intertwine with the Lorentz transformations. We show how the Lorentz-Poincaré group must be extended to accommodate both SU(3) and the Lorentz transformations. Both symmetries become intertwined, so that the system can be diagonalized only after the sixth iteration, leading to a sixth-order characteristic equation with complex masses similar to those of the Lee-Wick model. The spinorial representation of the Z3-graded Lorentz algebra is presented, and its vectorial counterpart acting on a Z3-graded extension of the Minkowski space-time is also constructed. Application to a new formulation of the QCD and its gauge-field content is briefly evoked.
10.5446/51291 (DOI)
So, many thanks to the organizers; it is a pleasure to be here. This talk is about some relations between open complexity problems in convex programming and optimization and zero-sum games. To make the link, we go through tropical geometry, that is, we consider convex sets over non-archimedean fields. We will establish some relations between linear programming problems and game problems, and this has some applications. [inaudible] Let me start by describing the mean payoff game problem. The game is played on a graph; there are two players, Max and Min, and the graph is bipartite, which means that the two players alternate moves: Max plays, then Min plays, and so on. With every move there is a payment, made by the player called the minimizer to the player called the maximizer, and this payment may be negative. The game is played with perfect information: both players observe the current position. The payments accumulate, and what Min wants is to minimize what he pays to Max in the long run, per time unit. So Max is interested in the liminf of the average payment per time unit, which he wants to maximize, and Min wants to minimize the limsup of the same quantity. A result of Ehrenfeucht and Mycielski says that this game has a value and that positional strategies are optimal: the value is achieved at a saddle point in the space of positional strategies. A positional strategy is a rule that says: if you are in this state, you play this action; and it is optimal to play in this way regardless of the history of the game. Let us look at an example. Here you have circle states, which belong to the minimizer, and square states, which belong to the maximizer. So perhaps the minimizer goes up here, paying minus 1, or there, paying 2; then he goes up there, paying minus 2; and there is no choice in the next state. So along this play the minimizer pays 1, or, making this other move, the minimizer pays minus 2 and then 1, so minus 1 in total — a win for the minimizer, who has received 1 from Max. But the minimizer can also go up here, paying minus 8, so he receives 8. Then it is the maximizer's turn, and perhaps the maximizer goes up here. The minimizer looks rich, because he has received 8 and then 12, but now there is no choice: in this state the only possible action is to go up, and the maximizer can then force this cycle forever. And on this cycle the minimizer pays Max, and pays Max again — that is the price to pay.
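In standard notation (not the speaker's slides), the objective being described is the following:

```latex
% Mean payoff game: the players alternate; r_t is the payment made by Min to Max at step t.
% From an initial state i, Max maximizes and Min minimizes the long-run average payment:
\underline{\chi}(i) \;=\; \sup_{\text{strategies of Max}} \ \inf_{\text{strategies of Min}}
\ \liminf_{T \to \infty}\ \frac{1}{T}\sum_{t=1}^{T} r_t ,
\qquad
\overline{\chi}(i) \;=\; \inf_{\text{Min}}\ \sup_{\text{Max}}
\ \limsup_{T \to \infty}\ \frac{1}{T}\sum_{t=1}^{T} r_t .
% Ehrenfeucht--Mycielski: the game has a value, \underline{\chi}(i) = \overline{\chi}(i) =: \chi(i),
% and this value is achieved by positional (memoryless) strategies of both players.
```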
So what we can say is that this initial state is winning for Min, because by forcing this cycle the minimizer can make sure of receiving, on average, a positive amount per time unit. But that other initial state is winning for Max, because by forcing this cycle the maximizer can guarantee a positive average payment per time unit. And if the minimizer is greedy and makes the wrong move, he moves to a losing state. So the mean payoff, seen from Max's point of view, is the average weight per move of the cycle that is eventually reached, and these values correspond to the cycles obtained when both players use positional strategies. The mean payoff problem consists in computing these numbers, the mean payoff for every initial state. This problem was first considered by Gurvich, Karzanov and Khachiyan, who gave the first combinatorial algorithm to solve it, and in their 1988 paper they asked whether the mean payoff problem can be solved in polynomial time. Still today, this is open. To understand what polynomial time means here: the difficulty is that, as always in the bit (Turing) model, polynomial time means polynomial in the number of bits of the input, and the payments are given in binary, so their bit size matters. To understand why the problem is still unsolved today: the natural idea, which consists in playing the game for a certain finite time to see whether Max or Min wins, amounts to approximating the infinite-horizon game by a finite-horizon one; but then you need a horizon which is exponential in the size of the input, not polynomial — you need a very long horizon — so this does not give a polynomial algorithm. The problem is all the more motivating because it belongs to the complexity class NP intersected with co-NP; such problems are said to have a good characterization in the sense of Edmonds. When a problem is in this class, you know that it is unlikely to be NP-hard, because otherwise NP would be equal to co-NP; so when a problem is in NP intersection co-NP, it is natural to ask whether it is polynomial. Now, a second problem, which is even more famous, is the linear programming problem. Linear programming is an optimization problem: you have a polyhedron described by rational data, given by linear inequalities, and you want to find a point of the polyhedron which is extremal in a given direction, that is, which maximizes a linear objective function. That is linear programming. And there is a famous open question about linear programming, namely whether it can be solved in strongly polynomial time. — Excuse me, the sound is choppy; we hear you in a very choppy way.
And I had to turn the sound up a lot. Do you perhaps have a small connection problem? Wait a second. Yes. Excuse me. Madame Jasran, you have switched off the videos, right? Yes — so that on the recording that will be made, only the speaker appears. Okay, I see. That was the reason. All right. Normally it should change now. There. I have tried to improve it; it may cut out for a second. Okay. I have tried to improve it. Are you with me now? Yes, yes, we hear you. Okay, so now the sound should be fine. The sound is good now. Yes, sorry. Okay. So, when you say polynomial time, in general you mean that the execution time is bounded by a polynomial in the number of bits of the input. Strongly polynomial time is a stronger requirement; it refers to a different model of computation, the arithmetic model: you require the number of arithmetic operations to be bounded by a polynomial in the number of constraints and variables — in the number of numerical entries of the input — independently of the bit size of these numbers, and in addition the algorithm should remain polynomial in the bit model. So it is a very strong requirement. [inaudible] Let us discuss this aspect for linear programming. The classical method is Dantzig's simplex method. The simplex method walks on the polyhedron: it moves along the graph of the polyhedron, starting from a vertex, and at each iteration you move from one vertex to a neighbouring vertex, improving the objective, until you reach the optimum. In the simplex method you have what is called a pivoting rule: several improving moves may be available that lead to the same optimum — for example, on this polyhedron you could go this way or that way — and the thing that tells you where to go is called the pivoting rule; it decides, at each vertex of your path, which neighbour to move to. What you can check is that performing a single iteration amounts to solving a linear system, so a single pivot can be done in strongly polynomial time. But the problem is that no pivoting rule is known to be polynomial. For the classical rules, such as the largest-coefficient rule, there are polyhedra on which the simplex method takes an exponential number of pivots; and even an ideal rule would have to follow a short path on the polyhedron, and whether such short paths always exist is itself a difficult question. So the complexity of the simplex method is interesting from this perspective: each step is cheap and local, but globally the number of steps is not under control. Now we are going to look at something perhaps less well known outside the optimization community, namely interior point methods, which are the method of choice in linear programming, because they are among the most efficient methods in practice and they have polynomial complexity in the bit model. They are one of the great methods.
So one of the main ideas is to replace your linear programming problem by what is called the barrier problem. In linear programming you have your polyhedron and your vector c, which you can think of as gravity: you want to minimize, to find the lowest point of your polyhedron — it is like a force pulling you towards the optimum. In the barrier problem, you compensate gravity by a repulsive force given by the logarithmic barrier potential. So you add this potential, weighted by a parameter mu, and you consider the problem of minimizing the resulting function; the optimal solutions are well defined, and when mu goes to zero, the barrier minimizer converges to the optimum of the linear program. The central path is the curve drawn by these minimizers: it starts from a certain point in the interior of the polyhedron, the analytic center, and goes to the optimum. Once you have this, you can design path-following methods: you decrease mu, and by taking small steps in mu you stay within a neighbourhood of the central path. So this path through the interior of the polyhedron gives you a way to solve the problem. These are interior point methods; there are many technical issues, but the iterates stay in the interior of the polyhedron. This goes back to the work of Karmarkar, who showed that interior point methods solve linear programming in polynomial time in the bit model; and since then one question has been whether the number of iterations of interior point methods can be bounded well enough to obtain a strongly polynomial method. It is of course very difficult to show directly that no interior point method can be strongly polynomial, so one considers a purified, purely geometric version of the question: one introduces a geometric complexity measure of the central path, namely its total curvature. For a path-following method to take few iterations the total curvature should be small, so there is a link between iteration counts and this purely geometric quantity. In any case, the conjecture was that the total curvature of the central path is linear in the number of variables. This conjecture was motivated by a theorem of Dedieu, Malajovich and Shub. What they considered is the following: when you have a hyperplane arrangement, the arrangement divides the space into cells, and for each cell, with a fixed objective, there is a corresponding central path. The theorem of Dedieu, Malajovich and Shub says that the total curvature of the central path, averaged over the cells of the arrangement, is bounded linearly in the dimension, independently of the number of hyperplanes; so on average, per cell, it is linear.
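In standard notation (not necessarily the speaker's), for a linear program in inequality form the barrier problem and the central path read as follows:

```latex
% Linear program:  minimize  c^{\top} x  subject to  A x \leq b,   with
% P = \{ x \in \mathbb{R}^{n} : A x \leq b \},   A \in \mathbb{R}^{m \times n}.
% Logarithmic barrier problem, for a parameter \mu > 0:
x(\mu) \;=\; \operatorname*{arg\,min}_{x \,\in\, \operatorname{int} P}
\ \Bigl(\, c^{\top} x \;-\; \mu \sum_{i=1}^{m} \log\bigl(b_i - a_i^{\top} x\bigr) \Bigr).
% The central path is the curve  \mu \mapsto x(\mu);  as  \mu \to 0^{+},  x(\mu) tends to an
% optimal solution of the linear program, and path-following interior point methods track
% x(\mu) while decreasing \mu geometrically.
```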
So the conjecture said that the worst case should be like the average case, roughly speaking. It was then observed that this conjecture was too optimistic: if you add many redundant inequalities, you can defeat it. So if you revise the conjecture, what you can still expect is that the total curvature is O(m), where m is the number of inequalities. Now I am going to make a detour through mean payoff games and tropical methods, with applications to these complexity issues. So, mean payoff games: the problem of solving mean payoff games is a well-known open problem, and it has a very clear relation to linear programming and to combinatorial pivoting rules. Namely, if you have a pivoting rule for linear programming which satisfies certain technical conditions — the pivoting rules which are combinatorial, which do not exploit the actual arithmetic of the input — and if with such a rule you can solve linear programs in strongly polynomial time, then you can solve mean payoff games in polynomial time. So in a sense, mean payoff games are easier to solve than linear programming in the strong sense. On the positive side, this means that polynomial-time results in an average sense transfer. For instance, Adler, Karp and Shamir considered a model in which, instead of a single polyhedron, you consider an exponential collection of polyhedra, where you flip each inequality with probability one half; and it is known that the simplex method runs in polynomial time on average in this model. Using the correspondence, you can deduce that mean payoff games can be solved in polynomial time on average in a related sense. But of course this Adler–Karp–Shamir model is a bit special: it does not reflect the difficulty of ordinary instances. So these are the positive results. And now there is a negative one, which is the main point: there is a family of linear programs, with a number of variables and of inequalities linear in a parameter r, for which the total curvature of the central path is exponential in r. So the curvature is really a worst-case phenomenon: as you let the number of variables grow, the total curvature can grow exponentially. Now I am going to explain the tools with which we prove this. The tools, of course, are tropical ideas — tropical modules — and the use of non-archimedean valued fields. So first, a classical example of such a game, and then we will go on. The way to define the mean payoff game is to consider a finite-horizon game and then pass to the limit: two players move a token on a finite graph; for a finite horizon you fix an initial state and a number of turns, the players alternate moves, you add up the payments made along the way, and then you let the horizon go to infinity and look at the payment per turn. That is the game.
And then, the value is indexed by the initial state and by the horizon: the value is a real number depending on the initial position and on the number of turns remaining to be played. This value can be computed by dynamic programming: the value vector in horizon k is obtained from the value vector in horizon k minus 1 by applying an operator, and an optimal strategy is obtained by choosing, in each position, the action that is optimal against the value vector in horizon k minus 1. So the finite-horizon problem you solve in this way. You can then recover the mean payoff: the mean payoff vector is the limit of the value vector divided by the horizon, as the horizon tends to infinity. Computing this limit is of course as hard as solving the game, but there is a combinatorial way to certify it, which I will explain. So, as a summary: you have a value vector, which lives in R^n, where n is the number of states; the value vector in horizon k is obtained by a transformation of the value vector in horizon k minus 1; and the value in horizon 0, when the game is already over, is just 0. The transformation is called the Shapley operator. For a deterministic game, the i-th coordinate of the Shapley operator is a minimum, over the moves available to player Min in state i, of the payment of the move plus a maximum, over the moves then available to player Max, of the payment of that move plus the coordinate x_j corresponding to the new state. So the Shapley operator is obtained by taking minima and maxima over all these moves. More abstractly, one can take an axiomatic point of view on Shapley operators: a Shapley operator is a map from R^n to R^n which is order preserving — monotone for the coordinatewise partial order — and which commutes with the addition of a constant vector. You can take these two properties as axioms; the operators coming from games are the basic examples, and this is the general notion of Shapley operator. The example I showed you was a deterministic game, but the general case is a stochastic game, which is very similar: player Min chooses an action a, player Max chooses another action b, there is a payment for player Max which depends on the state and on the actions that have been chosen, and there is a transition probability — given the current state and the actions a and b chosen by the two players, the next state is drawn according to a probability P. The general Shapley operator is built from these data in the same way, with an expectation over the next state. So what you should remember about this operator — it is called the Shapley operator, or the dynamic programming (Bellman) operator of the game — is that when you iterate it starting from the zero vector, you get the value of the finite-horizon game starting from a given state. And you can replace the zero vector by a vector u, which corresponds to a modification of the game with terminal payments.
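As a minimal sketch of the recursion just described — v^k = T(v^{k-1}), v^0 = 0, mean payoff ≈ v^k / k — here is a toy deterministic example in Python; the states, moves and payments below are invented purely for illustration.

```python
# Value iteration with the Shapley operator of a toy deterministic game.
# min_moves[i] : moves of player Min in state i, as (payment, max_state) pairs
# max_moves[m] : replies of player Max in "Max-state" m, as (payment, next_min_state) pairs

def shapley(min_moves, max_moves, x):
    """One application of T:  T_i(x) = min_a ( pay_a + max_b ( pay_b + x_j ) )."""
    return [
        min(r_min + max(r_max + x[j] for (r_max, j) in max_moves[m])
            for (r_min, m) in min_moves[i])
        for i in range(len(x))
    ]

min_moves = [[(0, 0), (2, 1)], [(1, 1)]]
max_moves = [[(1, 0), (-3, 1)], [(0, 0)]]

v, horizon = [0.0, 0.0], 200
for _ in range(horizon):
    v = shapley(min_moves, max_moves, v)
print("value vector at horizon", horizon, ":", [round(t, 2) for t in v])
print("approximate mean payoff per turn:", [round(t / horizon, 3) for t in v])
```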
In this modified game you play as usual, and when the game stops you receive, in addition, the terminal payment attached to the final state. So it is useful, in order to study the mean payoff game, to consider these modified games. Now, the first point is that the Shapley operator is order preserving, commutes with the addition of a constant, and is nonexpansive in the sup norm; and for the games we consider, with finitely many actions, the mean payoff vector exists — this is an existence result which holds for all the games considered here. And now the main relation, the result from which everything is derived: how can you certify, how can you convince someone, that the game is winning for player Max? For the deterministic case the result specializes as follows. We say that an initial state is winning for player Max if the mean payoff starting from that state is nonnegative — so being winning depends on the initial state. The winning states are characterized by the existence of subharmonic vectors, that is, vectors u such that u is less than or equal to T of u, coordinatewise. This is the analogue of a subharmonic function — a function smaller than its image — with the Shapley operator playing the role of a nonlinear Markov operator. So you look for a vector u, as I said, smaller than T of u, and you ask that u_j be different from minus infinity. A Shapley operator can always be extended, by continuity, to vectors having entries equal to minus infinity, so the condition u less than or equal to T of u still makes sense for such vectors, and the statement is that a state j is winning exactly when there is such a vector with u_j finite. So by studying the space of vectors u smaller than T of u, you determine which initial states are winning. Here is an example: take this game here, with three states. Here I show you the set of subharmonic vectors; they live in dimension three and I draw them in the plane, after normalizing. If the set of subharmonic vectors contains a point with all coordinates finite, this certifies that all the states are winning; but if the subharmonic vectors are concentrated on a face at infinity, then only some of the states are winning. So you can reduce the problem of solving the game to understanding the geometry of this set of subharmonic vectors; I will come back to this. And if you have a stochastic game, or a game with more general transitions, the set of subharmonic vectors is just as useful: you again consider the vectors u smaller than T of u, you look at which coordinates can be made finite, and this characterizes the winning states. Now we bring in the tropical side of the story. The tropical semifield is the set of real numbers completed with minus infinity, in which the addition of the semifield is the maximum and the multiplication of the semifield is the ordinary addition, so that minus infinity is the zero element and 0 is the unit.
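A throwaway Python sketch of the max-plus (tropical) semifield just introduced — addition is max, multiplication is +, the zero is −∞ and the unit is 0 — checking a couple of semiring identities on random samples:

```python
import math, random

T_ZERO, T_ONE = -math.inf, 0.0        # tropical zero and unit

def t_add(a, b):                      # tropical addition  a (+) b = max(a, b)
    return max(a, b)

def t_mul(a, b):                      # tropical multiplication  a (x) b = a + b
    return a + b

random.seed(0)
for _ in range(1000):
    a, b, c = (random.choice([T_ZERO, random.uniform(-5, 5)]) for _ in range(3))
    assert t_add(a, T_ZERO) == a                                        # -inf is neutral for (+)
    assert t_mul(a, T_ONE) == a                                         # 0 is neutral for (x)
    assert t_mul(a, t_add(b, c)) == t_add(t_mul(a, b), t_mul(a, c))     # distributivity
print("max-plus semifield identities hold on all samples")
```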
At the same time we also have modules over the tropical semifield: an action of the semifield on a set of vectors, with axioms analogous to those of modules over a ring, the sum being replaced by the supremum. In particular, (R union minus infinity) to the power n is a module over the tropical semifield, and we can consider, precisely, tropical convex cones: the subsets V of this module which are stable under tropical linear combinations, meaning that for x and y in V and constants lambda and mu, the vector sup of lambda plus x and mu plus y is again in V. This corresponds, if you wish, to a logarithmic deformation of the usual convex world, except that the coefficients may be negative, which makes it different from ordinary convex cones. In a second step one also considers tropical convex sets, where stability is required under tropical affine combinations — the tropical analogue of combinations whose coefficients sum to one — and, equivalently, there is a whole tropical analogue of convex geometry. So, going back to what we want: to solve the game we consider the space of subharmonic vectors, which is the set of certificates of the game. It has a game interpretation: a subharmonic vector is a vector of terminal payments such that, in every state, you prefer to play one more turn rather than stop immediately — that is the natural meaning of this object. And you can check that this set, the set of vectors u smaller than T of u, is stable under tropical linear combinations, because the Shapley operator is order preserving and continuous; so the sets of certificates are closed tropical submodules, and this class of closed submodules is essentially what you get from Shapley operators. A closed submodule also admits a canonical projection, a best approximation, which is useful in the computations. Another ingredient is the notion of adjoint: a tropical linear map has an adjoint, a residuated map going in the other direction, and the pair satisfies a Galois connection. Now, when you consider the Shapley operator of a deterministic game, you remember that it was a min of payments plus a max of payments plus coordinates of x. So it is precisely of the form: the adjoint of one tropical linear map composed with another tropical linear map. Then, by the Galois connection between a tropical linear map and its adjoint, the condition that u is smaller than T of u — the condition that u is a certificate — is equivalent to a finite system of tropical linear inequalities between two tropical linear maps applied to u. You see that this set is defined by finitely many tropical linear inequalities: it is what is called a tropical polyhedral cone. So the set of certificates of the game is exactly a tropical polyhedral cone, defined by finitely many tropical linear inequalities; the number of variables is the number of states of one player, and the number of inequalities is the number of states of the other player.
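Here is a short Python sketch of the structure just described, on made-up matrices: a max-plus linear map, its residuated adjoint (written A-sharp), the Galois connection between them, and the resulting equivalence between u ≤ A-sharp(B ⊗ u) and the two-sided tropical system A ⊗ u ≤ B ⊗ u. In an actual game the payments would be absorbed into the entries of A and B; nothing below is taken from the talk's data.

```python
import random

def trop_matvec(M, v):
    """Max-plus matrix-vector product: (M (x) v)_i = max_j (M[i][j] + v[j])."""
    return [max(m + vj for m, vj in zip(row, v)) for row in M]

def trop_adjoint(A, y):
    """Residuated adjoint: (A# y)_j = min_i (y[i] - A[i][j])."""
    return [min(y[i] - A[i][j] for i in range(len(A))) for j in range(len(A[0]))]

def leq(u, v):
    return all(a <= b for a, b in zip(u, v))

random.seed(1)
A = [[random.randint(-3, 3) for _ in range(3)] for _ in range(2)]
B = [[random.randint(-3, 3) for _ in range(3)] for _ in range(2)]
for _ in range(2000):
    u = [random.randint(-5, 5) for _ in range(3)]
    y = [random.randint(-5, 5) for _ in range(2)]
    # Galois connection:  A (x) u <= y   iff   u <= A#(y)
    assert leq(trop_matvec(A, u), y) == leq(u, trop_adjoint(A, y))
    # Hence  u <= A#(B (x) u)  iff  A (x) u <= B (x) u : a two-sided tropical system
    assert leq(u, trop_adjoint(A, trop_matvec(B, u))) == \
           leq(trop_matvec(A, u), trop_matvec(B, u))
print("Galois connection and two-sided reformulation hold on all samples")
```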
So this is the kind of space we are led to study: tropical cones and tropical polyhedra, the closed sets of this kind. Let us look at the simplest example, to get a feeling for these spaces: a single inequality. A single tropical inequality says that the max of a_r plus x_r is smaller than or equal to the max of b_r plus x_r; this is what is called a tropical half-space. You have what is called the tropical hyperplane, which is the set of points where the maximum of finitely many affine terms like these is attained at least twice; it defines the boundary, and it divides the space into sectors — one sector where the maximum is attained by x_1 plus its coefficient, another where it is attained by x_2 plus its coefficient, and so on. A tropical half-space is then a union of such sectors, depending on whether you put each term on the left or on the right of the inequality; so a tropical half-space, like a tropical hyperplane, is a union of sectors. And when you take what is called a tropical polyhedron, it is an intersection of finitely many tropical half-spaces; there is also an internal description by a finite number of generators, so you have an external description by half-spaces and an internal description by generators. There is an interesting example: the tropical analogue of the cyclic polytope, which can be described explicitly. This kind of tropical polyhedron is a polyhedral complex, an object which has been studied in other contexts. In particular, the cells of these polyhedral complexes carry a remarkable structure: they are alcoved polyhedra. Alcoved polytopes were introduced by Lam and Postnikov; they form a class of polyhedra whose facet normals are taken among the vectors of the type A root system. So the concept of tropical polyhedron, as a polyhedral complex, fits into this setting, with this normalization. Now, the link with non-archimedean convexity. Here we go to the classical, or rather the non-archimedean, side. It is convenient to work over an ordered field which is real closed and equipped with a non-archimedean valuation whose value group is the real numbers. For example, you can take the field of real Puiseux series, or more generally of generalized (Hahn) series, in a parameter t: series with real exponents and real coefficients, which form a real closed field. The intuition is that a series is a limit: the parameter t goes to infinity, and the valuation of a series records its leading exponent, the exponent which governs its growth. In fact everything will behave in this way; it is a very convenient instrument. Now there is another correspondence to recall.
So let me recall the setting: over such a field you consider semialgebraic sets, and you can then look at their images under the valuation, which are subsets of R^n. A basic semialgebraic set is given by finitely many polynomial equalities and inequalities with coefficients in the field; the field is ordered and real closed, and since we work with polynomials we may, if we wish, replace the Puiseux series by generalized series with real exponents — what matters is to have the valuation. The first result is that the image by the valuation of a semialgebraic set is a semilinear set — a set defined by finitely many linear equalities and inequalities — and it is closed. This can be proved in various degrees of generality, for instance using Denef–Pas quantifier elimination from model theory; there are different approaches to the theory. But we do not need quantifier elimination, which is very expensive; there is something more constructive and more explicit. Namely, you can compute directly the image by the valuation of certain basic semialgebraic sets. You consider a number of polynomial inequalities. You can always put each inequality in the form P(x) ≤ Q(x), where P and Q have nonnegative coefficients: you put all the positively-signed terms on one side and the remaining terms on the other side. Then you can do a naive tropicalization: if you have a point x which is a solution of this inequality, you apply the valuation to the left-hand side and to the right-hand side. Since the coefficients are nonnegative there is no cancellation, the valuation of a sum is the maximum of the valuations and the valuation of a product is the sum of the valuations, so you see that the valuation of x satisfies the tropicalized inequality, in which each sum of monomials is replaced by the maximum of the corresponding affine terms. So the important, elementary remark is that the image by the valuation of the set defined by the original inequalities is always included in the set defined by the tropicalized inequalities; the question is when this inclusion is an equality. Here is a simple example. We have a single polynomial inequality, x1 squared smaller than t times x2 plus t to the 4 times x2 x3. Applying the valuation, just to visualize the inequality, the tropicalization reads: 2 x1 smaller than max of 1 plus x2 and 4 plus x2 plus x3.
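A quick numerical illustration of this last example (the substitution is my own choice, not from the talk): put x_i = t^{v_i} and compare the base-t logarithm of each side with its tropicalization for a large value of t.

```python
import math, random

def lhs(t, v):                 # x1^2                 with  x_i = t^{v_i}
    return (t ** v[0]) ** 2

def rhs(t, v):                 # t*x2 + t^4*x2*x3
    return t * (t ** v[1]) + (t ** 4) * (t ** v[1]) * (t ** v[2])

def trop_lhs(v):               # valuation of the left-hand side:  2*v1
    return 2 * v[0]

def trop_rhs(v):               # valuation of the right-hand side: max(1 + v2, 4 + v2 + v3)
    return max(1 + v[1], 4 + v[1] + v[2])

random.seed(2)
t = 1e6
for _ in range(5):
    v = [random.randint(0, 3) for _ in range(3)]
    # log_t of each side approximates its valuation when t is large
    print(v,
          round(math.log(lhs(t, v), t), 3), "~", trop_lhs(v), "|",
          round(math.log(rhs(t, v), t), 3), "~", trop_rhs(v))
```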
And here, since there is a single inequality with this sign structure, the result is that the set you obtain by the naive tropicalization is at the same time the true image by the valuation. For instance, if you consider a set like this one, which is closed, you obtain it by this technique. So we are after a general correspondence. Now take a system with a certain number of such inequalities. The theorem is that a subset of R^n is the image by the valuation of a closed convex semialgebraic cone over the field of Puiseux series if and only if it is a tropical convex set of this kind — equivalently, the set of subharmonic vectors, u smaller than T of u, of a Shapley operator; but now you need Shapley operators of stochastic games, with a specific arithmetic nature of the transition probabilities, namely rational probabilities. So the images by the valuation of convex semialgebraic cones are described precisely by the sets of subharmonic vectors of Shapley operators of this kind: there is an equivalence between non-archimedean convex semialgebraic sets and stochastic mean payoff games with rational probabilities. The point where this is proved is by considering the case of spectrahedra. A spectrahedron is the set of vectors x such that a symmetric matrix pencil — a fixed symmetric matrix plus x1 times a symmetric matrix plus x2 times another one, and so on — is positive semidefinite; if you want, it is the feasible set of semidefinite programming, a generalization of polyhedra. You can define tropical spectrahedra as the images by the valuation of spectrahedra over the field. The way to prove the correspondence is to take a spectrahedron over the field and to describe its tropicalization explicitly: under suitable genericity conditions the tropicalization is given by explicit inequalities, and these inequalities are precisely the ones defining the subharmonic vectors of a stochastic game — and in the game which corresponds to this construction, the transition probabilities are one half. These are the results relating convex programming over non-archimedean fields, which uses the classical convexity toolbox, with tropical convexity; that is more or less the dictionary. Now let us consider the special case of polyhedra, which is better understood. Tropical polyhedra are precisely the images by the valuation of polyhedra over the field of Puiseux series, and also their logarithmic limits: you can see a tropical polyhedron as the limit of a family of classical polyhedra, the images under coordinatewise logarithm of a deformed family, which converge to the tropical polyhedron. Once you have that, you can consider tropical linear programming, which is linear programming over the non-archimedean field, viewed through the valuation.
So the result is: you consider a polyhedron over the field, given by its inequalities, and you consider the image by the valuation. If the collection of inequalities is generic, then the image is exactly the tropical polyhedron defined by the tropicalized inequalities. Without genericity it is a bit more subtle, but there are still sensible results. That is the main theorem relating the non-archimedean polyhedron and its tropicalization: it is exactly the one given by the naive tropicalization, so you have an external description on one side matching an external description on the other. Here, for example, is the tropical polyhedron corresponding to a classical polyhedron for a large value of the parameter, and the theorem says the two descriptions match under the valuation. Moreover, the genericity condition is explicit: you have to consider certain tropical minors, and you can check that you are generic by requiring that in each of these tropical minors the maximum is attained by a single permutation; so there is an explicit control of the genericity, which can be checked in polynomial time. And if you do not have genericity, you still have at least an inclusion. A corollary of this result is that when you have a mean payoff game, you can lift it to a non-archimedean linear programming instance. So if you have an algorithm to solve linear programs, of a suitable combinatorial nature, you can transfer it to the lifted non-archimedean instance and use it to solve the game. So you get the corollary, and you get a bit more, concerning the central path. For the central path, where were we? The central path over the non-archimedean field is an algebraic curve, which is the object whose complexity is in question. You then define the tropical central path as the log-limit of the central path. It is a piecewise linear curve, and it is not too difficult to show that the tropical central path can be described combinatorially: you take your tropical polyhedron, you take its section by the sublevel sets of the objective, and as you decrease the level, the tropical central path is the point you track — the tropical barycenter, in some sense, of the sublevel set. Using this description, one constructs a family of linear programs for which the total curvature of the central path is exponential in the number of variables: the tropical central path makes an exponential number of turns, exponential in the number of inequalities, and this forces the classical central path, for a large enough value of the parameter t, to have exponential total curvature; moreover, path-following interior point methods with the logarithmic barrier need an exponential number of iterations on these instances. I think I should stop here and leave room for the questions. Thank you very much, Stéphane. OK. Do we have a question for Stéphane? Yes — I am not a specialist, but you speak of total curvature; what does that mean exactly? Ah, OK. Do you have the slide? I think it is not far, two or three pages back. Yes, here. The total curvature is the integral of the norm of the acceleration, when the curve is parametrized at unit speed. Yes.
A good way to see it is that you can discretize: the total curvature can also be defined for a polygonal path. You take the angle between consecutive segments and you sum these angles over all the vertices of the polygonal approximation; then you take the supremum over all polygonal approximations inscribed in the curve, and the finer the discretization, the closer you get to the total curvature. Ah, OK. So it is the analogue of the total variation? Er... yes. OK, thank you for the definition. OK. Is there another question? No, no more questions. Thank you very much, Stéphane.
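A small Python sketch of the discretized definition just given: approximate the curve by a polygonal path and sum the turning angles between consecutive segments (planar points assumed; refining the subdivision approaches the total curvature from below).

```python
import math

def total_curvature(points):
    """Sum of turning angles (radians) along a polygonal path in the plane."""
    total = 0.0
    for p, q, r in zip(points, points[1:], points[2:]):
        u = (q[0] - p[0], q[1] - p[1])
        v = (r[0] - q[0], r[1] - q[1])
        cos_a = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        total += math.acos(max(-1.0, min(1.0, cos_a)))   # clamp for rounding safety
    return total

# A polygonal approximation of a half circle: total turning close to pi.
semicircle = [(math.cos(k * math.pi / 100), math.sin(k * math.pi / 100)) for k in range(101)]
print(round(total_curvature(semicircle), 4), "vs pi =", round(math.pi, 4))
```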
Convex sets can be defined over ordered fields with a non-archimedean valuation. Then, tropical convex sets arise as images by the valuation of non-archimedean convex sets. The tropicalization of polyhedra and spectrahedra are of special interest, since they can be described in terms of deterministic and stochastic games with mean payoff. In that way, one gets a correspondence between classes of zero-sum games, with an unsettled complexity, and classes of semialgebraic convex optimization problems over non-archimedean fields. We shall discuss applications of this correspondence, including a counter example concerning the complexity of interior point methods, and the fact that non-archimedean spectrahedra have precisely the same images by the valuation as convex semi-algebraic sets. This is based on works with Allamigeon, Benchimol, Joswig and Skomra.
10.5446/51294 (DOI)
Thanks for the invitation. This is something that I've been working on for the last two or three years, some of the time at least, sometimes all of the time. It has quantum cohomology in its name, but it is actually mostly a combinatorics and algebra project: it is related to quantum cohomology in a way you will see, but it is completely elementary in the sense that all you need is some symmetric function theory. To some extent this is a weakness, to some extent it is a strength. If you actually understand quantum cohomology, unlike me, there are going to be some interesting questions in here for you, I believe, which I so far have not come close to answering. The slides are online, there is some kind of preprint online, and there is a survey that I wrote for FPSAC back when there were not this many results. This story starts with the Schubert calculus, which is, from a modern perspective, about the following two cohomology rings: the cohomology ring of the Grassmannian and that of the flag variety, both over the complex numbers. We are only going to be looking at the first one in this talk, so we're going to be studying things that in a way generalize this cohomology ring. Classically it is well known — since Borel, I think — what this cohomology ring is as a ring: it is the ring of symmetric polynomials in k variables over the integers, modulo the ideal generated by k consecutive complete homogeneous symmetric polynomials. Now somewhere in the 80s and 90s — I think Maxim can say a lot more about that — cohomology has been deformed. The simplest version of this deformation is called the small quantum cohomology ring. For the Grassmannian it has been obtained that it is very similar to the cohomology ring, except you're now dividing not by h_n, but by a deformed version of h_n, and this q is an indeterminate that you add to the base ring: instead of working over the integers you're working over Z[q]. It has since been shown that this quantum cohomology still has a lot of the properties of classical cohomology. In particular, it has a basis as a module over Z[q] formed by the Schur polynomials — not all of them, only the ones where the partition fits into a k by n minus k rectangle. The structure constants are the so-called Gromov–Witten invariants (the simplest ones again). And by now people have written about it in a way that even I can understand, so a lot of this has been studied and there are some pretty interesting properties. Now the goal of this talk is to deform the cohomology ring not just with a single q, but with k parameters. This generalizes QH. This new ring — I don't know what it means geometrically, but I have some indications that it does have a geometric meaning, or at least some kind of meaning; maybe it's K-theory of something, maybe it's a representation ring of something very weird. So let me start from scratch and quickly go over standard notions around symmetric polynomials. I see too much of it as known, but it's still worth mentioning nevertheless. And then I'm going to introduce this deformed cohomology ring purely algebraically. So we start with a commutative ring, boldface k; N is the set of non-negative integers; k is a fixed non-negative integer. P is going to be the polynomial ring in k variables over the base ring — that's the same letter, of course, in a different case.
So, standard notations: if alpha is a tuple and i is a number, then alpha_i is the i-th entry of the tuple alpha; same for infinite sequences. If alpha is a tuple, then x to the alpha means the monomial x1 to the alpha_1, x2 to the alpha_2, and so on, and this means the degree of this monomial, the sum of the entries of alpha. So what is a symmetric polynomial? It's a polynomial that's invariant under permuting the variables. I call the set of symmetric polynomials S; it's a subring of P. Now, a classical theorem — I think by Artin, in his Galois Theory book; I'm not fully sure where it first appears — says that all polynomials form a free module over the symmetric polynomials, and a basis is given by what are called the sub-staircase monomials: the monomials of the form x1 to a power smaller than 1, x2 to a power smaller than 2, and so on, xk to a power smaller than k. So k factorial monomials in total. This is another way of writing this down. The only place where I have found a proof written up that is easy to understand is the LLPT notes — Laksov, Lascoux, Pragacz and Thorup — but it has been in the air for many years. So if k is 3, this basis is just these six monomials. Now what about the ring of symmetric polynomials itself? That ring has several bases, and most of them are indexed by certain sets of integer partitions. A partition is a weakly decreasing sequence of non-negative integers that has only finitely many nonzero entries — things like this, or this, or this. This one does not qualify because it's not decreasing; this one does not qualify because it does not have only finitely many nonzero entries. And I will lazily say k-partition for a partition which has at most k nonzero entries, or in simpler terms a weakly decreasing k-tuple of non-negative integers. Why are these the same thing? Because if I have such a k-tuple, I can just pad it with zeros and I get an actual partition; and conversely, if a partition has at most k nonzero entries, I can remove the trailing zeros until I have only a k-tuple left. Any k-partition has a Young diagram, and the important thing for this audience, I guess, is that I'm writing it in English notation. You probably know what it is: the length of a row is lambda_i, and the number of rows is at most k because it's a k-partition. Now here are the usual symmetric functions, or rather symmetric polynomials — I'm not going to use infinitely many variables until the very end. For each integer m, e_m, the elementary symmetric polynomial, is just the sum of all possible squarefree monomials of degree m. That's one way to put it; another way is to say it's the sum of products of m-element subsets of these x's. So e_0 is 1, and e of something negative is 0 in particular. Now what is e_nu, where nu is a whole tuple of integers? It's just the product of the e_{nu_i}, for i ranging over the indices; in particular, if any of the nu_i is negative, then this is 0. So Gauss proved that the symmetric polynomials, as a k-algebra, are freely generated — as a commutative k-algebra — by the elementary symmetric polynomials. Equivalently, you can write it as a statement about the k-module S: namely, that k-module has a basis given by the family of all e_lambda, where lambda ranges over the partitions whose entries are all at most k. Note that e_m is 0 when m is bigger than k, because you won't have anything in the sum — it would be an empty sum.
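A throwaway sympy sketch of the definitions of e_m and e_nu just given, for k = 3 (these are the standard definitions; nothing here is specific to the talk):

```python
from itertools import combinations
from sympy import symbols, expand, Mul

k = 3
x = symbols(f"x1:{k + 1}")                     # (x1, x2, x3)

def e(m):
    """Elementary symmetric polynomial e_m in the k variables."""
    if m < 0:
        return 0
    return expand(sum(Mul(*c) for c in combinations(x, m)))   # e_0 = 1, e_m = 0 for m > k

def e_tuple(nu):
    """e_nu = product of the e_{nu_i}; vanishes if some entry is negative."""
    return expand(Mul(*[e(n) for n in nu]))

print([e(m) for m in range(5)])                # [1, x1+x2+x3, ..., x1*x2*x3, 0]
print(e_tuple((2, 1)))                         # e_2 * e_1, expanded
```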
Now here is an analogue: instead of the e's, I now have the h's, the h_m, the complete homogeneous symmetric functions — or rather polynomials. These are the sums of all monomials of degree m. So again h_0 is 1, h of something negative is 0, and again I extend this notion to tuples by just multiplying through the entries. And again, the k-algebra S is freely generated by these complete homogeneous symmetric polynomials if you go from h_1 to h_k. Of course, this time it's not true that all the bigger h's are 0, so if you want free generators you need to stop at h_k, even though they keep going on. Equivalently, the h_lambda, where lambda is a partition whose entries are at most k, form a basis of S. Now let me mention this also: the h_lambda, where lambda is a k-partition, also form a basis of S. And these two bases are actually different. So in finitely many variables you have two h-bases; of course, the lambdas here are the transposes of the lambdas there — conjugates, you can call them. And now I'm going to briefly introduce the Schur polynomials. For each k-partition lambda, one way to define s_lambda is this: it's a fraction; on top you have this modified Vandermonde determinant, and on the bottom you have the actual, original Vandermonde determinant. And yes, this is going to be a polynomial. Another way to define it is based on the so-called Jacobi–Trudi formula, which is another determinant, basically. Now, it's well known that this equality holds, and that this is a symmetric polynomial with no negative coefficients. A third way to define it is via semistandard Young tableaux; this is not something that I'm going to use much in this talk, so I'm just mentioning it as something that exists — if you've seen it, it's maybe the simplest way to define the Schur polynomials. And again, if you restrict lambda to k-partitions, you get a basis of the k-module S. So one thing that's neat about the Schur polynomials is that not only are they themselves polynomials with no negative coefficients, but also, if you multiply two of them, you can expand the result in Schur polynomials again with non-negative integer coefficients. They're called c_{lambda mu nu}, and they are known as the Littlewood–Richardson coefficients. There are actually many theorems of this shape, because these coefficients count many different things. Now, to preempt something I will have to do later on, let me slightly extend the Schur polynomials. We have defined s_lambda for any k-partition lambda; now I'm just going to blindly extend this definition to lambda in Z^k. So I allow non-partitions — k-tuples with negative entries and with increases — and I define them in the exact same way. Now it turns out that I don't actually gain anything new this way, because if alpha is such a k-tuple, then this newly defined s_alpha is either zero, or it is one of the old s_lambda for a partition lambda, except possibly with a minus sign. And there is a rule for actually finding this lambda: basically, you have to raise the entries of alpha by the numbers k minus one, k minus two, and so on down to k minus k. If the resulting tuple has a negative entry, you have zero; if the resulting tuple has two equal entries, you have zero; otherwise you have to sort it in decreasing order, you have to watch the sign of the permutation that you do, and then you have to un-raise these numbers again, and then you get the lambda.
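Here is a small plain-Python implementation of the straightening rule just stated for s_alpha with alpha in Z^k: raise by (k−1, k−2, …, 0), discard on negatives or repeated entries, sort with the sign of the permutation, then un-raise. (A sketch of the rule exactly as stated; the printed examples are my own.)

```python
def straighten(alpha):
    """Return (sign, lam) with s_alpha = sign * s_lam, or (0, None) if s_alpha = 0."""
    k = len(alpha)
    raised = [a + (k - 1 - i) for i, a in enumerate(alpha)]      # add k-1, k-2, ..., 0
    if any(r < 0 for r in raised) or len(set(raised)) < k:
        return 0, None
    # sign of the permutation sorting `raised` into decreasing order (count inversions)
    inversions = sum(1 for i in range(k) for j in range(i + 1, k) if raised[i] < raised[j])
    sign = -1 if inversions % 2 else 1
    lam = tuple(r - (k - 1 - i) for i, r in enumerate(sorted(raised, reverse=True)))
    return sign, lam

print(straighten((1, 3)))    # (-1, (2, 2)) :  s_(1,3) = -s_(2,2)  for k = 2
print(straighten((0, 1)))    # (0, None)    :  s_(0,1) = 0         for k = 2
print(straighten((3, 1)))    # (1, (3, 1))  :  already a partition
```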
To be honest, this is probably not worth stating, because it is what you get if you just permute the rows of this matrix correspondingly. Note that the alternant formula still holds in most cases: it holds if all the lambda_i plus k minus i are non-negative. The reason I'm mentioning this is not that it's an exciting result; it's just something that will come up later naturally — I will need those s_alpha. Now, let's define this new quotient. First of all, we pick an integer n greater than or equal to k, and we pick k polynomials a_1, ..., a_k, not necessarily symmetric at this point, such that the degree of a_i is less than n minus k plus i for each i. In particular they can be constants, and in many applications they are constants. Let J be the ideal of the polynomial ring defined as follows: it is generated by these h's, except from each h I subtract the corresponding a. Note that this can count as a deformation, because the degree of each a is smaller than the degree of the corresponding h, so I'm subtracting lower-order terms from each of them. And here is the first result that I actually had to work for, that isn't in the literature: the quotient ring, as a k-module, is again free, with the same kind of sub-staircase basis, except you have to adapt it a bit. The basis has the form x1 to a power smaller than n minus k plus 1, x2 to a power smaller than n minus k plus 2, and so on, xk to a power smaller than n. An overline always means projection, because it's a quotient ring. And this is how many elements this basis has. So this ring is zero-dimensional, as the geometers would call it, or it is a free module of this rank. From now on I will assume that these a's are symmetric polynomials, not just arbitrary polynomials. Then I can use the same differences to generate an ideal of S — previously it was an ideal of P, but now that they lie in S, I can also generate an ideal I of the symmetric polynomials from them — and I can take that quotient. What is that quotient? Well, I can characterize it as follows. We define omega to be the partition that has k entries of size n minus k each, so it's a k by n minus k rectangle, and I define P_{k,n} to be the set of all k-partitions that fit into omega. This is the classical notation for partitions fitting inside each other, and this is the same thing restated in pedestrian terms: all it requires is that lambda has at most k entries and the first entry is at most n minus k. Then the k-module S over I — symmetric polynomials modulo this ideal — is free, with basis the Schur polynomials indexed by partitions that fit into this rectangle. Next, I'm going to assume something even more restrictive: that the a's are actually constants — not just symmetric polynomials, but constants. Actually, some of what I'm going to do does not require this assumption, but I think it's okay to make it here, particularly because the classical cases are particular cases of this. The classical cohomology ring you get when the a's are all zero and k is the integers: then S modulo I becomes the classical cohomology ring, and the s_lambda bars are the Schubert classes. Quantum cohomology still fits into this definition, except you have to put the indeterminate q into the base ring, so k must be Z[q]; and this time not all of the a's are zero — one of them is actually q, with a sign. Then you get the quantum cohomology ring. So this theorem allows us to forget about the geometry, if you want.
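To have the setup in one place, here is my transcription of the definitions and the basis theorems just stated (the same statements, only written out; the specializations in the comment are the ones named in the talk, with the sign of q left unspecified since I did not record it):

```latex
% Deformed ideals (a_1, ..., a_k with deg a_i < n - k + i) and basis theorems:
\[
  J = \bigl\langle h_{n-k+i} - a_i : 1 \le i \le k \bigr\rangle \subseteq \mathcal{P},
  \qquad
  I = \bigl\langle h_{n-k+i} - a_i : 1 \le i \le k \bigr\rangle \subseteq \mathcal{S}
      \quad (\text{for symmetric } a_i).
\]
\[
  \mathcal{P}/J \ \text{is free with basis}\
     \{\, \overline{x^\alpha} \;:\; \alpha_i < n-k+i \ \text{for all } i \,\},
  \qquad
  \mathcal{S}/I \ \text{is free with basis}\
     \{\, \overline{s_\lambda} \;:\; \lambda \subseteq \omega = \bigl((n-k)^k\bigr) \,\}.
\]
% a_1 = ... = a_k = 0 over Z gives classical H*(Gr(k,n));
% a_1 = ... = a_{k-1} = 0 and a_k = +/- q over Z[q] gives small QH*(Gr(k,n)).
```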
So we can just define the quotient, and the theorem tells us that the quotient looks like it should look: it has its nice basis; it doesn't, for example, collapse to zero or something else. So a lot of papers on the subject have, in a way, been using geometry, sometimes to derive combinatorial consequences. With these theorems this is not necessary, at least if you only want the consequences. There is more to be said. Let's look at these bases one more time. Since it's a basis, we can take the dual basis: for each partition mu that fits into this rectangle, we can take the linear form on S modulo I that sends every element to its s_mu coordinate with respect to this basis. Moreover, for every k-partition mu, we define the complement of mu: this is the partition you get — remember, mu fits into this rectangle — when you remove mu from the rectangle and turn the result around by 180 degrees; this is how you can define it formally. And finally, for any three k-partitions alpha, beta, gamma, you can look at the coefficient of s_{gamma-complement} in s_alpha times s_beta — in this quotient ring, of course, not in the actual symmetric polynomials, but for the projections in the quotient ring. This is the generalization of the Littlewood–Richardson coefficients and also of the three-point Gromov–Witten invariants. And the theorem is that these things have the same S_3 symmetry as the Gromov–Witten invariants: I can basically permute alpha, beta and gamma as I want without changing the coefficient. Even more symmetrically, I can rewrite it as the coefficient of the rectangular box in the product of all three Schur polynomials, and there is a way to restate it. So what about some other bases? Actually, I must say I have seen surprisingly little about bases of the cohomology ring, even for classical cohomology: everybody says that the Schubert classes form a basis, but then what else? I haven't seen it written down. Well, here are a couple more. The h's also form a basis, and the transfer matrix is unitriangular with respect to a reasonable order, but I don't know how to describe it — it's not as if I had some analogue of Kostka numbers handy. There is actually a formula for expanding a single h in terms of the Schurs, except that if this m is too big, then the Schurs themselves will not fit into the rectangle, and then you will have to expand them, and this is going to get messy. But it's interesting that when you expand an h, you get Schurs of hook shape. Okay, here is something else. Remember the Pieri rule for symmetric polynomials: it tells you how to multiply a Schur polynomial by a complete homogeneous symmetric polynomial, by an h. The result is a sum of Schur polynomials, and the sum ranges over all k-partitions such that the new partition over the old partition is a horizontal j-strip. So what is a horizontal j-strip? Basically, it's all possible ways to add j boxes to your Young diagram in such a way that no two boxes are in the same column and that you get a partition in the end. There is another way to describe it: the entries of the two partitions interleave each other. This Pieri rule works in the ring of symmetric polynomials, and by quotienting it also works in classical cohomology. Quantum cohomology has a more complicated Pieri rule. Now what about this generalized quotient ring? Well, it turns out that it has a Pieri rule too.
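Before the deformed rule, here is a tiny Python sketch of the classical ingredient just described — enumerating the partitions mu such that mu/lambda is a horizontal j-strip, i.e. mu_1 ≥ lambda_1 ≥ mu_2 ≥ lambda_2 ≥ … with |mu| = |lambda| + j. The example values are my own.

```python
def horizontal_strips(lam, j, k):
    """All k-partitions mu with mu/lam a horizontal j-strip
    (interleaving: mu_1 >= lam_1 >= mu_2 >= lam_2 >= ...,  |mu| = |lam| + j)."""
    lam = list(lam) + [0] * (k - len(lam))
    out = []

    def build(i, prefix, rem):
        if i == k:
            if rem == 0:
                out.append(tuple(prefix))
            return
        lo = lam[i]
        hi = lam[i - 1] if i > 0 else lam[0] + rem
        for m in range(lo, min(hi, lo + rem) + 1):
            build(i + 1, prefix + [m], rem - (m - lo))

    build(0, [], j)
    return out

# Classical Pieri rule:  h_2 * s_(2,1) = sum of s_mu over these mu  (in k = 3 variables)
print(horizontal_strips((2, 1), 2, 3))   # [(2, 2, 1), (3, 1, 1), (3, 2, 0), (4, 1, 0)]
```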
So this part is familiar — this is just the normal Pieri rule. However, the normal Pieri rule will often give you some partitions that no longer fit into the box, into the rectangle, and then you have to reduce modulo the ideal, and you get some kind of decay products, so to speak — error terms, maybe. And here is a way to actually write these error terms down explicitly. So this is the full Pieri rule: all of these new partitions fit into the box, and these coefficients are Littlewood–Richardson coefficients. I'm wondering if you've ever seen a Pieri rule that involves Littlewood–Richardson coefficients — I haven't before; this is kind of new. And it involves the a_i's, but only linearly, which is also a bit weird. And yes, it generalizes the quantum Pieri rule of Bertram, Ciocan-Fontanine and Fulton. However, notice that these Littlewood–Richardson coefficients can be bigger than one, so this is not a multiplicity-free rule. And this is an example, for fairly reasonable n and k. Now you might wonder: what about multiplying by an e instead of an h? In classical symmetric function theory there is a complete symmetry between the two. In cohomology there is still a symmetry — it's a little bit tricky because you have to swap k with n minus k, but the symmetry is still there. Even in quantum cohomology the symmetry is still there; Postnikov, I think, has shown it. Well, you can forget about that symmetry now, because at least it's not obvious whether it exists. The thing is, in this Pieri rule for the h's you have linear terms — only one a_1 here, only a_2 here, only a_3 here — but here, for the e's, you get squares, for example. And I have not been able to make any sense of this formula; I don't even have a conjecture for what the general rule would be, if there is a Pieri rule for e. And here is another example, for those who might think it should be easy to multiply at least by e_k: it's kind of the extreme case, where you're just adding a column. Well, but you're adding a column and then you're reducing modulo the ideal, and reducing can give any kind of mess — so even this case is not easy. Now, I've been talking about reducing. In quantum cohomology there is a pretty nice trick for reducing any Schur polynomial to fit it into the box: you basically just keep removing certain kinds of hooks from it until it fits into the box, and what you get is a Schur polynomial multiplied by some sign and some power of q. Now, is there such a thing for the generalized quotient? Here is an example where we are reducing 443. Now, 443 isn't a far cry from the rectangle — the rectangle here is 333, so 443 just has two extra cells. So if you get such a messy result, it feels kind of hopeless; it feels like it just gets messy, and that's sort of all you can say. Well, it turns out that this mess can actually be controlled. So here is a reduction rule — more precisely, I call this a straightening rule; it's one step of the reduction, and if you want to fully reduce something, you have to apply it many times. Given a k-partition mu that does not fit into the box — so its first entry is bigger than n minus k — I take all possible k-tuples whose first entry is mu_1 minus n (my old first entry lowered by n) and whose other entries are each either increased by one or left fixed. That's 2 to the k minus 1 many terms, which are usually not partitions: they usually have a very negative first entry, and even the next entries may wiggle instead of being decreasing, so it's not a partition at all.
Then my Schur polynomial, in this quotient, equals this sum. Oh, and I don't sum over the entire set — I sum only over a certain slice of it, only over tuples of a given size; together with this other sum it would be the whole set. So this is weird, and this is the reason why I introduced Schur polynomials for non-partitions: as I said, these lambdas are usually not partitions, but you can straighten them out. And again you see only linear terms in the a's. However, in the general case these terms will still not fit into the rectangle, and then you have to apply the rule many times, and this is how you get all the higher-degree terms. So I call it a rim hook algorithm, but frankly I don't know what the rim hooks here are: I understand this only in terms of the tuples, not in terms of Young diagrams. I suspect that this old method, I think by King, with the slinkies, can say something about it; unfortunately I don't quite understand that method, so maybe somebody here can help me with it. Okay, now for the holy grail — and the holy grail is still well in its place: do we have non-negativity? Are the structure constants non-negative? This has to be interpreted correctly, because we have all these a_i's now, so the structure constants are polynomials in the a_i's. And if you look here, you see, for example, a lot of negative signs; but since they're polynomials in the a_i's, you might think that you could put some signs in front of the a_i's and then it would be non-negative. Well, apparently that's true. So I re-indexed the a_i's as b_i's with signs in front of them, and I pick three partitions lambda, mu, nu that fit into the rectangle. The claim is that the coefficient of this — what's going on with the sound? — so, is this always the same sign? Christian, are you with me? Yes, it is strange, but it is; we hear you, Darij. So: the coefficient of one Schur polynomial in the product of two Schur polynomials has a predictable sign. I'm not going to say it's positive, because the sign is there — you would also have to change the signs of the Schur polynomials, depending on the degree. This has been checked using Sage for all n up to 8. I'm not fully sure I should believe it, because honestly I would like to see a few more n's, but it gets harder and harder to check. And if it's true, it would generalize the positivity of the Gromov–Witten invariants; seeing that that has only recently been proved combinatorially, this would probably not be very easy to prove combinatorially either. And this is of course one of the reasons why I think this ring could have a geometric meaning. As I already mentioned, there is another basis: you can take the monomial symmetric polynomials and then take their projections, and that's still a basis. I don't know what the structure constants are; I'm just saying that it's not hard to prove this, using triangularity again. Here is something that's not a basis: the power sums do not form a basis, and that's not because of positive-characteristic issues — even in characteristic zero this is not a basis, even if the a_i are zero. I'm kind of curious what's going on there, what kind of subring we get, but I don't know. I haven't actually looked at the forgotten symmetric functions, but there is a bunch of other bases that could be checked. So, what are reasonable questions?
One is whether S modulo I has a geometric meaning — and if not, why do we have all these nice properties? Because for an ideal like this there is no guarantee of nice behaviour in any way: in theory you could have some weird big Gröbner bases, and reduction could give you big messy expressions with no patterns. The next question is partly resolved. This is due to something Christian Krattenthaler asked in one of his papers: what happens if we replace the h's generating the ideal by power sum symmetric functions? A high school student working in the MIT PRIMES project last year has proved that the basis theorem still holds, so the Schur basis is still a basis for this quotient. I don't know any further properties so far; I don't think there is a symmetry, and I haven't looked at the Pieri rules. The other catch-all question is which other properties of quantum cohomology generalize. I have tried Postnikov's curious duality and the cyclic hidden symmetry; I have tried the duality between k and n minus k. They don't seem to generalize — maybe somebody with more intuition for the geometry can say how I should tweak them to make them generalize, but if you just follow the algebra, it doesn't seem to work out. Is there an analogous construction for the flag manifold? Now, I have to admit that the quantum cohomology of the flag manifold already has a lot of deformation parameters, and I don't quite see how to deform it further, so I'm not actively expecting something to happen here. Is there an equivariant analogue? I started working on this; I haven't gotten very far. And what about quotients of quasi-symmetric polynomials? This is a wild card: sometimes you get something interesting there, sometimes you just don't. There is an S_k-module structure on the quotient — this time I'm talking about the quotient of all polynomials, not of the symmetric polynomials, because the symmetric polynomials carry a trivial S_k-action. So if I quotient all polynomials, what is the S_k-module structure? Well, it turns out — and I've been too lazy to write this up for over a year now — that this is the S_k-module you would basically expect: a power of the regular representation. If you've seen diagonal harmonics, or not diagonal, just regular harmonics, you would not be surprised by this. And finally — how much time do I have left? Until twenty past? Oh, okay, so ten or thirteen minutes, depending. Okay, then I can talk about the proofs.
I've tried. It feels like you should be able to mix h's and e's here arbitrarily, but it's not clear how; and if you think about it, you don't want to mix them randomly, that will give you very random rings. But the second asymmetry you can fix by also deforming the e's. Here is what comes out. I'm switching to boldface letters here because these are now elements of Lambda, symmetric functions. Let a_1, ..., a_k and b_1, b_2, b_3, and so on be symmetric functions with appropriate degree conditions. Then this quotient is a free k-module with the same basis that we know and love already from cohomology: the same Schubert classes, the Schur functions whose shapes fit into the rectangle. So now you have infinitely many parameters. I do not know much more about this quotient. There is probably some really low-hanging fruit here, like other bases, but honestly I wouldn't expect too much to hold for it, because this one really has no hint of a geometric meaning. It just felt like a nice thing to have seen.
So let me say a few words about the proofs, except for the S_k-action. The main ideas: first of all, the quotient of all polynomials, not just the symmetric ones, can be handled with Gröbner bases; in hindsight, this has essentially already been sketched by Conca, Krattenthaler and Watanabe. And a remark for everybody who is a bit scared of this: people usually talk about Gröbner bases over a field, and the existence theorem does require a field. But if you just want to use a given Gröbner basis, if you already know that something is a Gröbner basis, you only need a commutative ring. It's like with vector spaces: for every vector space to have a basis you need a field, but if you have a free module with a basis over a ring, you can use it; you don't need the ring to be a field anymore. So in the same way, this boils down to finding an explicit Gröbner basis for the ideal J. Using that and Jacobi-Trudi, you can show that the symmetric-polynomial quotient also has the right basis, and the rest, more or less, is computing with symmetric functions.
I will probably not go through the Gröbner basis computation here; it would take a long time, and these are just-in-case slides. But here is the Gröbner basis of this ideal J, where x_{i..k} is shorthand for the variables x_i, x_{i+1}, ..., x_k. So the Gröbner basis does not use the complete homogeneous polynomials in all the variables; it uses them only in certain tail subsequences of the variables. If not for the deformation parameters, this would already be a Gröbner basis. Now you have to deform it, and you deform it in this somewhat strange way, with the a-coefficients times elementary symmetric polynomials in the remaining variables. This is with respect to the degree-lexicographic ordering on the monomials, and you get a very nice Gröbner basis with really nice leading terms, namely x_i^(n-k+i). Then Macaulay's basis theorem tells you what a basis for the quotient ring is.
Now, how do you get from all polynomials down to symmetric polynomials? Using Jacobi-Trudi, you can reduce each Schur polynomial that does not fit into the rectangle to Schur polynomials that do fit; it's a bit of a straightening rule. So the Schur polynomials that fit into the rectangle span the quotient.
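Before turning to linear independence, here is a toy check of the Gröbner-basis claim above in the smallest interesting undeformed case, Gr(2,4), so k = 2, n = 4 and all a_i = 0. This is a minimal sketch with SymPy, not the speaker's code; the generators h_3, h_4 and the expected leading terms x_i^(n-k+i) are the ones named in the discussion above.

```python
from math import prod
from itertools import combinations_with_replacement
from sympy import symbols, groebner

x1, x2 = symbols('x1 x2')

def h(d, xs):
    """Complete homogeneous symmetric polynomial h_d in the variables xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, d))

# Undeformed ideal for Gr(2,4): generated by h_3 and h_4 in the full polynomial ring
G = groebner([h(3, [x1, x2]), h(4, [x1, x2])], x1, x2, order='grlex')
print(list(G))
# Expected: a basis with leading monomials x1**3 and x2**4, i.e. x_i**(n-k+i);
# indeed h_4(x1,x2) - x1*h_3(x1,x2) = x2**4, and the two leading terms are coprime.
```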
Now we need to prove that this family is linearly independent, and as usual this is the harder part; it's like PBW. However, we know already from Artin that this family spans the polynomials as a module over the symmetric polynomials, and combining these two facts gives that these products span the quotient of all polynomials. But we already have a basis for this quotient: that is what we did on the previous slide with the Gröbner basis. So what if you have a k-module with a basis and a spanning family of the same finite size? It is well known, or an easy exercise, that the spanning family must then also be a basis. So we conclude that this family is a basis of the big quotient, and therefore its subfamily is a basis of the small quotient. It's a bit of a ping-pong argument. If you are into symmetric functions, it feels natural to just work with symmetric functions or symmetric polynomials all the time, but I couldn't prove the independence within the symmetric polynomials. If you actually try to find a Gröbner basis of I inside S, I don't think there is a good answer; I think the Gröbner bases become very ugly very soon. So you climb up into the ring of all polynomials, look at the quotient there, and then argue using this fact that a spanning set of the same size as a basis must also be a basis. I think this is a trick that's a good takeaway if you don't know it; I feel it can be useful in many other situations: if you don't find a nice Gröbner basis, maybe it helps to go into the bigger ring.
The rest of the proofs, as I said, is a lot of computing with symmetric functions. I'm just going to briefly mention this identity, commonly ascribed to Joseph Bernstein, although it appeared first in the book by Zelevinsky. It says, basically, that if you have a Schur function and you want to insert a new entry at the front (preferably, of course, bigger than the first entry, so you still have a partition afterwards), you can achieve this by a certain operator which is built out of skewing by e's and multiplying by h's. This operator is now known as the Bernstein operator, or I think Garsia has called it the Schur row adder, for an obvious reason. So this is in a way the driving force, but there are also a lot of computations with Pieri rules and Jacobi-Trudi matrices.
Okay, so much for this, and I hope to have two papers out about it soon enough, not just a big preprint with a lot of messy proofs. Thanks are due to Sasha Postnikov for the whole project, which I started working on back at MIT, to Gérard Duchamp for inviting me again, and to a lot of people who have contributed ideas. Thank you all.
Thank you, dear Darij; now time for questions, remarks or comments. I do have a basic question. Maybe I missed that point at the beginning, but Darij, please, can you explain to us again the connection between the Littlewood-Richardson coefficients and the Gromov-Witten invariants? That's at the very beginning, I believe. Can you explain the link again? Because if this is happening for Littlewood-Richardson, you may have a similar rule for another invariant, for instance the Kronecker coefficients, by changing the law. So I think this connection may be fruitful for doing other things; this is what I want to know. So let's see; maybe, if you can, go back up to that.
So this one is the coefficient of a Schur polynomial in the product of two Schur polynomials, except that you are working in the quotient ring. But if you are working in the classical cohomology, then the a-parameters are zero, and the quotient ring is basically just a little piece of the ring of symmetric functions: in that case there are no error terms, nothing is contaminated by the a's. So that coefficient, if you choose alpha, beta and gamma appropriately, can be any Littlewood-Richardson coefficient; any such coefficient arises this way.
But this only happens for the deformed Schur polynomials, the ones with the bars, not the classical ones, right?
If you take the a_i to be zero, then the bars don't really change that much; basically, in sufficiently low degrees everything behaves as it did classically. Okay, thank you.
Are there other questions? Yes: do you see another way of deforming so that the duality between k and n minus k is respected?
This is tricky. In theory I could imagine a duality if you set things up properly somehow... no, okay, this is probably nonsense. The thing is that for k I have k parameters, and for n minus k I have n minus k parameters, so it's not quite clear. I probably should have mentioned that this all looks somewhat similar to what are called factorization algebras or splitting algebras in enumerative geometry, or even in constructive Galois theory: the algebras where you pick some polynomial and declare it to be a product of two polynomials, and the indeterminate coefficients of these two polynomials become your new indeterminates. But I don't know whether this is in any way isomorphic to that, or a particular case, or a generalization; they just feel alike.
Maybe a comment, because, sorry, I missed half of your talk, but as I wrote you in the chat: this is in fact a family of infinitely many different deformations of an algebra with a basis, and these ought to be Frobenius algebras. Big quantum cohomology gives such deformations, and the k parameters are natural: they are classes of the tautological k-dimensional bundle, and n minus k would correspond to classes of the (n-k)-dimensional tautological bundle on the Grassmannian. So there are definitely natural k-parameter families of deformations of quantum cohomology coming from big quantum cohomology.
Thank you. Do you think they could give the same algebra as this generalization of mine?
Why not. I would be surprised to have two different deformations of the same algebra with the same number of parameters. Of course, it's a question of how to interpret it combinatorially.
Darij, I have a quick question. The deformations that you consider all come by tweaking the Jacobi-Trudi identity, basically perturbing the h's. Yes. Is there a way of doing deformations where you exploit the Giambelli identity instead? For instance, instead of perturbing the h's, you perturb hook shapes.
Good question.
Or, put differently: can your constructions be realized through a perturbation of Giambelli?
Good question, and I'm thinking about this more in the case of Schur functions, the infinite case, because of all the... Actually, I don't even know this: do you know what happens in classical cohomology if you don't deform anything, if you just quotient by the Giambelli expressions, by hooks, instead of by the h's?
Oh, I think I remember something, by authors whose names I don't recall exactly, but I think it's not quite that; is it really hooks? Something about Schur-finiteness or something similar. I need to read that one again.
Maybe a small question. I probably missed some part of your talk, but when you consider this deformation, you work with a commutative Gröbner basis. Yes. In spite of this deformation, do you see any place for noncommutative Gröbner bases there? Maybe you could work directly in a different setting and construct some noncommutative Gröbner basis to get the basis you need to work with. Of course, it's probably difficult to answer right away, but it's something that could perhaps give some simplification, something else to try.
I don't know. I'm curious how noncommutativity would simplify things here, but I have to say right away that I don't know what noncommutative Schur functions would be. I mean, there are several versions of them, but none of them feels particularly canonical.
Yes, of course, it would have to be chosen appropriately. So it's difficult to say; it is just a suggestion to try. Because if you work straight away in the quotient, you can still do the same as when you work with commutative monomials, and maybe it could be easier; not necessarily, but it could.
There is one version with plactic classes inside the free algebra.
Oh, okay.
Something like free ribbons. I don't remember the first authors, but I know that Schützenberger and Lascoux worked a lot on this.
Other comments or remarks? So we can thank you, Darij.
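For readers who want to experiment with the objects discussed in this talk, here is a minimal sketch in plain Python with SymPy (not the speaker's Sage code). It computes Schur polynomials in three variables via the bialternant formula and checks the smallest Littlewood-Richardson expansion; the quotient ring and the a-deformation themselves are not implemented here.

```python
from sympy import symbols, Matrix, cancel, expand

x = symbols('x1 x2 x3')
k = len(x)

def schur(lam):
    """Schur polynomial s_lambda(x1,...,xk) via the bialternant formula."""
    lam = list(lam) + [0] * (k - len(lam))
    num = Matrix(k, k, lambda i, j: x[i] ** (lam[j] + k - 1 - j))
    den = Matrix(k, k, lambda i, j: x[i] ** (k - 1 - j))  # Vandermonde determinant
    return cancel(num.det() / den.det())

# Smallest Littlewood-Richardson expansion: s_(1) * s_(1) = s_(2) + s_(1,1)
lhs = schur([1]) * schur([1])
rhs = schur([2]) + schur([1, 1])
assert expand(lhs - rhs) == 0
```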
One of the many connections between Grassmannians and combinatorics is cohomological: The cohomology ring of a Grassmannian Gr(k,n) is a quotient of the ring S of symmetric polynomials in k variables. More precisely, it is the quotient of S by the ideal generated by the k consecutive complete homogeneous symmetric polynomials hn−k+1, hn−k+2, …, hn. We deform this quotient, by replacing the ideal by the ideal generated by hn−k+1−a1, hn−k+2−a2, …, hn−ak for some k fixed elements a1, a2, …, ak of the base ring. This generalizes both the classical and the quantum cohomology rings of Gr(k,n). We find three bases for the new quotient, as well as an S3-symmetry of its structure constants, a “rim hook rule” for straightening arbitrary Schur polynomials, and a fairly complicated Pieri rule. We conjecture that the structure constants are nonnegative in an appropriate sense (treating the ai as signed indeterminates), which suggests a geometric or combinatorial meaning for the quotient.
10.5446/51295 (DOI)
Okay, so I will talk about what has happened with the tropical version of the Jacobian conjecture. First I will briefly recall the classical Jacobian conjecture; I will not dwell on it much, because there are very good overviews of it, including its very rich and dramatic history. Then I will say a few words about the tropical setting and introduce notation, partly repeating what Stefan just did, and then I will present results on the tropical version of the Jacobian conjecture.
So, first of all, let me briefly recall the classical Jacobian conjecture. We have a polynomial map over a field of characteristic zero, and we consider the Jacobian of this polynomial map, that is, the matrix of partial derivatives. The classical Jacobian conjecture, due to Keller, is a very old statement from 1939: if the Jacobian equals one, then f is invertible and its inverse is also a polynomial map. There have been a lot of results around it; I will mention just a few of them which have a flavour in common with the tropical material I will talk about. First: over an algebraically closed field, if a polynomial map f is injective, then it is bijective as well. The proof, due to Ax, is model-theoretic and takes only a few lines: it is a reduction to finite fields using the Nullstellensatz, and if you formulate the statement for finite fields it is trivial, because an injective map of a finite set into itself is a bijection. So it's a very easy, nice result. One can also view the Jacobian conjecture as follows: under its hypothesis the map is a local isomorphism by the implicit function theorem, and the conjecture asserts that it is a global isomorphism. The second, and last, result I will mention in this area is in fact a counterexample, which shows that the whole issue of the Jacobian conjecture is very delicate. Take the real field as the ground field; the counterexample shows that merely assuming the Jacobian is positive is not enough: there is a polynomial map with everywhere positive Jacobian which is nevertheless not an isomorphism. So it is really necessary to require the Jacobian to equal a constant, and one can take this constant to be one. Okay, that's all I wanted to say about the classical conjecture.
Now let me briefly introduce the notation of the tropical world. The basic object of tropical algebra is the tropical semiring, endowed with two operations, denoted ⊕ and ⊗. The main source of tropical semirings is the following: take an ordered abelian semigroup; it becomes a semiring with the inherited operations, the sum ⊕ being the minimum and the product ⊗ being the semigroup operation. If we start not just with a semigroup but with an ordered abelian group, then we are talking about a tropical semifield, and we can also introduce division in the tropical semifield; it is just subtraction in the group. The typical examples are the nonnegative integers, or the nonnegative integers with infinity adjoined; these are commutative tropical semirings, infinity plays the role of the neutral element for ⊕, and in turn zero plays the role of the unity for ⊗. These were the examples of semirings.
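As a quick illustration of these conventions (min for ⊕, ordinary addition for ⊗, +∞ as the neutral element and 0 as the unit), here is a minimal Python sketch; it is only meant to fix the conventions used in the talk, nothing more.

```python
import math

INF = math.inf  # neutral element for tropical addition

def tadd(a, b):
    """Tropical addition: the minimum."""
    return min(a, b)

def tmul(a, b):
    """Tropical multiplication: ordinary addition."""
    return a + b

# INF is neutral for tadd, 0 is the tropical unit for tmul
assert tadd(3, INF) == 3 and tmul(3, 0) == 3
```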
And if we consider not just the nonnegative integers but all the integers, or the integers with infinity adjoined, these are semifields, because they allow subtraction, which is the tropical division. An example of a noncommutative tropical semiring is the semiring of n×n matrices over any of these semifields, say over Z with infinity adjoined, with the usual formula for matrix multiplication read in the tropical operations.
Another important object is the tropical polynomial. To define it, start with a tropical monomial: as in the classical case, it is a coefficient times, tropically, a product of powers of the variables, and its tropical degree is, again as in the classical situation, the sum of the powers. Classically, we can look at such a monomial as a linear function. So much for monomials; again, similarly to the classical case, a tropical polynomial is the tropical sum of monomials, which can be viewed classically as the minimum of linear functions, a convex piecewise-linear function. Up to here everything is very much parallel to the classical case. Where tropical geometry differs, and what is usually the psychological difficulty in understanding it, is the concept of the tropical zero. The definition, and it has a lot of justifications, is that x is a tropical zero of the polynomial f if the minimum is attained for at least two different values of j, that is, by at least two different tropical monomials. In other words, if we view the tropical polynomial as a piecewise-linear function, the tropical zeros are the points at which the function is not smooth.
We can continue these definitions and extend them to tropical rational functions, that is, tropical fractions. Since division is subtraction in the classical sense, such a function is the difference of two tropical polynomials: this minimum of the p's we can treat as the numerator, this minimum of the q's as the denominator, and we take their difference. Algebraically, it is a piecewise-linear function: one can partition the whole space into a finite number of n-dimensional polyhedra on each of which this function is linear, the p's and q's being linear functions with rational coefficients. Conversely, any piecewise-linear function can be represented in this form, as a difference of two tropical polynomials. Moreover, it is known that any continuous function is a difference of two convex functions; for piecewise-linear functions this representation is a particular case of that theorem. In general, and our considerations remain valid in this setting, one can allow the coefficients to be real, not just rational or integer as is customary in the tropical world: just consider arbitrary piecewise-linear functions, and the results will be true. So the question arises: how do we replace the Jacobian for non-smooth tropical rational maps?
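Before answering that, here is a small Python sketch of the tropical-zero test just described: a tropical polynomial is evaluated as a minimum of affine functions, and a point is a tropical zero when that minimum is attained by at least two monomials. The example polynomial is made up for illustration, not taken from the slides.

```python
def trop_eval(monomials, x):
    """Evaluate a tropical (min-plus) polynomial at the point x.

    monomials: list of (coefficient, exponent_vector) pairs; each monomial is
    the affine function  coefficient + <exponents, x>.
    Returns (value, is_tropical_zero), where the zero test asks whether the
    minimum is attained by at least two monomials.
    """
    values = [c + sum(e_i * x_i for e_i, x_i in zip(e, x)) for c, e in monomials]
    m = min(values)
    return m, sum(1 for v in values if v == m) >= 2

# f(x) = min(2x, 1 + x, 3), a univariate tropical polynomial
f = [(0, (2,)), (1, (1,)), (3, (0,))]
print(trop_eval(f, (1,)))   # (2, True): minimum attained by the first two monomials
print(trop_eval(f, (0,)))   # (0, False): only the monomial 2x attains the minimum
```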
So our main object is now the tropical rational map: each coordinate is a tropical rational function. How do we replace the Jacobian of the classical Jacobian conjecture? Actually, unlike the classical situation, ours is not just a little bit but essentially easier, because we only need to prove that the map is invertible: if the inverse exists, it is automatically also a tropical map, a piecewise-linear function representable in the same form. So that is what we need to test; one can view the Jacobian conjecture as a criterion for a map to be an isomorphism. There will not be a unique version of the tropical Jacobian conjecture; there will be a weak version and a strong one, and we start with the weak one. First we need the following definition. Consider a tropical map F and a point P, and all the n-dimensional polyhedra containing P on which F is linear; we know the map is piecewise linear, so we take the pieces around P, assuming, say, that P lies on the boundary of K polyhedra on which the map is linear. Denote the corresponding linear maps, which are simultaneously Jacobian matrices, by A_1, ..., A_K, take their convex hull, and denote it by D_P(F). The first proposition, which is what I call the weak version of the tropical Jacobian conjecture, is: if for each point P the set D_P(F) does not contain a singular matrix, then F is an isomorphism. So this D_P replaces the role of the Jacobian: the assumption is that it contains no singular matrix, and the conclusion is that F is an isomorphism. Actually, I can give the proof, because it goes in a few lines. It relies essentially on Clarke's theorem, which holds for any Lipschitz map: if D_P for a given P does not contain a singular matrix, then F is a local homeomorphism at P; that is the local statement, and it is true not only for piecewise-linear but also for Lipschitz maps. We then use the easy observation that a tropical map is proper, so the preimage of every compact set is again compact, and a proper local homeomorphism is a global homeomorphism. That is the whole proof.
Unfortunately, while we have a sufficient condition for a map to be an isomorphism, it is not necessary, and I will give a counterexample, which is also quite instructive. Consider a tropical map of the plane which is an isomorphism, namely a composition of a lower-triangular and an upper-triangular isomorphism. If we consider their composition, it is piecewise linear with four pieces, on four sectors around the origin.
As in the picture: if we consider D at the origin, it is the convex hull of the following four Jacobian matrices, read off from the formulas for the lower- and upper-triangular isomorphisms. If we take the sum of the second and the third matrices, we see that it is singular when the following condition holds: either beta equals alpha or b equals a; sorry, when the product equals four. For example, one can take alpha and a equal to zero and beta equal to two. So this is the counterexample which shows that the sufficient condition is not necessary.
Still, it would be nice to have a necessary and sufficient condition, and we start with the following remark. If a tropical map is an isomorphism, then all the Jacobians of its linear pieces have the same sign: either all of them are positive or all are negative. This is due to orientation, and to the fact that the degree of the map equals one. The question arises when this condition, the constancy of the signs of the Jacobians, is sufficient for being an isomorphism. It is true in the plane when we consider a tropical polynomial map, so that both f1 and f2 are tropical polynomials, that is, convex: then indeed, if all the Jacobians are, say, positive, F is an isomorphism. But beyond these conditions, the plane and tropical polynomials, this sufficiency unfortunately fails, and the following example shows it. Consider now not a tropical polynomial map but a tropical rational map: we write it using the modulus function, and clearly the modulus function is a tropical rational function, you can easily write it using the tropical subtraction. One can easily verify that this map has positive Jacobians on all the pieces where it is linear; but because the map is centrally symmetric, it is not an isomorphism. One can slightly modify this example to get a tropical polynomial map, now in three-dimensional space, with all Jacobians positive, which again is not an isomorphism. So we see that positivity of all the Jacobians is a necessary condition, but it is not sufficient.
One can, however, formulate a necessary and sufficient condition. It looks less natural, but on the other hand it is good to have an algorithm to verify whether a tropical map is an isomorphism. Let me first recall a definition which makes sense for any map: a point in the target is regular if at every preimage of this point the Jacobian is nonzero; then the set of regular values is dense. And then we can formulate simultaneously a necessary and sufficient condition for a tropical map to be an isomorphism: namely, all the Jacobians have the same sign, the condition from the previous slide, and, in addition, for at least one regular value the preimage is a single point.
Okay, the necessity is trivial, but the statement of the theorem is that this condition is also sufficient: if at least one such regular point with a one-point preimage exists, then the whole tropical map is an isomorphism. Relying on this theorem, we can design an algorithm to verify whether a tropical map is an isomorphism. Namely, the algorithm first yields a partition into polyhedra P_i such that F is linear on each P_i; this can be done by means of linear programming. Then take the boundaries of these polyhedra, which are polyhedra of one less dimension, take their images, and take the union; any point of the target outside this union is regular. So we have the criterion from the previous theorem: test whether the preimage of this point is a single point, and if it is, then we know from the theorem that we are dealing with an isomorphism. All of this can be performed effectively, so we have an algorithm which tests whether a tropical map is in fact an isomorphism.
Another issue related to the Jacobian conjecture is the tameness of automorphisms. It is a classical problem, the Dixmier problem, and one can ask what we have in the tropical world. Similarly to the classical setting, although we cannot simply use the classical statement and have to prove it independently, in the plane indeed any automorphism is tame. We define two classes of automorphisms. The first class is the triangular ones; actually, we had an example before: triangular means that we change only one coordinate at a time. And we also consider an analog of the linear automorphisms: tropical rational automorphisms which are linear, with determinant equal to plus or minus one. The proposition states that the group of tropical rational (homogeneous) automorphisms of the plane is generated by triangular and linear automorphisms, so it is tame. In the classical world we know that in the three-dimensional case the group of automorphisms is not tame, that is, not generated by triangular and linear automorphisms. Here this is an open question, and my conjecture is that this is indeed the case: the group is also not tame. Okay, thank you. Thank you very much for your attention.
Thank you. I have a question with respect to the last proposition. Is it related to the fact that a convex function can be approximated by sums of such triangular functions?
No, I don't think so, because this is not an approximation; it is an exact statement: you have an automorphism and you need to represent it as a product of triangular and linear automorphisms.
Yes, but in some sense you have the same ingredients, triangular and linear. There one approximates by a sum, and here you have an exact statement. And in the multivariate case you have to replace triangles by simplices.
Yes, well, you can approximate, but here it is an exact equality of automorphisms. It's different.
So if you have a piecewise-linear function... so, if you have just a convex function... Yes.
Then for a convex function you only have an approximation, but if the function is piecewise linear, the representation as a sum of such triangular pieces is exact. There is a lemma about this; it is called after Hardy and Littlewood and someone else, which is funny, such great people with such a funny lemma. So for piecewise-linear functions it is exact: you obtain the piecewise-linear function exactly as such a sum.
I don't know. You mean as a sum... I'm not sure. Well, I agree that it's related, but I'm not sure it gives the exact result. It's a good point, I will think about it, but I suspect it is just the opposite; the statement, I mean, is that it is not tame.
Okay, but it's a good point, I'll think about the connection, because I just see the same ingredients.
Yes, the ingredients are the same, but it's a different statement.
Yes, and with simplices one was supposed to estimate, but I think there is a problem: not only such triangles are needed, you maybe have to allow some rotations; with the exact triangles it's not true in general. I have seen the exact statement somewhere, but I would have to find the reference.
Oh yes, okay, send me the link then; that would be interesting. Thank you for the nice talk.
There are no more questions, so we stop here; we have a break until 11:50. Thank you very much. Thank you for the interaction.
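To make the "convex hull of Jacobians" condition from the talk concrete, here is a small SymPy sketch for the simplest situation of two 2×2 vertices: it checks whether any convex combination of two Jacobian matrices is singular. The example map F(x, y) = (x, |x| + y) and its two Jacobians are my own illustration, not taken from the slides; the talk's criterion of course ranges over all points and over all vertices of D_P(F).

```python
from sympy import Matrix, symbols, solveset, Interval, S

t = symbols('t', real=True)

def segment_contains_singular(A, B):
    """Does det((1-t)*A + t*B) vanish for some t in [0, 1]?

    Only a two-vertex slice of the convex hull D_P(F); with more vertices one
    would have to sweep all convex combinations.
    """
    M = (1 - t) * Matrix(A) + t * Matrix(B)
    return solveset(M.det(), t, domain=Interval(0, 1)) != S.EmptySet

# F(x, y) = (x, |x| + y) is piecewise linear with two linear pieces; their Jacobians:
A = [[1, 0], [1, 1]]    # piece x >= 0
B = [[1, 0], [-1, 1]]   # piece x <= 0
print(segment_contains_singular(A, B))  # False: det is identically 1, so F is invertible
```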
We prove that a tropical rational map is an isomorphism if, for every point, the convex hull of the Jacobian matrices at smooth points in a neighborhood of that point does not contain singular matrices. We also show that a tropical polynomial map on the plane is an isomorphism if all the Jacobians have the same sign (positive or negative). In addition, we prove that a tropical rational map is an isomorphism if all the Jacobians have the same sign and the preimage of at least one regular point is a singleton.
10.5446/51296 (DOI)
I will speak about an application of the reflection equation algebra, in my opinion, to matrix models. The first question: what is a quantum analog of the enveloping algebra U(g) of a Lie algebra g? Everybody answers this question, of course: the Drinfeld-Jimbo quantum group U_q(g). However, for the particular cases of gl(N) and sl(N) there exists another quantum analog of the enveloping algebra. It is the so-called modified reflection equation algebra, or a quotient of it. So now I will explain what I mean by reflection equation algebra. Let R be a Hecke symmetry coming from the quantum group. Recall that by a Hecke symmetry we mean, first of all, a braiding, that is, an operator R acting from the space V⊗V to the same space and subject to the braid relation; that is well known. A Hecke symmetry is the particular case of a braiding subject to the additional relation (R - qI)(R + q⁻¹I) = 0, where q is generic and I is the identity operator. The Hecke symmetry coming from the quantum group is the following: if I choose a basis of the space V, and hence a basis of V⊗V, the operator R is presented by the matrix shown on the slide. You see that as q goes to one this Hecke symmetry tends to the usual flip, the matrix of the usual flip. However, if the dimension is greater than two, there are other Hecke symmetries tending to the usual flip, for instance the Cremmer-Gervais Hecke symmetry. It is difficult to present such a symmetry explicitly, but they exist.
Now, the modified reflection equation algebra is generated by the unit and the entries of a matrix M subject to the system R M1 R M1 - M1 R M1 R = R M1 - M1 R, where R is, for example, the matrix presented above, and M1, as usual, is M tensored with the identity. It is a system because these are matrix relations, so many relations. If we consider the same system without the right-hand side, with zero there, it is called the non-modified reflection equation. I want to mention a very important property: if in these relations we replace R by the usual flip P, we get, respectively, the enveloping algebra U(gl(N)), in a matrix presentation of its relations, and the commutative symmetric algebra Sym(gl(N)). And if in the above relations we replace the Hecke symmetry R by the super-flip P_{m|n}, we get, respectively, the enveloping algebra of the Lie superalgebra gl(m|n) and the super-commutative symmetric algebra.
Okay, now I want to consider an example related to the algebra U_q(gl(2)). The corresponding R is the Hecke symmetry presented above, and M1 is the 4×4 matrix shown on the slide. If you put all these matrices into the relations defining the modified reflection equation algebra, you obtain an explicit system; it is just the example corresponding to this algebra. As q goes to one, we recover relations of the enveloping algebra such as AB - BA = B, and so on. I also want to remind you that there exists another quantum matrix algebra, quantum matrix meaning that the relations of the algebra are presented in matrix form: the RTT algebra. I do not want to discuss this algebra in detail; I only want to say that it is not a deformation of the enveloping algebra, it is only a deformation of the symmetric algebra Sym(gl(2)).
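As a concrete illustration, here is a small SymPy check of the two defining properties of a Hecke symmetry (the braid relation and the quadratic Hecke condition) for the standard U_q(gl_2) braiding. This is the textbook matrix, used here on the assumption that it agrees with the one on the slide, and the basis ordering is an assumption as well.

```python
from sympy import Matrix, symbols, eye, zeros, simplify

q = symbols('q')

# Standard Hecke symmetry (braiding) for U_q(gl_2) in the basis
# e1⊗e1, e1⊗e2, e2⊗e1, e2⊗e2.
R = Matrix([[q, 0, 0,         0],
            [0, 0, 1,         0],
            [0, 1, q - 1 / q, 0],
            [0, 0, 0,         q]])

def kron(A, B):
    """Kronecker (tensor) product of two matrices."""
    m, n = A.shape
    p, r = B.shape
    return Matrix(m * p, n * r, lambda i, j: A[i // p, j // r] * B[i % p, j % r])

I2 = eye(2)
R12, R23 = kron(R, I2), kron(I2, R)

# Braid relation R12 R23 R12 = R23 R12 R23
assert (R12 * R23 * R12 - R23 * R12 * R23).applyfunc(simplify) == zeros(8, 8)
# Hecke condition (R - q)(R + 1/q) = 0
assert ((R - q * eye(4)) * (R + eye(4) / q)).applyfunc(simplify) == zeros(4, 4)
# At q = 1 the matrix degenerates to the usual flip P
assert R.subs(q, 1) == Matrix([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
```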
But if we additionally impose the relation that the quantum determinant (I have no time to discuss what that is) equals one, we get a deformation of the function algebra of the group SL(N). So if we want to deform the function algebra of the group, it is possible to use both the RTT algebra and the reflection equation algebra; but if we want to deform the enveloping algebra, it is only possible to use the reflection equation algebra, in its modified form. And since we are interested in analogs of the enveloping algebra, we deal with the modified reflection equation algebra and not with the RTT algebra.
Okay, I would like to present the modified reflection equation algebra in a form similar to the enveloping algebra. It is possible to write its relations as follows: one generator tensor another one, minus some operator applied to these two elements, equals some linear combination of generators. If I denote this linear combination, as usual, as a bracket of the two elements, then we have an operator acting on the square of the linear space spanned by the m_i^j, together with a bracket. So we arrive at the following object: a space equipped with an operator and a bracket, an analog of the usual Lie-algebra structure. Now, what is the corresponding version of the Jacobi identity? Here it is more interesting. If we define the action of one element on another by means of the bracket, as on the previous page, we obtain a representation; the resulting identity looks like the usual Jacobi identity, but instead of the usual flip, or super-transposition, we have R. This data is called a generalized (braided) Lie algebra and is denoted gl(V): the space V equipped with a Hecke symmetry R. These were introduced by myself for involutive R, and by myself with Pyatov and Saponov for Hecke R; the Hecke case is, of course, the more interesting one. If we take a Birman-Murakami-Wenzl symmetry, coming from the quantum groups of the other classical series, it is also possible to introduce this object, no problem, but it is not a good deformation of the classical Lie algebra. For Hecke symmetries it is, and this is the reason why I am interested in the Hecke situation rather than in Birman-Murakami-Wenzl.
Okay, now I want to introduce certain operators introduced by Lyubashenko, which I present now: the operators B and C. They can be constructed by means of R, but it is difficult to explain the construction here; in any case, we finally have the two operators B and C. Why am I interested in these operators? Because if I define the action of the generators m_i^j on an element of the space V, as always, by means of the operator B, we get a representation. And the operator C is useful because, for any matrix M, the trace of the product C·M gives an analog of the trace: the trace coordinated with our initial Hecke or involutive symmetry R. This operator turns into the identity and the trace becomes the usual trace when R goes to the usual flip, and similarly it becomes the supertrace when R goes to the super-flip. So finally, what I have tried to explain here is that this is a structure on the endomorphisms of the space V.
So what is the endomorphism algebra of the space V in this category? All spaces here are finite-dimensional. One defines the product as on the slide, and finally, by using the operators B and C, it is possible to introduce a coupling, a pairing, between elements of the space V and elements of the dual space. If I consider the usual dual basis in the dual space, the pairing in one order is defined as usual; but if we want to apply the pairing to the elements in the other order, it is necessary to use B. What is the difference with the classical situation? In the classical situation, if we have delta here, we have delta there as well; in the general situation this is not so.
Okay, now I would like to mention a property that is very useful for the matrix-model part: the modified reflection equation algebra is, in principle, isomorphic to the non-modified reflection equation algebra, via the following change of the matrix. It is a very useful property, but it is not valid as q goes to 1: you know that the symmetric algebra of gl(N) is not isomorphic to the enveloping algebra of gl(N). So for involutive R it fails, but for Hecke R it holds.
Next I want to introduce, in my algebra generated by the matrix M, analogs of the elementary symmetric polynomials and of the full (complete) symmetric polynomials. The definition is just as follows: it is necessary to put here the analog of the symmetrizer or of the skew-symmetrizer, together with a certain product of copies of the matrix M with overlined indices. I explain the index notation on the slide, but in the end it does not matter: one can introduce analogs of the elementary and complete symmetric polynomials. It is also possible to introduce analogs of the Schur polynomials, and for them an analog of the Littlewood-Richardson rule is valid as well. Quantum power sums are defined as usual; the difference with the classical situation is only that the usual trace is replaced by the analog of the trace, the R-trace. Okay, now we have analogs of the Newton and Wronski relations. The difference is that q enters these relations in the Hecke situation; in the involutive situation q disappears, it becomes one.
Okay, now for the more interesting part, of course: the Cayley-Hamilton identity. In order to present it, I want to consider special partitions: a partition lambda corresponding to the diagram consisting of a rectangle together with one extra column and one extra row. I denote such a partition, or its diagram, in the following way, and if an index is zero I omit it; similarly here. Now we say that an involutive or Hecke symmetry R is of type (m, n) if this rectangle, enlarged by one in each direction, is the smallest rectangle such that the Schur polynomial labelled by the corresponding partition is equal to zero. It is not evident that there exists a smallest such rectangle, but it is possible to show it. The Cayley-Hamilton identity in factorized form, if R is of type (m, n), then has the form on the slide; so here I present just the Cayley-Hamilton identity. And I would like to say that a property which is very important for us is that the coefficients here are central: they belong to the center of the reflection equation algebra. I want to say that this relation is written in the non-modified reflection equation algebra, but by using the isomorphism mentioned above, I can obtain a similar relation in the modified reflection equation algebra.
Okay, now let us introduce the μ_i and ν_j: they are the eigenvalues of the first factor and of the second factor, respectively, which means that the Cayley-Hamilton polynomial can be presented in the following form: the leading coefficient, then the factorization into the first product over the μ's and the second product over the ν's. Here I would like to mention the result by Khudaverdian and Voronov: they have shown that for super-matrices the ratio of the following two Schur polynomials equals the Berezinian. For us this is the more reasonable definition of the Berezinian, because it is a ratio of two central elements. In our setting, which is much more general, we have a similar definition: by definition, precisely this ratio is called the Berezinian, and it is presented as the product of the μ's over the product of the ν's, with some coefficient. It is also possible to define an analog of the determinant; you see, the Berezinian and the determinant are two different elements of our algebra: the determinant is the ratio of the Schur polynomials corresponding to this other pair of partitions, and here we likewise have a ratio of products of the μ's and ν's.
Now it is interesting to express any Schur polynomial through the μ's and ν's. For instance, if lambda is a nontrivial rectangle, we have the following expression: some noncommutative combinatorics. These Schur polynomials are supersymmetric. What does that mean? By definition, if we have two sets x_i and y_j of independent commuting variables, a polynomial p is said to be supersymmetric if it is symmetric with respect to the x's, symmetric with respect to the y's, and, moreover, if we put x_1 = y_1 = t, the resulting polynomial does not depend on t. So I present a parameterization of our symmetric polynomials. Now I ask the same question for the power sums. The parameterization is as follows; it is the usual classical formula, valid if we put 1 here and here and minus 1 here, and the coefficients appearing in the general formula are called quantum dimensions. In general, for any μ and ν, the formula is as on the slide. I would like to say that these polynomials are close to the Hall-Littlewood ones; they look alike. So, I repeat, if we put q equal to 1, we retrieve the classical formula, which was for super-matrices. Now, if we pass, by using the isomorphism between the modified and the non-modified reflection equation algebras, to the general situation, to the modified reflection equation algebra, we get the following formula. It is similar, but the formula for the quantum dimensions is a little bit different: you see q minus 1 here instead of q, that's all; only the quantum dimensions change. As we put q equal to 1, we get a parameterization for the generators of the enveloping algebra U(gl(m|n)), and I want to say once more that the relations in the enveloping algebra of gl(m|n) are as on the slide.
Okay, now I want to say that, finally, the modified reflection equation algebra corresponding to a Hecke symmetry coming from the quantum group is in principle a two-parameter deformation of this commutative algebra: one parameter is just the passage to the enveloping algebra, corresponding, if you wish, to the linear bracket, and the other parameter comes from passing to the reflection equation algebra. So we have a two-parameter deformation.
So we have the Poisson counterpart, which is generated by two brackets, a linear bracket and a quadratic bracket; the Poisson counterpart is thus a pencil. An analog of the second bracket is sometimes called the Semenov-Tian-Shansky bracket, but for me the Semenov-Tian-Shansky bracket is rather a bracket defined on the group, whereas we are dealing with the algebra. Another interesting remark: there is a very interesting result by Shoikhet. Shoikhet showed that if we have a unimodular Poisson bracket, unimodular meaning that there is a measure compatible with the bracket, then it can be quantized, for example by Kontsevich's method, in such a way that the resulting quantum algebra carries the usual trace. So quantization with a trace is possible if we have a unimodular bracket. But for the bracket related to the reflection equation algebra it is not possible to construct the corresponding measure; such a measure does not exist. You see, I presented a trace on the quantum algebra, but this trace is deformed; it is not the classical, usual one. So I want to compare my construction with Shoikhet's result: in the unimodular situation the usual trace is possible; in my situation it is not, but a deformed trace is possible after quantization.
I pass to matrix models. We consider the simplest one-matrix model, defined in the usual way by a potential V, which is a polynomial. If the λ's are the eigenvalues of the matrix H (H is, of course, a Hermitian matrix), and we integrate over the set of matrices, we can present the partition function as an integral over the eigenvalues as follows. The result is that, in this situation, the measure dH reduces to the Vandermonde determinant squared; the exponent 2 is specific to the Hermitian situation, and for the other classical cases we have 1 or 4 instead. I consider the Hermitian model.
Okay, so what is my idea? I would like to quantize this, that is, to define an analog of the partition function, if possible. It is very tempting to replace the usual trace by the R-trace. Now, instead of H, I write L; assume L to be the generating matrix of the modified reflection equation algebra corresponding to an involutive or Hecke symmetry. All the classical ingredients should be replaced by the analogs introduced above, parameterized via the μ's and ν's presented above. But what is the analog of the measure? That is not evident at all. So consider the following matrix; here you see that all the traces are R-traces. If I consider the classical case, classical meaning that R is the usual flip, m equals N and n is zero, then the usual determinant of this matrix is just the Vandermonde determinant squared, which is specific to the Hermitian situation. Finally, omitting the potential V for the moment, we take the usual determinant of this matrix as the analog of the Vandermonde determinant in general, and we arrive at the following formula: we have expressed our generalized matrix model, or braided matrix model, through the eigenvalues μ and ν. You see that, finally, instead of the classical Vandermonde determinant, we have the following expression.
Now I want to discuss one more question, the following one. We have considered the finite-dimensional situation and the consequences of this consideration for matrix models. But how does one pass to the infinite-dimensional situation? I will present two ways in which we can construct some analog in the infinite-dimensional situation.
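For reference, here is the standard eigenvalue form of the Hermitian one-matrix model that the construction above imitates; this is the textbook formula, not copied from the slides.

```latex
Z=\int dH\,e^{-\operatorname{Tr}V(H)}
  \;\propto\;\int\prod_{i=1}^{N}d\lambda_i\;
  \Delta(\lambda)^{2}\,e^{-\sum_{i=1}^{N}V(\lambda_i)},
\qquad
\Delta(\lambda)=\prod_{1\le i<j\le N}(\lambda_j-\lambda_i),
```

where dH is the Lebesgue measure on N×N Hermitian matrices and Δ(λ) is the Vandermonde determinant; the exponent 2 is exactly what becomes 1 or 4 for the real symmetric and quaternionic ensembles mentioned in passing above.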
For the first way, we consider an analog of Yangians. I call a braiding depending on a parameter a current braiding; the basic example is the usual Yang current braiding. Applying the Baxterization procedure, we have the following proposition: if R is an involutive symmetry, then the braiding given by the formula on the slide is a current braiding, that is, it satisfies the braid relation with a parameter, the same relation as above but now with a spectral parameter. If R is a Hecke symmetry depending on q, then we have a similar statement, but now R(u) is given by the following, slightly different, formula. Using this, with my coauthor Pavel Saponov we introduced what we call the generalized Yangians of RTT type: you see that the defining relation is similar to the relation in the RTT algebra, the only difference being that the matrix T now depends on a parameter and the braiding depends on two parameters. And there are generalized Yangians of reflection equation type: you see, the difference is inside; this relation looks like the relation in the reflection equation algebra. It is also possible to construct Double Yangian-like algebras, inspired by the Reshetikhin-Semenov-Tian-Shansky approach.
But I want to discuss a little bit more another approach to constructing an infinite-dimensional structure, arising from affinization. We have the generalized Lie algebra, or rather its enveloping algebra, which is the modified reflection equation algebra (the notation here is not ideal). What is the affinization? We take the generating matrix, the initial matrix, and consider the series of copies of this matrix, indexed by an integer k and depending on a parameter u; the affinization is applied in the usual way. Then it is necessary to define the enveloping algebra generated by these matrices. The relations are more or less classical, classical in quotation marks of course: we have R-hat here instead of the flip, it is necessary to interchange the indices k and l appropriately, and here we have an analog of the central term. It is very natural; it is not possible to define the Schwinger term in another way: we take the product and then apply the analog of the trace, that's all. So, if you wish, we obtain an affine algebra, but this algebra is braided.
I discovered this algebra with my colleague some years ago, but recently we understood that this algebra is not quite what one wants; I will explain why. We had considered this algebra as a q-analog of the enveloping algebra of the affine gl(N). A question that is very interesting for me is whether it is possible to construct a quantum analog of the tau function in this way. I would like to mention a paper by Morozov and coauthors on the (q,t)-deformation of the Gaussian model; in that paper, from 2018, they say that a tau function has not been constructed. An attempt to construct such a quantum tau function had been undertaken before that, by Katchev and Taza, and the attempt failed; it is not so easy to do. And I thought that maybe our algebra would be good for constructing a quantum analog of the tau function. Why? Because if we consider (q,t)-models in the sense of Morozov and coauthors, the relations of the relevant algebra are more or less as follows: the minus sign means that we have fermions or bosons, depending on the case, the relations are more or less classical, and here we have something depending on q and t.
So q and t enter only as coefficients of the linear terms. If we want to construct a quantum analog of the tau function, it is necessary to have relations of the following form, with a q entering here; and such a q is absent in that approach. So now I consider our algebra. I repeat: when is it possible to construct a q-model by using our algebra, that is, to define a suitable affine algebra? Unfortunately, if the initial R is a Hecke symmetry, our affine algebra is not a deformation of the classical affine algebra; that is the problem, and consequently this affine algebra is not good for the purpose. However, if R is involutive, our affine algebra is good. This is not rigorously proven, but hopefully it is so. So if we want to construct a reasonable q-vertex algebra by affinization, it is necessary to deal with an involutive symmetry; that is the point. And such involutive symmetries exist; here is one on the slide. This symmetry is involutive, you can check it, it's very easy, and it is a braiding, so the braid relation is fulfilled. One can consider the corresponding modified reflection equation algebra and construct its affinization, and in this situation the algebra is a good deformation. I don't want to go into the details of the construction of the corresponding q-vertex algebra; I want only to say that the normal-ordered product of two fields A and B must be defined as follows. You see, it is more or less the classical way to define a normal-ordered product, but here, instead of the super-flip, it is necessary to put the R-flip, R-hat; and of course it is necessary to take the initial symmetry to be involutive, which is very important. If we do so, it is possible to develop the whole theory of the q-vertex algebra, and so on and so on; everything goes smoothly, and finally it is possible to construct the corresponding field theory, if you wish. But this is only the beginning of the story; it is work in progress, not yet constructed, but possible.
In this connection, and I will finish in two minutes, I want to mention the quantum analog of vertex algebras introduced by Etingof and Kazhdan. It is an object, an algebra, constructed via the Double Yangian, in the spirit of the Reshetikhin-Semenov-Tian-Shansky approach: we have Yangians, as discussed above, dual Yangians, which are constructed in a similar manner, and finally certain relations between the two Yangians, permutation relations, introduced in the spirit of the Reshetikhin-Semenov-Tian-Shansky paper (from around 1990, if I am not mistaken). But the axiomatic description of the quantum vertex algebras introduced by Etingof and Kazhdan is very, very hard: if you want to understand what you really have in this situation, you have something looking like a Double Yangian. Our q-analog of a vertex algebra is easier to understand and is, in some sense, more natural. So I stop here; I only want to summarize in two words. Our approach to matrix models based on the reflection equation algebra is just in progress, only the beginning of the story. For the other part, constructing infinite-dimensional braided algebras, it is possible to apply either the first method described above, the analogs of Yangians, or the second method, the affinization, and I think that the affinization is much more interesting. But it is only the beginning of the story.
Okay, thank you very much. Now it is time for questions, remarks, or even comments. So we thank the speaker virtually, and it is the end of the day.
Reflection Equation Algebra is one of the Quantum matrix algebras, associated with a given Hecke symmetry, i.e. a braiding of Hecke type. I plan to explain how to introduce analogs of Hermitian Matrix Models arising from these algebras. Some other applications of the Reflection Equation Algebras will be discussed.
10.5446/51299 (DOI)
Thank you very much, Min. I would like to start by thanking the organizers of this conference — yes, Gérard. I am very delighted to be here, and I think that keeping up the organization of such meetings is extremely important during these times, so I would very much like to praise this opportunity to keep doing research, so that life can continue. Anyway, today I would like to present this work to you. It is joint work with Sanjaye Ramgoolam — can I draw a little? why can't I see my... yes, Sanjaye — from Queen Mary University of London. You can find the main results in this arXiv paper that appeared recently, but we also wrote two companion papers, if you want the details and some of the theorems that I will present today. As I have a lot to say, I will just move to the next slide. Is it just the arrows that make it move? I cannot move the screen. Is it just the arrows? Yes, yes, we see the arrow moving. Okay, good. Now, can I erase everything here? So, the outline of my talk is the following. I will of course try to motivate what we are doing here, where it comes from, and what this question about the Kronecker coefficient is — that is the overview in the introduction. Then I will set up our tools: a particular algebra, which is also a Hilbert space, and this algebra arises from counting graphs — everything started from there, counting graphs using permutation methods; that was the starting point of this story. Then we will move through three steps to prove our main results: step one, step two and step three. The first step is to introduce operators, Hamiltonians, on this algebra, and we will show that these Hamiltonians have an integrality structure, due to the fact that the structure constants of this algebra are themselves integral — this is an important feature that I need to discuss. Oops — it just jumped ahead; let me go back. I want to erase this small mark, but it seems I cannot, so I will move on. Then, having linear operators that are Hamiltonians on my Hilbert space, we can discuss something like a quantum mechanical model — these two ingredients show you that there is a quantum mechanics on that Hilbert space, and that is already interesting. Dear Joseph, we see your drawings — can you clean them? Yes, this is what I am trying to do, but since this slide is produced from LaTeX, I cannot remove them; I am trying with the eraser here... oh yes, okay, it works now; previously it did not, but it is okay. Sorry for the disturbance. So, as I was saying, step two is to introduce these Hamiltonians, and step three is to discuss, first, how you extract the square of the Kronecker coefficient out of this algebra, and then the Kronecker coefficient itself. The point is to find a combinatorial interpretation for the Kronecker coefficient, which is a long-standing problem that I will describe, before drawing some conclusions. So let me erase this before it propagates along the slides, and move on to the introduction. It is taking a little lag before moving. It doesn't move.
I don't understand why it is not moving for the moment. Do you see the next slide, or are you still on the outline? Sorry — still the outline. So it doesn't move; I don't know if it is a fault of my connection. Can I escape from here and come back? Maybe it is the key you use — well, you just have to push one arrow to move from one slide to the other. So let me escape this; I think I am kind of stuck and I have no more control over my device. Okay, good, it is moving now. I think the annotations were giving me some trouble, and I won't be able to use them anyway, so let me push this aside and start. Okay, so the Kronecker coefficient — what is it? It counts the multiplicity of an irreducible representation in the tensor product of representations of the symmetric group. Here S_n is the group of permutations of n objects, and taking two Specht modules V_mu tensor V_nu, you can expand the product back into irreducibles, up to multiplicity, and that multiplicity is the Kronecker coefficient. Mu, nu and lambda are partitions of n, or Young diagrams — they are equivalent. So here you have this direct sum, and this C is your Kronecker coefficient: it is a multiplicity. In a more symmetric way, you can rewrite the same object as an averaged sum of a product of characters, where chi_mu is the character of the symmetric group for a given partition mu. It is a long-standing problem to characterize, to give a combinatorial rule for, these Kronecker coefficients. They are integers — we see them here as nothing but multiplicities, dimensions of multiplicity spaces — so they are non-negative integers. And since they are non-negative integers, are they counting some objects? This is a very old problem, stated by Murnaghan in 1938, and it has been treated over the years; for instance, Stanley stated it as his Problem 10, though there he uses symmetric functions to see the C appearing. As his comment says: often these coefficients — meaning this C in particular — will have a representation-theoretic interpretation, such as the multiplicity of an irreducible representation within some larger representation; this is exactly what I described before. Sometimes the only known proof of positivity is such an interpretation, and the problem is to find a combinatorial proof of that. So how do we characterize the Kronecker coefficients combinatorially? In fact, the Kronecker coefficient is an object which is very well studied in theoretical computer science, in computational complexity theory, as it is one of the main objects studied in the geometric complexity theory program. Geometric complexity theory was introduced by Mulmuley and Sohoni, and there one tries to answer one of the most famous problems in computer science: how do you separate the classes P and NP. Over the years many people have contributed to that, and understanding how to separate these classes is of course still open. But let us have a look at the following comment by Pak and Panova.
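Before the quote, let me record in formulas the two descriptions of the Kronecker coefficient just given; this is only the standard rendering of what was said, written here since the slide itself is not reproduced:

\[
  V_\mu \otimes V_\nu \;\cong\; \bigoplus_{\lambda \vdash n} \mathbb{C}^{\,C(\mu,\nu,\lambda)} \otimes V_\lambda ,
  \qquad
  C(\mu,\nu,\lambda) \;=\; \frac{1}{n!} \sum_{\sigma \in S_n} \chi_\mu(\sigma)\,\chi_\nu(\sigma)\,\chi_\lambda(\sigma) ,
\]

where \(\mu,\nu,\lambda\) are partitions of n and \(\chi_\mu\) is the character of the Specht module \(V_\mu\).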
More importantly, the Kronecker coefficients are famously #P-hard to compute and NP-hard to decide if they are non-zero, so one should not expect a closed formula; what makes matters worse, it is a long-standing problem — citing Stanley — to find a combinatorial interpretation of the Kronecker coefficients, so it is not even clear what we are counting. In this paper, Sanjaye and I provide a combinatorial answer to that problem: the Kronecker coefficient counts the dimension — that is, it counts vectors — of an integer sublattice of the lattice generated by bipartite ribbon graphs with some specific features that I will of course make precise, regarded as vectors spanning a given space. That is what the Kronecker coefficients are counting, and during my talk I would like to prove this statement. So let us review our main tool, which is this K(n) algebra. Actually, all of this started in theoretical physics and led us to combinatorics, because at the initial level we were interested in counting graphs — counting graphs using permutation methods; that was the starting point of this story. I have to say that various permutation methods for counting graphs are encountered, with great success, in theoretical physics. In particular, they have been applied to compute multi-matrix correlators in super Yang–Mills theory with U(N) gauge symmetry — a lot of people have worked on that — and also for exploring the half-BPS sector in the AdS/CFT correspondence. The recent paper by Kemp and Ramgoolam from 2020 will perhaps give you another view of what is going on there. So it is a well-established method for counting graphs, counting observables. Sanjaye and I were asking questions about the enumeration of graphs and of observables of unitary tensor models; this is where we started. It was 2014, and we had a very simple question about the enumeration of observables of unitary tensor models — you may be interested in tensors of rank three, say, and how to count all observables of this type. We found that these observables are in one-to-one correspondence with bipartite ribbon graphs; it is actually a counting of bipartite ribbon graphs. To make things clear: I restrict to ribbon graphs with n edges, and, as they are bipartite, I will consider at most n vertices of one type, say white, and at most n vertices of the other type, say black. That is the type of object I mean whenever I say bipartite ribbon graph in this talk: a fixed number n of edges, and at most n white and n black vertices. I hope this is clear.
And then you have to give a unique orientation for all these vertices. Play the same game with sigma two. So you have a second type of vertices say white, you draw the label in the cyclic order and etc. Then you just have to glue all labels, you know, label by one from one side to the other one, the labels given by two from one side to the other one. So this may be a little bit to theoretical. So let me let me give you an example. For instance, this is if you want the identity I make here this parenthesis empty for for for denoting the identity. So you have here two cycle of length one, perhaps here I need my annotation tools. And perhaps this is useful for giving you explicit example. Maybe I can take here another risk to use the pencil. I don't see the mouse anymore here. So let me take your the pencil and draw you. So here it's my identity. It is made with two cycles of length one. I hope you are able to see. So that's my sigma one on one side. And the same for sigma two. So I'm with that example here. So the same for sigma two right so it's two identities that I'm trying to draw the ribbon graph associated with that. So as you see, this is a cycle of length one. So it's a vertex with only one level. So that's that vertex for instance. So, and I put the label one as in this cycle you have the label one. So this is what I'm putting here. Second here, you have the second cycle, which is the cycle of with contains only two. So you draw here the cycle for flint two here, which we have just one level sorry, which is the two. So here we go, you have this. Great. So you play the same game with this on the other side. And you do have for sigma two. Now the same vertices here. Okay, so again, and then you join the labels. You may join labels one with one two with two, etc. I hope this is clear. And so that you see so let's make something more complicated. Let's have a look about on this side. We can look on this side. So I have here one cycle, sigma one, made with two labels one to so this is the way that you draw them in cyclic ordering one and two labels hooked to that vertex that black vertex. On the other side here you have the second, the second permutation sigma two, where you see here again two labels one and two. If you don't have too many labels, it's becoming a little bit trivial, of course, but when you have more labels, it's not obvious that you will have such a simple figure. And so let's have a look what's happening if you go in higher rank. Okay, so let's let's me take three vertices. Okay, I have to clean this. Perhaps can I just move on. Doesn't want to. Okay, sorry. Okay, good. So here is, for instance, the vertices with more vertices with more edges, sorry, any calls free. So I need to clean a little bit the board. There is a small lag between the moment that I am annotating and the moment is diffusing. I don't see any more my mouse. Anyway, so let me let me keep up. Look at the figure for instance one, the 11 figure below. Okay. I don't see even my mouse. Okay, so here we go is so let's let's focus on this figure number 11, which is right below. It is made with two cycles. One, two, three and one, three, two. So you just listed the labels one, two, three in that order in that cyclic ordering and one, three, two in the other cyclic ordering. Here, and then you join one level one with level one, level two and level two, level three and level three, level two and level two. 
So in this situation you generate a graph, and the cyclic ordering defines for you the embedding of the graph in a surface. Sometimes, of course, you may generate a graph with genus. For instance, in this tenth graph — I hope you see my mouse, because I am not able to draw easily on the slide — you have one, two, three and one, two, three, and there is a crossing of edges that makes this tenth graph non-planar, so this is a graph with genus. I hope this is clear. We can move on, and I need to clean the board a bit. Dear Joseph, you have a dustbin; you also have a trash tool. Yes — and if I trash, does it clean all my drawings? Okay, great, that's fantastic. All right, so: counting graphs is counting orbits. As I was explaining, a ribbon graph is an equivalence class in S_n x S_n under the diagonal adjoint action, on both slots. So if you are interested in counting how many ribbon graphs you have, you have to count these orbits, and by Burnside's lemma counting orbits is counting fixed points. You can implement the following formula, where the delta function is the delta function on the symmetric group: delta of the identity is one, otherwise it is zero. That is the formula for counting those ribbon graphs, and you can certainly play with your favorite program to compute these numbers. This is your sequence of numbers: the four that you see here is because you have the four graphs above, and here you have the 11 pictures, and it goes on to much more complicated objects at higher n. So this is the counting. You may ask yourself what happens if you revisit the same counting in a different light, that of representation theory. What I described is a direct-space computation with only permutations and groups; what if you go to representation theory? The delta function expands in terms of characters: chi_R is my character, d_R is the dimension of the representation R, and R is a partition of n, or a Young diagram. That is the expansion of the delta function. Here I am just playing a trick: I add more variables in order to expand everything quickly and recognize that this counting is nothing but a sum of squares of Kronecker coefficients. With this small game you are able to say the following: the sum of the squares of the Kronecker coefficients is nothing but the number of bipartite ribbon graphs with n edges. This is already interesting, because you have a combinatorial interpretation of the sum of squares — not yet of C itself, but the sum of squares is indeed constructible, because you just need to draw all ribbon graphs of this type to know what the sum of squares is. But this formula looks interesting by itself: a sum of squares — does it have an interpretation, does it have a meaning? The answer is yes, but you need to define an algebra constructed around these ribbon graphs, and this is what I would like to explain to you.
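Before moving on to the algebra, here is a short Python sketch for readers who want to reproduce the counts 4 and 11 quoted above. It uses the standard consequence of Burnside's lemma that the number of orbits equals the sum, over cycle types lambda of n, of the centralizer orders z_lambda; this reformulation and the function names are mine, not something shown on the slides.

from collections import Counter
from math import factorial

def partitions(n, max_part=None):
    # Generate the partitions of n as non-increasing tuples.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def centralizer_order(cycle_type):
    # z_lambda = product over cycle lengths i of i^{m_i} * m_i!,
    # where m_i is the number of cycles of length i.
    z = 1
    for length, mult in Counter(cycle_type).items():
        z *= length ** mult * factorial(mult)
    return z

def num_bipartite_ribbon_graphs(n):
    # Orbits of S_n x S_n under simultaneous conjugation (Burnside):
    # (1/n!) * sum_gamma |Z(gamma)|^2  =  sum over cycle types of z_lambda.
    return sum(centralizer_order(p) for p in partitions(n))

print([num_bipartite_ribbon_graphs(n) for n in range(1, 6)])
# prints [1, 4, 11, 43, 161]; the 4 and 11 match the pictures referred to above.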
We call it the graph algebra because it is an algebra made out of the graphs seen as vectors. Consider the group algebra C(S_n) — just the vector space of linear combinations of permutations with complex coefficients. Looking at the orbits of the equivalence relation above, consider the following subspace of C(S_n) tensor C(S_n), the tensor product of two copies of the group algebra: given a pair (sigma_1, sigma_2), you sum the tensor products over the whole orbit of the pair, and these elements span a vector space. It is a fact that the dimension of that vector space is nothing but the number of ribbon graphs we counted before. Why? Because for each sigma_1 and sigma_2 I am taking the orbit sum, so you do not have more than that number of independent vectors; the span gives you the dimension of K(n), which is the Z(n) we have already counted. The second fact to understand is this triple correspondence: the ribbon graph as a geometric surface is in one-to-one correspondence with an orbit in S_n x S_n, and also with a vector in K(n), a vector spanning K(n). In fact K(n) is more than a vector space: it is an algebra. You can check it as follows: take one of these spanning vectors, multiply it by another one, and you will see that the result is again a sum of elements of the same sort, up to some composition. So the multiplication is stable — you are in an algebra — and you can even prove that this algebra is associative; this is simply inherited from the fact that composition of permutations is associative. So K(n), which was a vector space, becomes an algebra. And there is more: there is a pairing on that algebra, given by formula (14) — I take the product of the delta functions on each slot and extend by linearity to K(n) — and you can show that this bilinear pairing is non-degenerate. This makes K(n) a semisimple algebra. So we have the following result: K(n) is a unital, associative, semisimple algebra — the unit comes from the element identity tensor identity — and there is a representation-theoretic basis Q which makes the Wedderburn–Artin decomposition manifest. The Wedderburn–Artin theorem tells you that every semisimple algebra decomposes as a direct sum of matrix subalgebras; that is the content of the theorem. Now you would like to know which basis makes that decomposition into matrix blocks manifest. That is the Q basis you see here: it is labelled by three Young diagrams, and the indices a, b below are exactly the matrix indices, the indices of the entries of your matrix blocks.
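To summarize the construction so far in a single formula — this is my transcription, with normalization factors left implicit — the algebra is the span of the orbit sums inside the tensor square of the group algebra:

\[
  \mathcal{K}(n) \;=\; \mathrm{span}_{\mathbb{C}}
  \Big\{ \sum_{\gamma \in S_n} \gamma \sigma_1 \gamma^{-1} \otimes \gamma \sigma_2 \gamma^{-1}
  \;:\; (\sigma_1, \sigma_2) \in S_n \times S_n \Big\}
  \;\subset\; \mathbb{C}[S_n] \otimes \mathbb{C}[S_n] ,
\]

with the product inherited from \(\mathbb{C}[S_n] \otimes \mathbb{C}[S_n]\), and with \(\dim \mathcal{K}(n)\) equal to the number of bipartite ribbon graphs with n edges.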
So how do you produce this basis? First of all, introduce the representation basis of C(S_n); it looks like formula (15). I sum over all group elements with the Wigner matrix — the representation matrix D^r of dimension d_r — so I sum over these matrix elements, and this produces the element Q^r_{ij}, at fixed r and fixed indices i, j. You can show that these elements form a basis of the group algebra; the factor kappa_r is just a normalization used to make the basis orthonormal. How do you now produce the basis of K(n)? Here is the general recipe. Take two of these representation basis elements of C(S_n) and form their tensor product, so that you are in C(S_n) tensor C(S_n). Make it invariant by acting with the diagonal adjoint action on both slots. The result is already invariant, but you still have the indices i, j hanging around, and I use Clebsch–Gordan coefficients to contract them — I do not want to see those indices anymore, so I use Clebsch–Gordan coefficients to neutralize them. But you have to pay attention: the Clebsch–Gordan coefficients of the symmetric group carry a multiplicity index tau. It counts the multiplicity of R_3 inside the tensor product R_1 tensor R_2, so this index ranges from one to the Kronecker coefficient itself. So this block — the Q's at fixed R_1, R_2, R_3 — is a matrix in tau_1, tau_2, a matrix of size C, the Kronecker coefficient itself. That is the block decomposition, the Wedderburn–Artin decomposition, of the entire space K(n). The kappa that I put here is again a normalization factor, to make the basis orthonormal if you wish. Another important thing about K(n): it is in fact a Hilbert space — it supports a sesquilinear form that is non-degenerate. The form is nothing fancy: take linear combinations in the tensor product, take the complex conjugate of the coefficients on one side, keep the coefficients on the other side, and use the delta function on the symmetric group to pair the permutations that are left. You can show that this is again non-degenerate, so you have a Hilbert space. There is another operator that turns out to be important in our setting, a conjugation — an involution, in fact, that we call conjugation. It runs like this: take a linear combination of this sort and just invert the permutations. Of course you can check that applying this twice to an element gives back the initial element, so it is indeed an involution. So now everything is in place — K(n) as an algebra, K(n) as a Hilbert space — and we can define operators acting on that space. To do this, I first need to introduce some particular elements of this space.
And I will I will introduce this notation. I'm using a representative. Okay, remember that each pair tau one, tau two, you know, if you let act, if you act by adjoint diagonal action action on this pair, you generate a Norbit that is that is that is also ribbon graph. That's that's important. So let me fix here a particular label of one of my ribbon graph, they're running from one to rib and this is the cardinality. If you want the Z of N that I have before now I just write it like a ribbon. All right. So, so let me take a particular representative inside this orbit and let act again, adjoint action on both slots. So you are and you are again on the same space. Okay, so this is the same formula but I'm just referring here with our I'm just reporting that I'm in ribbon level by our I hope it's clear. You can see that you do have an automorphism subgroup of SN, leaving tau one and tau two that pair invariant. So you can recast this, this summation in terms of some of all distinct element in the orbit. You factorize now the sub the automorphism group outside. So this is simple. So you extract the automorphism group, but this pre factor here is by the orbit stabilizer theorem is nothing but one of the orbit. Okay, something more interesting also that this basis in fact the previous one is orthogonal respect to the billionaire pairing that I have already introduced and they are orthogonal and this is your, your pre factor, your normalization factor. So let's have a look. How do you multiply two of those elements, these two elements are do have the expand back in terms of again the same basis and the coefficient here the structure constants TRST, CRST here, if this is what I would like to understand how does it work. Okay, what is this number. Okay, I will play a little bit with some notations here rather than having to answer product here. I'm just writing it as just a sigma R, rather than having to two elements to sigma one R cross sigma one to I just write it in that way, sigma of R and multiplying this means I am acting on both slots. Okay, so in that sort, the previous formula looks simpler and can be written in that way, or in the other way here. So now you are ready for the computation. So you will let you, you just compute this product, E R E S. And what do you obtain is the following. You just need to do a small algebra. The small steps are here. You end up with this formula 323 and looks like this so multiplying these two elements, you some of all element of the same kind, one of orbit of R and a sum of a delta function acting on orbits. Okay, so what is it that what what is what do you have in here. So the algebra is here. I will let you know the slide to to to to to upload it if people are interested on the detail of the of the of the steps, you can have it there. So, so let's have a look on that what's that. So this is the orbit of all elements in the orbit of our acting on the representative of ES here, and you check if this orbit has an intersection with that orbit T. So that summation that you that you have here over this delta function counts the number of times that the multiplication of of elements from orbit are with a fixed element in the orbit s gives you back an element in the orbit T. So it's a counting procedure here. So it's an integer. And the you and now here you see that this number here is just an integer divided by this or bar that the size of orbit. Now, let's introduce our Hamiltonian in the linear operator acting on K and that we will call Hamiltonian. 
So take n and k with 2 <= k <= n. I will write C_k for the conjugacy class of elements of S_n made of a single cycle of length k and all remaining cycles of length one. For instance, if I take n = 3 and k = 2, the permutations of this class have a single cycle of length two and one cycle of length one. So that is C_2, for instance, and you can play the same game for any C_k with k ranging from two to n. Now sum all the elements of this conjugacy class: you can show that these elements T_k are central elements in C(S_n). The first important fact is a lemma by Kemp and Ramgoolam: the set T_2 up to T_{k*(n)} — a subset of all the T_2 up to T_n — is able to generate the center of the algebra. That is something you must keep in mind. This k*(n) need not be n; it can be even smaller, and there is an interesting sequence for what k*(n) can be, but I will not be able to discuss it here. So a subset of the T_k's generates the center — that is an important fact. The second fact is that the basis Q that I introduced before consists of eigenvectors of these T_k operators. And what are the eigenvalues? Normalized characters: if you take Q^R, the eigenvalue is the character of T_k in the representation R divided by the dimension — I call this ratio the normalized character. So T_k has as eigenvectors the Q basis, the representation-theoretic basis of the group algebra, and its eigenvalues are normalized characters; to prove this you essentially just need Schur's lemma, so it is not complicated to show. Now let us introduce our operators on K(n). Define the following elements in C(S_n) tensor C(S_n): T^1_k, which is T_k acting on the first slot; T^2_k, which is T_k acting on the second slot; and T^3_k, which is the same sum over the conjugacy class but acting on both slots at the same time. T^3_k cannot be represented as a single T_k, but it is almost of that form. These three operators are interesting: they are linear, and you can show that they actually belong to the algebra — just multiply on the left and on the right by proper elements and you will see it. But what is more important is the following: the T^i_k act as linear operators on K(n), and they have matrices, which I denote M^{i,k}: when T^i_k acts on E_s, the coefficients M^{i,k}_{ts} are the matrix elements. What is important is that these matrix elements are non-negative integers. How does it work? It comes from the fact that the T_k themselves are proportional to the E_r that I introduced before, and the coefficient of proportionality is nothing but the size of an orbit.
So T^i_k is proportional to a certain basis element, which I call E_{r(k,i)}, and the coefficient is nothing but the size of the orbit. Remember from before that the action of an E_r on an E_s gives a sum of E_t's with a factor of one over the orbit size and a delta-function count. So you immediately see that the orbit size cancels away, and you are left with a nice coefficient which is an integer. That is your matrix element: the matrix element of this operator acting on K(n) counts the number of times that the elements in that sum, acting on a fixed representative of the orbit s, produce an element in the orbit t. This lesson we already know. Now, what is also interesting and important is that the T^i_k themselves are Hermitian operators on K(n). We can play with this formula — if you want to prove it, the steps are here. So they are integral, in the sense that their matrices have integer coefficients, but they are also Hermitian operators. And that is great, because you are now in an algebra which is also a Hilbert space, you have Hermitian operators, and you can discuss quantum evolution. You are right in the setting of quantum mechanics, and this is why the talk — and our paper — refer to this setting as a quantum mechanics of ribbon graphs: your ribbon graphs are vectors, you can let them evolve, and you can think about all the things that matter for a quantum mechanical system. I will not be able to discuss more about this because I am already short on time, but anyone who wants to discuss it further is more than welcome; there are consequences of that. The proposition that you have to understand is the following: T^1_k, T^2_k and T^3_k have as eigenvectors exactly the Q basis — remember, that was my Wedderburn–Artin basis — and the T's acting on those produce the characters. How does this work? T^1_k acting on Q gives you the normalized character of R_1, the first slot; T^2_k acting on Q gives you the normalized character of the representation in the second slot; and T^3_k gives you that of R_3, the third slot. So the T's have eigenvectors and eigenvalues which are known. Why? Remember that the T^i_k are built from the T_k, and the Q's are built from the Q^R_{ij}; but you know that the T_k has the Q^R_{ij} as eigenvectors, so you understand quickly that the T^i_k act on the Q's in the same way, and the proof follows rather easily. What is also important: we take this as an input in our framework, and it is certainly an input from representation theory. I mention this because you might want your combinatorial interpretation to be totally free of representation theory; in our setting we do have this input, but at the end of the day we will have a fully combinatorial interpretation of our result. Okay, so we can move on. Another important result is due to Kemp and Ramgoolam, and it tells you the following: the list of eigenvalues of T^1_2 up to T^1_{k*}, T^2_2 up to T^2_{k*}, and T^3_2 up to T^3_{k*} uniquely determines the triple R_1, R_2, R_3.
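For concreteness, the central elements and the eigenvalue statement used here can be written as follows; the second equality is just the standard evaluation of a central element on an irreducible representation, and the notation is the one introduced above:

\[
  T_k \;=\; \sum_{\sigma \in \mathcal{C}_k} \sigma \;\in\; Z\big(\mathbb{C}[S_n]\big),
  \qquad
  T_k \, Q^{R}_{ij} \;=\; \frac{\chi_R(T_k)}{d_R}\, Q^{R}_{ij}
  \;=\; \frac{|\mathcal{C}_k|\,\chi_R(\sigma_k)}{d_R}\, Q^{R}_{ij}
  \quad (\sigma_k \in \mathcal{C}_k) .
\]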
So with the eigenvalues of these operators you are able to uniquely pick out your triple R_1, R_2, R_3. This is important, because you will immediately see that these operators have an eigenspace that is exactly spanned by your Wedderburn–Artin block — the eigenspace they single out has exactly the dimension of this block, which is C squared. And that matters, because you now form the following matrix, where the M^{i,k} are exactly the matrices of the T^i_k with the eigenvalues subtracted — be careful, each eigenvalue should multiply an identity of the same size as the matrix; I omitted it, but it should be there. What happens is that the null space of this problem is exactly of the dimension of the Wedderburn–Artin block, which is C squared. It is important to know that these normalized characters are known combinatorially, by the Murnaghan–Nakayama rule: we know how to construct them using combinatorial rules. Second, they may be rational, but we can take the least common denominator and multiply the entire system by it to remove the denominators, if you wish. You then end up with a fully integral operator — an operator with only integer entries — multiplying some vector and giving zero, and this operator too has a null space of dimension C squared. That is step one: you have produced your operators; they have a kernel, and that kernel is of dimension C squared. Now, how do we extract the Kronecker coefficient? I will go quickly, as I am almost out of time. The null space of this operator, as I mentioned, has dimension C squared. For matrices with integer coefficients there are procedures for determining the null space, and it is known that the null space is formed by vectors generating a lattice — a sublattice, in fact, of the entire space. How does it work? Let me just write X for my operator. You have a decomposition, what we call the Hermite normal form: a decomposition of your matrix into two matrices, one of which is unimodular — that is my U — and H, which is upper triangular. The way you proceed in the decomposition is just integer linear combinations among the rows, swaps of rows, and multiplication of rows by minus one; these are the allowed operations, and all these operations are concatenated into the matrix U. What is important is that the dimension of the null space of X counts the number of null rows of H — I should also say that H is upper triangular and is unique. The null rows, listed at the bottom of H, give you exactly the null space of X: the null space of X is spanned by the rows of U corresponding to the indices of the null rows of H. So this is your null space, and the entire procedure involves only discrete steps, something constructible — you may say that the construction is integral.
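As a toy illustration of this step — my own two-by-two example, not one from the slides — take X to be the all-ones matrix. Integer row reduction gives U X = H with U unimodular and H upper triangular:

\[
  \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}
  \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
  \;=\;
  \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} ,
\]

so H has one null row (the second one), and the corresponding row of U, namely (-1, 1), spans the integer null space — indeed (-1, 1) X = (0, 0). In this row convention it is the null space by rows that is extracted, matching the description above.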
So you are ready for the interpretation of C squared itself. For every triple of Young diagrams with n boxes, your ambient space is the lattice Z to the power of the number of ribbon graphs, and that lattice of integer linear combinations of the geometric ribbon graph vectors E_r contains a sublattice of dimension C squared, spanned by the integer null vectors of the linear operator L. So that is already an interpretation, and a constructible way of getting C squared. But we want the Kronecker coefficient itself, so we need another, refining counting, using now the involution S. S splits this space: forgetting everything else and looking at K(n) just as a vector space of dimension Rib(n), the number of ribbon graphs, this space decomposes into the block with S-eigenvalue plus one and the block with S-eigenvalue minus one. Now look at the Wedderburn–Artin decomposition: you have the decomposition into blocks V^{R_1,R_2,R_3}, each of dimension C squared. You can project such a block onto the S = +1 eigenspace, and, from the way S acts on the Q states, you can count the number of degrees of freedom and find that the dimension of this space is C(C+1)/2; for the minus-one space, the dimension of the block projected onto S = -1 is C(C-1)/2. To implement the projection from the entire space onto one of these subspaces, you just need to stack the corresponding matrices below the previous ones and play the same Hermite-normal-form game to extract the null space from this enlarged block. Now you have C(C+1)/2 and you have C(C-1)/2. How do you get C? Well, it is the difference of these two numbers; equivalently, it amounts to making precise an injection from the second space into the first. And this gives you a constructive interpretation of C. So this is the second theorem that you achieve: for every triple of Young diagrams R_1, R_2, R_3 with n boxes, there are three constructible sublattices in the full lattice, of respective dimensions C(C+1)/2, C(C-1)/2 and C itself. And this answers the question. So in conclusion, let me quickly say that the Kronecker coefficient is the dimension of a constructible sublattice of the lattice of ribbon graphs, Z^{Rib(n)}. The proof relies on Hermitian operators with integer eigenvalues acting on a Hilbert space, which is also an algebra built over ribbon graphs — so the entire thing can also be seen as a quantum mechanical system. The Hermite normal form algorithm offers the lattice interpretation directly. The method we have discussed can be generalized to objects much more general than the Kronecker coefficients: for instance, you can take the multiplicity of R_d in the tensor product of R_1 up to R_{d-1} — the generalized Kronecker coefficients — play the same game, and get answers about those as well; and it can also be generalized to other coefficients, such as the Littlewood–Richardson coefficients, but for a broader group-theoretical framework. Now, our open problems. Of course, there are people working on combinatorial interpretations of the Kronecker coefficients.
We must be making contact this with those studies, because we know what counts the chronicle for all three rectangular shapes or hoop shapes or a mixture of those, but only on this specific case you are able to to define a combinatorial interpretation of the chronicle. So we must be trying to make contact with those studies. So this must be done. But also, I would like to say part of our proof relied on representation theory, perhaps in the spirit of Stanley. It doesn't like this. We don't know. But if you want to remove the this representation theoretical input in your in your in your framework. So you need to look at again this Eigen value problem this Eigen value thing where the TK acts on the queue and you collect the the the normalized character times Q. So if we are able to provide an interpretation of the Eigen value of TK, you know, that was the sum of a Sigma belonging to a class without relying on this queue, we will be done. So why the question is why the TK, the Eigen value of the TK satisfy the more now, if we could answer this, well, the entire stating will be fully combinatorial and we will. We will be able to give a full combinatorial proof for the chronic equation. So thank you for your attention. I will stop there. Thank you. The other questions. Maybe the rich had questions. Maybe no. No, actually, I just was resolved in the chat. Okay, so there are a few questions on the charts. Maybe I can address this. Somebody okay. Is is okay is the multiplication of CSN in a rated from it is a tensor product tensor product CSN tensor C of SN. Yes, yes, yes, our Sanjay is there and already answers it. Okay. I have a question. Go ahead. It is about the the country, they see classes of type and minus one one. So hold on, I think my presentation has disappeared. Yes, it has. Okay, sorry, so I need to reboot it again. Sorry. So let me go back to that. So maybe I can. Yes. Hold on a second. Let me try to reboot it. I've experienced some trouble here. So let me put full screen first. Yes. You're a T. Yes, your T to T K star and yes, conjugate conjugacy classes of type and minus one one because you have a sub cycle of lengths and minus one and fixed point. Okay, so hold on Gerard. So you Yes. Come again, your question please. Yes. Yes. Yes. Yes. Yes. I have not my question is that a new monkey. Maybe you know of him. He made the work. Many, many times ago. Not proving that it is. Not proving that it. Yes, exactly. I think not proving that it generates the center but it is pre prep, pre prep, rating this proposition. Because these, these classes T, TK are multiplicative if you multiply two of them. You find a sum of, of, of, of other. if you consider the Z module generated by the CK up to my remembrance, it is closed by multiplication. Do you know this word? It is a thing. The work of Maki you say? Yes, Antonio Maki. Antonio Maki. From the Sabienza. I didn't know personally, I didn't know about it, but I think it's right as the center is stable. So what you say is entirely right. Yes, yes. As the center is stable, so if you multiply this, you're gonna end up there as well. Of course, now you know more that you can say that his work of 20 years ago can be deduced from this work now. I am just pointing you over. Yes, yes. Maybe not so complete, but important to cite in the bibliography. So Antonio Maki, yes. Antonio Maki, the Sabienza. And I can try to recover the paper. Okay. Because it was an idea of Professor Schützenberger. Okay. Can I make a quick comment? Yes, go ahead. 
So just, you know, so in general, the linear basis for the center is given by T sub P, where P is a partition of N. Here, this set sort of selects out those partitions which have one cycle of length two and remaining cycles of length one, that's the T2, or T3, which is one cycle of length three, remaining cycles of length one. So it just, there's only at most N minus one of these guys. So it's a subset of all the partitions. But if you take these guys and take products of them, you generate the whole thing, which is linearly spanned by all partitions. So it's a rather small subset in a bigger center of SN. So, but we'll be happy to have a look at that paper, certainly. Okay. Yes, but it was just a pointer. It does not withdraw nothing of your merit, you know. Oh, yes. Okay. So thank you very much for this nice talk.
The action of subgroups on a product of symmetric groups allows one to enumerate different families of graphs. In particular, bipartite ribbon graphs (with at most n edges) are enumerated as the orbits of the adjoint action on two copies of the symmetric group (of order n!). These graphs form a basis of an algebra, which is also a Hilbert space for a certain sesquilinear form. Acting on this Hilbert space, we define operators which are Hermitian. We are therefore in the presence of a quantum mechanical model. We show that the multiplicities of the eigenvalues of these operators are precisely the Kronecker coefficients, well known in representation theory. We then prove that there exists an algorithm that delivers the Kronecker coefficients and allows us to interpret them as the dimension of a sub-lattice of the lattice of the ribbon graphs. Thus, this provides an answer to Murnaghan's question (Amer. J. Math, 1938) on the combinatorial interpretation of the Kronecker coefficient.
10.5446/51300 (DOI)
Thank you very much, first important question you can see the slides, hopefully. Yes, we can see you. Yes. Can I start? Should I wait for the minute? No, no, it is a... You can begin now if you want, if you wish. Thank you very much, Jaan, also the organizers for this invitation. It's always very nice to be at CUP now virtually, but I hope at least this way some more people can participate. And the talk I would like to give today is about a topic which I think is quite the spirit of CUP, because it is about this type of research which tries to make sense of the deeper links, the mathematical links between approaches to manipulation of graph-like structures and computer science, mathematical physics and mathematics. So, I mean, of course, every of these disciplines has many, many examples of situations where it is meaningful to look at these types of manipulations. For example, the very simplest such is drawn here at the bottom. It's just when you manipulate, for example, in physics, in distinguishable particles, this would be a model of chemistry, for example. Then, of course, in social network science, maybe you look at processes which can be formalized as, you know, manipulating locally a network graph, like, for example, the rewiring and edging here, but then there are also many, many more sort of Baroque examples. I mean, trees, for example, can be seen as a type of graph which has additional structure, like particular incidences, for example. And my personal motivation is really from organic chemistry and from biochemistry, where you encounter these beautiful types of theories, where, for example, you formalize proteins and other molecules through these type of writing steps. And sort of my main motivation for this work is that I'm originally a mathematical physicist, but I'm now working in computer science, and I discovered that there's relatively little cross-path between these three disciplines that would allow you, for example, to take a continuous time Markov chain theory and directly apply it to one of those organic chemistry simulation techniques in biochemistry, say, for example. And, of course, one of the main motivations is to ask, so in these type of very complex systems, where can you attack with combinatorics? So what are the combinatorial structures, of course, beyond the data type of these graphical structures that one could use to analyze the systems? And I'm just putting this here because, at least for bio and organic chemistry, it is really the case that, I mean, sometimes you have abstractions which are made because you can then, at least as a toy model, talk about these systems. But at least in the case of bio and organic chemistry, these are really the state-of-the-art techniques to simulate these systems. So in Kappa is a framework where you have these sort of circuit side graphs, there's a special type of graph, you know, everywhere it takes some sites at which you can link to other vertices. And in organochemistry, it's even more fantastic, it's really that the molecules you draw by hand also for chemical reactions are a data type that you can then formulate transformations on. And so these are the two main frameworks. So if you, I mean, I would say a little bit about this later on, and it so happens that there's really a very formal semantics for these type of theories. And that is good because that is really, I mean, it's very mathematical formulation. But that's a plus. 
What's the problem is that if you then look really at a realistic scale example, and here's a little piece of the human metabolic network, in this huge graph, every part of this network is, I mean, an enormous complexity, and you have a very many different types of molecules that participate in these reactions. And here's only drawn like the macro molecules, or the larger molecules. And then you have enormously many reactions. And so the real question is all of these transitions, these manipulations are fired at random. So what is it really you can say about these systems? And especially if you look at biology, which is still a very fast evolving field, of course, I mean, it is nice that you have this interesting data type, which is quite good also at, you know, encapsulating new knowledge you have about just individual transitions. But you can see immediately, since every of these green blobs, these agents, as typically on the order of seven to 10 sites, this is an extremely highly combinatorial data structure. And then if you start manipulating, it is very unclear of what you can actually extract as information. So main problem is to understand the function of such systems, because nature has given them, this is the abstraction that is very accurate. So what can you say about the function of the systems? And so, so to just motivate how one could approach this, I'm going to take an example of the transformation system, which is of course much simpler than these biochemical reactions, but it's complicated enough to explain the main sort of root of attack for these systems. So here's a system where you have an input state, which is just some graph, now performing transformation. So the language for these transformations always looks similar, you have little pieces of evolution, if you will, each of those is asking for some input drawn at the bottom in this little cartoon, and some, then it produces some output. And the output here is, so the input is here, some two vertices, the dashed lines indicate that they are kept throughout the transformations are identically kept, and then you link them up with an edge. That's this local manipulation. And the second part of this framework of the semantics is that you to apply such a rule, you have to exhibit a match, and a match is nothing but an embedding of this input motive into the graph. You see immediately that, I mean, here I've drawn a very, very small graph, and I mean, this rule is sufficiently simple, but already here you have quite a high number of possible matches. So part of this sort of encoding is that it's usually quite simple to write the rules, but I mean, there's usually quite many ways of applying these rules. But anyways, so then here, applying the rule amounts to locally, then, you know, running this transformation, which here amounts to inserting this edge in these two vertices. So let's do a few more of those steps, another of such exactly the same rule applied at a different place. Now, of course, I mean, I'm only showing some very simple rules, you could also have one which unlinks, for example, like this one. And let's take another one to link and another one to link. Okay, so now I have given you a small sequence of the transformation system. And very nicely, not only for this particular case, but I mean, for an enormous variety of possible things of graph like structures and their manipulations, they are all covered by a mathematical theory called categorical writing. 
And nicely enough, it is very close to implementation sources. I mean, this is something you can, you can put very directly into algorithms. So it's sort of the source code for these sort of manipulations. But sort of the typical problem is now, I mean, going into this picture of what happens in biochemistry, how could you understand it? So imagine you were given these type of transformation, you had a little vocabulary, say just the linking of edges and the unlinking. But now you are asking, so of all of these possibilities, what is the likelihood of if you fire these say at random with same properties for sake of argument, of seeing a triangle appearing? Yes, so I mean, you're trying to track how many times to see a triangle that was really produced, I mean, newly produced, or maybe deleted through this application of sequences. And so what classical writing theory doesn't have much of an answer to is how to approach this and utilizing this feature of writing. So what one can do, and this is now I will motivate this more later on, but sort of the the starting point of this combinatorial analysis I would like to propose has to do with, of course, you can track how actually these rules were applied. And here in this picture, you see that these are simply just the, I think, five steps one after another drawn works vertically. And whenever you you attack with a second, third, fourth rule at a position where you have previously already applied a rule, then this is marked with these light blue lines. And the red lines mean if you touch something that is in your original graph, but not interacting. And so immediately, it is obvious that there's some combinatorics in how you can sort of plug together these, these transformations and acting with each other versus how how they are acting on on this graph state. So again, like if you if you now look at this problem of finding triangles, you discover something very important, namely that sort of producing a triangle is an inherent feature of of this above sequence of rules. I mean, because they are plugged together in a way that they are guaranteed to produce a triangle, there might be other ones that accidentally are produced by just simply linking up a V shape say with one edge. But that is definitely already one guarantee you can give this sequence will produce a triangle. And the other point of interest is, so these two parts of the transformation sequence highlighted in orange, of course, they are happening, this is one possibility to apply, but the two actually do not contribute anything to the production of the triangle. In fact, they they they only sort of they produce an edge and deleted later on, so that that actually doesn't contribute anything to this count. So what one needs is some sort of mechanism, how one can analyze this combinatorially. And the way to do it is to first put to as first class citizens, so to speak, these interactions of rules. So in some shape, to produce these type of diagrams with a recursive or other generative mechanism, and to reason about their combinatorics. And these objects is, I would I call tracelets. Okay. So the plan of the talk is that I would like to start from something which I think is quite close to interest of many people in the audience, which is certain type of HOPF algebras, which have been introduced by Gerard Duchamp, Carville-Penson and others, which give exactly sort of the blueprint of this type of combinatorial construction. 
Then pretty much the second part of the talk gives a little background on categorical rewriting theory, just enough to demonstrate how this works, and then shows how you get to tracelets, and in particular to a notion of algebras that replaces these diagram algebras in the general case. For me personally, I started working on this about five years ago, when I was studying the paper by Gérard Duchamp, Karol Penson and their collaborators from 2011, which was called "Combinatorial algebra for second-quantized quantum field theory". From that idea I developed my very first version of a notion of graph transformation algebras, together with colleagues. So here I am presenting a slight reformulation of Duchamp and Penson in a language which later generalizes directly to the case of rewriting, but it is completely equivalent. The idea is that if you look at the manipulation of graphs that have only vertices and no edges, there are some very elementary things one can do. We are always only looking at isomorphism classes of graphs, so in this sense vertices are indistinguishable, and then there are two elementary transformations: you can add a vertex or you can remove a vertex. The starting point for getting to some notion of diagrams is that from these elementary building blocks you can assemble larger diagrams. They will typically look as follows: you have some occurrences of these little elements of creation and deletion. In each diagram some vertices are characterized as outputs and some as inputs; in the pictures, because the creation only has one vertex, I draw these little dashed lines to indicate whether it is an output or an input. The diagrams are read from bottom to top. The second piece of information in a diagram is how some of these outputs, some of the creations, are wired into the inputs of some of the deletions. More mathematically, I am given two sets, the sets of input and output vertices, and a relation between them which should be one-to-one. Again I only consider, essentially, isomorphism classes of such diagrams; more concretely, equivalence classes under joint permutations of the vertex sets which preserve the incidence structure. So these are the diagrams. Now the beautiful idea from the aforementioned paper of Duchamp and Penson was that you can build the mathematics over these diagrams, or rather over their equivalence classes, by constructing a vector space whose basis is indexed by them: for each diagram d you have a basis vector, which I always write as delta of d. And beautifully, and you will immediately see how this links to rewriting, one can construct on this vector space a binary operation, the diagrammatic composition.
And the idea is that if you are given two diagrams, first d1 and then d2, you can sum over all ways you can consistently wire them together; in this drawing on the right, for example, the blue m12 marks the lines that link some outputs of the first to some inputs of the second. Performing the composition then amounts to forgetting that these were two diagrams wired together, viewing the whole thing as one diagram, and of course taking the equivalence class. This gives a very nice composition operation, and the first important result is that the vector space together with this binary operation is an associative unital algebra, the diagram algebra, whose unit element is the basis vector associated to the empty diagram. In the literature I knew, this was the first description of such a diagrammatic operation that reproduces the combinatorics of these composition structures. Now, looking more closely, we see that our little catalogue of elementary diagrams was not complete: the elementary diagrams are essentially the connected components of such larger diagrams, which are of course the creation and the deletion of a vertex, but the only other connected component you can get is creating a vertex and then deleting it again. So these are the three elementary mini-diagrams. If you simply introduce a notation for pasting such elements together disjointly, up to isomorphism, which happens to be the composition along the trivial overlap in this definition, then, just by looking at how these diagrams are drawn, you see that the equivalence class of any given diagram is completely characterized by the numbers of occurrences of these three elementary letters, simply by construction. That is already an interesting sign that one could get some combinatorics out of this. In particular, since we now have a binary operation we can define a bracket, a commutator; I just use the ordinary commutator symbol for the commutator in this diagram algebra. These three elements have the interesting property that, in the Lie algebra formed by them and the bracket, the only non-trivial commutation is the following: if you first create and then delete, you have one more option, namely the e pattern, than the other way around. That is the only commutation relation you have, so the basis vectors of these three diagrams form the Heisenberg Lie algebra; this is exactly the commutator structure of the Heisenberg Lie algebra. And now one can use a very nice result from the mathematical literature, the Poincaré–Birkhoff–Witt theorem. You form the universal enveloping algebra of the Heisenberg Lie algebra, which is to say you can tensor together arbitrary basis vectors chosen from these three types, modulo the relations generated by the ideal coming from exactly this commutation relation.
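As a worked sketch of the structure just described (the symbols and the ordering convention below are chosen here for illustration and need not match the slides):

```latex
% Elementary diagrams for vertex-only graph rewriting:
%   v^\dagger : create a vertex (no input, one output)
%   v         : delete a vertex (one input, no output)
%   e         : create a vertex and delete it again (no external legs)
% Composition sums over all partial matchings m of outputs of d_1 (applied first)
% with inputs of d_2 (applied second):
\[
  \delta_{d_2}\star\delta_{d_1} \;=\; \sum_{m\,\in\,\mathrm{Match}(d_1,d_2)} \delta_{\,d_2\circ_m d_1}\,.
\]
% Creating then deleting admits one extra wiring (the deletion may hit the new vertex),
% so the only non-trivial commutator is
\[
  [\delta_v,\delta_{v^\dagger}] \;=\; \delta_e\,,\qquad
  [\delta_e,\delta_v]=[\delta_e,\delta_{v^\dagger}]=0\,,
\]
% i.e. the span of \delta_{v^\dagger},\,\delta_v,\,\delta_e carries the Heisenberg Lie algebra.
```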
There is a very nice basis for this space: take some arbitrary order on the elements, in particular the one where you first write the string of all the v-dagger occurrences, then all the v's, and then all the e's. Then you can write a basis for this universal enveloping algebra of the form shown here; this is the Poincaré–Birkhoff–Witt theorem. The interesting thing is that this is again characterized just by a triple of integers, the numbers of occurrences of each of the elementary basis vectors. Recalling that we had this notation of disjoint unions of diagrams, and that the characterization of an equivalence class of a diagram is exactly by the numbers of connected components of each type, it is plausible that you can find an isomorphism between elements of this diagram algebra and elements of the universal enveloping algebra of the Heisenberg Lie algebra. And indeed: for every disjoint union you produce the ordered tensor product, and conversely, going the other way, you essentially forget the order. But more than just a coincidence, it turns out that this is actually an isomorphism of algebras; it exactly preserves the algebra product from D into U and the other way around. So that is the first nice observation. But of course, since we have people here very well versed in Hopf algebras, it is well known that this universal enveloping algebra of the Heisenberg Lie algebra is a Hopf algebra. So the question is where you get a coproduct from, because one could ask whether the diagram algebra is also a Hopf algebra. And indeed there is evidently this idea that you have a diagrammatic structure which is already characterized through connected components, so you can really dissect a given diagram into its connected components. We need some notation: effectively, every diagram is just the disjoint union of these connected diagrams, and the only strange thing in the notation, which is for convenience, is that the disjoint union over the empty set is just the empty diagram; it makes the formula notationally nicer. And now the coproduct is just the sum over all ways to partition the diagram, with part of it going into the left tensor factor and part into the right tensor factor. This definition does give you a coproduct, and indeed the isomorphism I showed before extends to a Hopf algebra isomorphism, so you can define a Hopf algebra structure on these diagrams. I spare you all of the many axioms, but this was, I think, one of the core results in the work of Duchamp, Penson and collaborators. And I apologize for the typos; this morning I had to write this by hand. If one now looks closely, so far we have only talked about high-level properties of these diagrams, but one can simply write down the product of two elements in the diagram algebra. Again, they are characterized just by the numbers of occurrences of the elementary patterns. The main point is that a very interesting coefficient occurs: the first diagram has k1 outputs, so vertices created; the second one has so-many inputs, so vertices that will be deleted; and the sum is over all ways of pairing those, forgetting the order of the pairing.
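A compact way to record the isomorphism and the coproduct just sketched (again with illustrative notation, not necessarily the speaker's):

```latex
% A diagram d with a creations, b deletions and c create-then-delete components is sent to
% the PBW-ordered monomial in U(\mathfrak{h}):
\[
  \varphi:\;\delta_d \;\longmapsto\; (v^\dagger)^{a}\, v^{\,b}\, e^{\,c}\,,
\]
% and this is an isomorphism of algebras D \cong U(\mathfrak{h}).
% Writing C(d) for the set of connected components of d, the coproduct is the sum over
% all ways of splitting the components between the two tensor factors:
\[
  \Delta(\delta_d) \;=\; \sum_{S\,\subseteq\, C(d)} \delta_{\,d|_S}\otimes\delta_{\,d|_{C(d)\setminus S}}\,,
\]
% which makes the diagram algebra a Hopf algebra and upgrades \varphi to a Hopf algebra isomorphism.
```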
So this is exactly the combinatorial coefficient, the number of ways of doing this pairing. But if you now look at the famous Heisenberg–Weyl algebra, or more precisely its representation on the number-vector basis, which everyone knows from quantum mechanics: there is an algebra defined over the vector space indexed by the natural numbers, which has two generators whose representations act as follows. The operator a-dagger on the vector n gives you the vector n plus one, and a, the annihilator, on the vector n gives you either the zero of the field, if n was zero, or n times the vector n minus one otherwise. And indeed, the normally ordered elements in this algebra (normally ordered means first all annihilators and then all creators) have precisely the above number coefficient in their algebra structure. For Duchamp and Penson that was the motivation to study this diagram algebra, because, as you saw before, the coefficient is purely produced by the combinatorics of matchings in these diagrams. Now one can immediately make the link. I simply use different letters now: before these were called v-dagger and v for vertex creation and deletion, and I now use capital A-dagger and A in the other algebra, just to say that these are exactly the same diagrams but seen in a different algebra. For each there is a little dictionary: with phi-bar you forget, in a diagram of this Hopf algebra structure, all the parts which are these create-and-then-delete pieces, and this way you get a diagram on the right-hand side. And if you embed such a diagram into the larger algebra, you just have a Hopf algebra diagram without any of these create-and-then-delete patterns. It turns out that this completes the picture very nicely, because this gives you an algebra H, again defined over diagrams, which is exactly a diagrammatic encoding of the Heisenberg–Weyl algebra: the algebra with the single commutator, A commuted with A-dagger gives the identity. Finally, a little piece of the information is that you also have a representation: to such a basis vector in H you can assign an action on number vectors through the representation. Everything put together gives you a very nice picture of the structure of the Heisenberg–Weyl algebra, mostly explained combinatorially, and even the representation can be seen as a combinatorial action. In summary of this part: I found this back then extremely interesting, because all of the combinatorics of how these transformation steps interact is encoded in the diagrams, and you then postpone to a later step, through this representation rho, the action on states. And this is exactly the type of deciphering that is needed to make progress on these transformation systems. Now I just want to show you one thing which is still a bit experimental. The key point about the Heisenberg–Weyl algebra is that you can formulate chemical reactions in a language where you do not look at the internal structure of molecules; you just abstractly count the numbers of occurrences of the different molecules, and so one can quickly write down a continuous-time Markov chain for it.
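The coefficient the speaker refers to is, in standard normal-ordering notation, the following textbook identity, written here only for orientation:

```latex
% Heisenberg--Weyl relations and their action on the number basis:
\[
  [a,a^\dagger]=1,\qquad a^\dagger\,|n\rangle = |n+1\rangle,\qquad a\,|n\rangle = n\,|n-1\rangle .
\]
% Multiplying two normally ordered words reproduces the matching coefficient of the diagram algebra:
\[
  \bigl((a^\dagger)^{k_1}a^{\,l_1}\bigr)\bigl((a^\dagger)^{k_2}a^{\,l_2}\bigr)
  \;=\;\sum_{s\ge 0} s!\,\binom{l_1}{s}\binom{k_2}{s}\,(a^\dagger)^{k_1+k_2-s}\,a^{\,l_1+l_2-s},
\]
% where s!\binom{l_1}{s}\binom{k_2}{s} counts the partial matchings of s annihilators with s creators,
% exactly the wirings of the corresponding diagrams (each matched pair becomes an e-component).
```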
So this is now in the Bargmann–Fock basis, where annihilation is d by dx, creation is multiplication by the formal variable, and you track the probability distribution of states with n particles, encoded as the monomial x to the n. So this is a birth-death process, and if you draw these pictures again, the birth-death process chooses when to jump, when to perform transitions, either a creation or a deletion. And of course we could imagine looking at this picture in the Hopf algebra: there you would track when a creation was followed, immediately or maybe at a later point, by a connected deletion. It is impossible to put a measure on every possible time point for this, but what you can do is take a box of time, capital T, and then characterize the content of these transitions by exactly the classification into connected components: creation events, deletion events, and events where you create and then delete. There is a little trick; so far there is no Markov chain theory for the Hopf algebra in that form, that remains to be developed, but here, at least, when you create you can simply make this a different type of molecule, and when you delete it you can let the little artifact produce some third type of molecule. This way, if you track the numbers and run all of this machinery, in the end you can indeed get a nice expression which tells you a little more about the dynamics, saying essentially that you ultimately stabilize on a stationary distribution, but also that in the limit of time going to infinity you grow linearly in the number of these create-delete events. That is the dynamics of the system. This is by no means a full theory yet; it is just a very first motivation that presumably one can also extract some interesting information from the Markov chain theory, only that, of course, these particles have very little structure. And so I return to the question: how could you approach this for the much more complex situation, where you are not describing just vertex-only graphs but actual graphs, and you hit all of this combinatorial complexity? Just to recall, the idea is pretty similar: we will mostly focus, first of all, on classifying combinatorially how rewriting steps interact. The ultimate goal would then be to understand, if you want to count for example triangle patterns that are created or deleted by such sequences, how you can give a measure of likelihood for that. And again we would like to find a way, through commutators of course, to drop some of the possible contributions which do not actually influence the count. The only real obstacle was the following: this is a perfectly valid sequence of events, but at first sight what complicates it is that one could imagine that for these diagrams the semantics would just be a pairing of subgraphs, as before we just paired vertices to vertices in the other diagrams. But here it turns out to be a little more intricate, because it seems you are pairing half-edges, and that is the real problem: how to come up with a concrete generalization. And so, after a lot of experiments, I came to formulations in categorical rewriting theory.
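A minimal sketch of the kind of computation alluded to, for a birth-death process with constant birth rate nu and per-particle death rate mu (the rates and symbols are illustrative, the talk does not fix them):

```latex
% Probability generating function P(x,t)=\sum_n p_n(t)\,x^n, with creation = multiplication by x
% and annihilation = d/dx (Bargmann--Fock style encoding of the master equation):
\[
  \partial_t P(x,t) \;=\; \nu\,(x-1)\,P(x,t) \;+\; \mu\,(1-x)\,\partial_x P(x,t).
\]
% The stationary solution is Poisson with mean \nu/\mu,
\[
  P_\infty(x)\;=\;e^{(\nu/\mu)(x-1)},
\]
% while the expected number of completed create-then-delete events in a time window [0,T]
% grows linearly in T, which matches the "linear growth" statement in the talk.
```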
This is maybe a little bit of an exotic theory, but the nice thing about it is that it is completely formal and it can immediately be implemented in algorithms. So I will tell you a little bit about it, and then afterwards show you how you can produce the analogue of these diagrams. The full generality of this theory is quite intricate, because in general you look not only at graphical structures but also at ones with additional structural constraints, for example trees. This is from a paper, joint with Jean Krivine, which should be published soon in the applied category theory journal Compositionality. I just want to show you a little part of the theory, enough to get the idea across, hopefully. Before, transformations were essentially specified as partial maps between input and output vertices; this now generalizes as follows. We take a category, which typically should have some nice properties called adhesivity, and the important thing to note is that, for example, graphs form such a category; you can have undirected graphs, multigraphs, and all sorts of variants of specifications. The main point is that you take such a category and you can formulate "partial maps", in quotation marks, as spans of monomorphisms, so spans of embeddings. In the most general framework you can put some conditions on top, which essentially constrain how you can apply these rules. And again, to make a meaningful theory (this would in general be a proper class, even for graphs, even for good categories) we typically also have to quotient by some notion of isomorphism, as we had to do in the diagram case. So this is the analogue of the little building blocks of the transitions that one can use. Unfortunately there is then a little bit of technicality: this is very general, it covers pretty much all the known cases of such transformations, but for that it is also a little bit technical. The analogue of finding an input pattern in a state X is now finding an embedding of I into X, and how you then unroll the transformation depends on your semantics: it is typically performed either in the double-pushout semantics or in the sesqui-pushout semantics. The first essentially tries to compute some notion of a complement to get this object K-bar, and the second uses a construction called the final pullback complement. In these graphical theories, a pushout is typically gluing together along a common overlap, a pullback is finding the intersection of two objects, and a pushout complement is roughly like a set complement, except that, for example, if you have a graph and you try to delete a vertex, it depends on whether there are incident edges, so there is some difference. But other than that, it is roughly the idea of applying these steps as in the graphical description. The key point is that there exists a notion, from pure rewriting theory, of how to compose two steps, how two rules interact. Very much like in the graphical language for the diagrams, you have two rules and you try to find an overlap of the output of the first rule with the input interface I2 of the second. This is again encoded as a partial overlap; you glue together along this partial overlap and then you just run the rules.
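Schematically, and with the caveat that conventions differ across the literature (many authors write rules left to right as L <- K -> R), a double-pushout step looks as follows:

```latex
% A rule is a span of monomorphisms, read here right-to-left as in the talk
% (output <- context -> input), with a match m : I -> X also a mono:
\[
  r \;=\; \bigl(\,O \xleftarrow{\;o\;} K \xrightarrow{\;i\;} I\,\bigr).
\]
% A DPO step first computes a pushout complement (deleting the image of what is in I but not in K),
% then a pushout (gluing in what is in O but not in K):
\[
\begin{array}{ccccc}
  O & \xleftarrow{\;o\;} & K & \xrightarrow{\;i\;} & I \\
  \downarrow & & \downarrow & & \downarrow{\scriptstyle m} \\
  Y & \longleftarrow & \bar{K} & \longrightarrow & X
\end{array}
\]
% with the right square a pushout (obtained via the pushout complement \bar{K}) and the left square
% a pushout.  In sesqui-pushout (SqPO) rewriting the right square is instead a final pullback complement.
```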
That is essentially what this says. Of course there are some technical details for how to do this with conditions, but I just wanted to show you that the intuitions are pretty good here, also for the graph case; a sketch of the composite rule is given below. I am now drawing the diagrams from right to left instead of bottom to top, to be more in parallel with the mathematical notation. Here I am drawing a picture where I take two vertices, link them up with an edge, and make an additional vertex and an edge; then I use one of the vertices and ask for an edge incident to it, delete that edge, and make a new edge, a new pattern. Exactly how to interpret this diagram is to say: I am encoding a sequence of events where the intermediate state is the shape you get by gluing the patterns together along the common overlap, and then you run one rule in the forward direction and the other in the backward direction. This is exactly the analogue of the diagrams you saw before in the graphical case: a sequence of events produced through the interaction of these two rules, and the overall input motif, here on the very right, is the necessary pattern you need to find in any state in order to apply the sequence. So this is the very first example of such a diagrammatic calculus. There is one complication which maybe prevented this from being useful from the start, because plain graphs are rarely very interesting: in most of these applications you want some more structured species. One very nice species, I think, which also gives good common ground with the combinatorialists, is planar rooted binary trees. If one wants to formalize these in the context of rewriting, the first step is to note that you can view them (not only trees but forests, of course, planar rooted binary forests) as a type of graph which is typed: you take the slice category over some type graph, so every edge is given one of three exclusive types, either left, right, or the bottom root edge. That in itself is a first step, but then of course you see it is not only a graph colored by these three edge types, it also has a lot of structural properties, of which I am just listing some here: for example, you never have two leaf edges directly incident, and you never have two of these root edges, and so forth. This is not very pretty; you can formalize it as conditions, but of course it complicates the story a bit. It can be done algorithmically, so at least that is a plus. So in the end, the transformations you can do on trees can in this language be formalized as rewriting rules, at the expense, it has to be said, of a heavier calculus, but in principle it is now completely formalized how to do these transformations. And now, finally, the idea is: how do you get to the combinatorics, to a calculus on these graphical rewriting steps? The idea is that, as in the pictures in the introduction, you want to reason about all possible ways of applying n different transformation steps, where each step is chosen from some finite vocabulary of possibilities, say for example vertex creation and vertex deletion. So you want to classify the ensemble of all possible trajectories, say for a fixed input state x0.
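The composition of two rules along a partial overlap, as described above, can be sketched like this (the notation is mine, not the talk's):

```latex
% Given rules r_1 = (O_1 <- K_1 -> I_1), r_2 = (O_2 <- K_2 -> I_2) and a partial overlap,
% i.e. a span of monos  mu = (O_1 <- M -> I_2), one glues O_1 and I_2 along M (a pushout),
% obtaining the intermediate "glued state" N_{21}:
\[
  N_{21} \;=\; O_1 \sqcup_{M} I_2 .
\]
% Running r_2 forward out of N_{21} (via the embedding of I_2) yields the composite output O_{21},
% and running r_1 backward out of N_{21} (via the embedding of O_1) yields the composite input I_{21}:
\[
  r_2 \,{}^{\mu}\!\circ\, r_1 \;=\; \bigl(\,O_{21} \leftarrow K_{21} \rightarrow I_{21}\,\bigr),
\]
% whose input I_{21} is the minimal pattern guaranteeing that the two-step sequence can fire.
```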
And the strategy, which is very efficient, is to first classify all possible ways in which these n steps can interact among themselves, the minimal context you need in order to fire a given transition, to apply a given sequence, and then, as a separate step, ask how many ways there are to apply the overall input of such a sequence to the state x0. This is precisely the idea of tracelets, now completely formalized in categorical rewriting theory. So the idea is that if you look again (this is now the analogue, in the graphical setting, of one of the diagrams from before) you have three steps in the shaded boxes, drawn from right to left, and the wiring diagram indicates a possible sequence of how these three steps could interact. From it you can produce this type of tracelet. The idea is that each of the wires encodes an overlap. For example, you can zoom in on the first two and compute how they interact, producing this little subsequence of events, and now you see that the overlap of the third diagram with what you just produced is a proper graph; there is no half-edge or anything like that, so you can glue together, and you essentially complete the full sequence of events that is minimally encoded in these three rules as the bottom sequence. Finally, to obtain what I call tracelets, it turns out that the only information you need to retain is just the outer hull of this diagram. It looks precisely like a sequence of transformation steps, but the specialty is that it contains only enough information to permit the sequence to occur, not everything; the sequence could be happening in a much larger context, and this is exactly where you gain something in complexity. At first sight this seems asymmetric, but one can show that it is equally possible to build up the tracelet from the same diagrammatic overlap structure by first computing how steps three and two overlap and then overlapping with step one. That is part of the calculus. So these diagrams can be completely encoded as just sequences of applications, and on those you can now do a combinatorial calculus. What is combinatorially interesting is that you are given your vocabulary, the abstraction being that the colored bars at the top are the individual rules, and you can build up all possible tracelets, say of length four, by iteratively composing your letters. In each of these steps you can of course perform analysis; this is exactly the philosophy of the diagrams, that you can reason transitively, so to speak, on compositions. I just want to briefly show how this looks in practice. A tracelet of length one is just the special case where you have a single rule; it really only needs its input, so that is the trivial case. And what I showed in pictures looks, in reality, like you are inductively building tracelets of length n plus one from a tracelet of length n and a tracelet of length one. Even in the case of trees this does not look pretty, I admit, but it can be encoded in an algorithm; it is a fully formalized structure.
One of the interesting features of these tracelets is that at the bottom of the diagram you have the sequence of steps, and so you can read off the composite effect of that sequence. In the language of Duchamp and Penson, for the Hopf algebra diagrams this was exactly the operation of evaluating the net effect, how the diagram acts in the Heisenberg–Weyl algebra; this is now generalized here as reading off the net effect of a sequence of transformations. That is nice; it is called an evaluation, and this evaluation is also compatible with composition. The analogue of diagrammatic composition is now composing these tracelets, and again the only important thing is to take an overlap of the output of one tracelet with the input of the next, so this is exactly analogous to the diagrammatic composition. There is now this operation of composing two tracelets, drawn with the notation at the bottom here, and this composition, if we follow the analogy, should be associative in some sense. And indeed it is associative, in the sense that, just as in the Duchamp–Penson case, the ways of wiring two diagrams together and then with a third are exactly in bijection with the ways of first wiring the third and the second and then the result with the first. That is a property this structure has. Now, finally, to put everything together: precisely this aforementioned characterization holds, namely that if you have a sequence of transformations, you can equivalently count all possible ways of performing them by first counting the number of ways you can compose the tracelets and then the number of ways of applying them. This now exists for all of these rewriting theories, including of course the Hopf algebra case. And the final piece of the puzzle is how to actually get from here to algebras, because we want to analyze everything using commutator relations. One thing one has to do from the start (we already had to pass to isomorphism classes of rules to even obtain a set of equivalence classes) is something similar for tracelets: everything is constructed with pushouts and pushout complements and so forth, so you have to quotient by isomorphisms. Something less trivial is that tracelets are slightly too large: normally you do not want to keep every bit of this information, in particular you do not want to keep, as in this diagram here, the information of order when two steps are completely exchangeable up to their effect. That is called shift equivalence. Finally, there is one oddity: intuitively the trivial rule should leave a transformation sequence invariant, but formally it just produces a sequence of length n plus one with some repeated parts, so you define an equivalence that simply quotients out such occurrences. If you put all of these together you finally get to the construction of the tracelet algebra: basis elements are labelled by equivalence classes of tracelets under the aforementioned equivalences, and the product is precisely, as in the Duchamp–Penson construction, the wiring together of the tracelets along overlaps in all possible ways. This is the tracelet algebra product.
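In symbols, and hedging on the exact conventions, the product and the counting identity just described take roughly this shape:

```latex
% Tracelet algebra: basis vectors \delta_T indexed by equivalence classes of tracelets
% (up to isomorphism, shift equivalence and insertion of trivial rules).  The product sums
% over all admissible overlaps mu of the output of the earlier tracelet with the input of the later one:
\[
  \delta_{T_2}\star\delta_{T_1} \;=\; \sum_{\mu} \delta_{\,T_2\,{}^{\mu}\!\circ\, T_1}\,.
\]
% Action on states: apply the whole encoded sequence along every embedding of the tracelet's
% overall input I_T into the state X,
\[
  \rho(\delta_T)\,|X\rangle \;=\; \sum_{m:\,I_T\hookrightarrow X} |T_m(X)\rangle ,
\]
% so that counting n-step trajectories from a state factorizes into (ways of composing the tracelet)
% times (ways of applying it), the analogue of evaluating diagrams in the Heisenberg--Weyl case.
```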
And you can give an action of tracelets on states, precisely by the aforementioned characterization: if you have a sequence encoded in a tracelet, you can apply the entire sequence by finding embeddings of the overall input into a state, and this gives you a representation of the algebra. The theorem here is that not only does this give rise to an associative unital algebra, but moreover this rho is indeed a representation. This means that if you want to do combinatorics on numbers, if you want to count the number of ways of applying rewriting sequences, you are free, by the bottom-right part of the equation, to partition your problem: first characterize the number of overlaps of tracelets, which is very advantageous because there you can use relations such as commutators and so forth. So that is the final outcome. I then realized, while preparing the talk, that this was already much too much information, so let me just conclude by reproducing the special case of discrete diagrams. On the left you see the elementary sequence of creating a vertex and then deleting a vertex, as it looks in a tracelet; this is the analogue of the diagram of the generator e in the combinatorial Hopf algebra. On the right you see exactly why you need shift equivalence: creating a vertex and then deleting one is, forgetting about it and taking equivalence classes, the same as, in quotation marks, first deleting and then creating, and in the tracelet language this is obtained exactly through such an equivalence. So we label the elements of the algebra by equivalence classes. And finally, the type of relations you can then derive are precisely commutation relations, and those are the key to the analysis in these combinatorial arguments. I wanted to speak about planar rooted binary trees, but I think I am out of time. It is just to say that in these calculations, which I also showed last year at CAP, one can now start to see combinatorial, simply counting, arguments for why certain commutators have the form they do. That is the main motivation for this work, but I do realize I am out of time; this will be a forthcoming paper for the beginning of next year, and one of the cases will be an explanation of why you see a certain commutator structure in these planar rooted binary tree computations. So let me conclude, to stay within the time limit. I have given you a quick tour of a new concept which I call tracelets, intended as a generalization of the Duchamp–Penson et al. construction of diagram algebras. It seems to be very useful; I am developing an implementation of this with the Z3 SMT solver, which is available online, but this is work in progress. The mathematically interesting question is maybe whether you also have a Hopf algebra structure on these tracelets, which for some cases I know one can demonstrate, but in general it is a research question. And the long-term goal of this work is to bring combinatorics also into these bio- and organo-chemical reaction systems, which through the tracelets will now boil down to enumerative combinatorics on tracelets. With this, I would like to thank you for your interest, and thanks a lot for your time.
Thank you very much, Nicolas. Are there questions? Yes, I have one question: what about directed graphs, does your construction apply to those? Yes. Sorry, I just showed undirected graphs for the sake of the diagrams. It applies to any type of graph you can formalize categorically, so even simple graphs, multigraphs, hypergraphs, attributed graphs and so forth; it was just for the pictures. Directed graphs in particular are presheaves, you can formalize them as a presheaf category, and any presheaf topos gives you an adhesive category. And in particular, for the directed graph case, in this paper, in this arXiv preprint, we even have the Hopf algebra structure. So may I ask a question? Yes. Other questions? Yes, I have a question: Nicolas, could we describe the construction of a Penrose tiling with your graphs, is that possible? Aha. It is like simple chemistry, after all, but with spatial dimensions. So the only question about this is whether you can generate your tiling by some process which only asks for local information. I am not very familiar with Penrose tilings, but if you can describe it... You have just several tile shapes and there are quite simple rules for how to glue them together. Interesting. Yes, so that sounds very much like an example one could study. Yes, and I think all quasicrystals, because they are much more interesting than crystals: crystals are of course periodic, but these are aperiodic. Yes. So one of the things I could imagine is that you might, for example, ask in such a quasicrystal what the average occurrence of a particular sub-species is. And probably with your technique it may be described in a very simple way. Yes, in particular if you have a model for how these local manipulations are fired. Of course, of course. I have a little comment on this, because all of your material is really like chemistry, there is no spatial structure, and of course in a Penrose tiling you get things like a plane. In principle there are, for example, these tiling problems equivalent to universal Turing machines: you have dominoes in two dimensions, square tiles, each side colored by some color, and then you try to build things; since it is a universal Turing machine, you can start to glue things, you get pieces of planes, so eventually you get some kind of space-like structure, an abstract story. Yes, one of the things we tried was, for example, polygons: you can have these tissue models, which are simply polygons glued together without gaps in the plane, and you can divide the polygons, insert triangles by expanding vertices and so on; this is also in the realm of these rewriting techniques. And you could use it for a problem with capsids, you know, the viruses which have icosahedral symmetry: they are surrounded by capsids, which are just the coat proteins that come together with five-fold and six-fold symmetry, and they build these capsids, and again there are very simple rules that make them glue together. Yes, the main point here is that for these types of theories, if you have a problem which you think is of this nature, it is relatively quick to check how to formalize it in terms of rewriting.
So it is essentially a matter of seeing whether you need more information than what you can express in a small local neighborhood of the rule or not. If not, there is a very good chance you can. What you get, in quotation marks, for free are, for example, commutation relations. For instance, counting a pattern is implemented as the identity transformation of that pattern, and so you can ask how many occurrences you have, more or less, before and after applying a transformation. These commutators really carry a lot of information, and it is much less information than if you were to analyze the entire structure; you get average information, so to speak. Are there questions, remarks or comments? Can I ask one little question? Has it been studied which information about these algebras is carried by some homology of the algebras, like Hochschild homology or something like that? Well, these algebras have not even been written down, so the answer is no from my side. I would not be surprised if, looking at special examples, you could; it would be more that you recognize the tracelet structure in it rather than the other way around, but I do not have a good answer, sorry. I think it is a good question, especially because last year at CAP we had this nice talk about graph homologies or cohomologies, I forget which, which could also be formalized through rewriting. So if you have a good case which you think is of this elementary, graph-like structure, I would be very interested, because I would like to play with it. Yes, from a general point of view it should be possible to say that they carry some information. Interesting. I would be very interested if you had a good example in mind, maybe. Okay. Thank you. Any other questions, remarks or comments? Oh, thank you, dear Nicolas, for this amazing and huge expansion of our paper with Pawel Blasiak and Karol Penson. And I have a small question: are your algebras graded? Do you each time have some way to count vertices or edges, so that...? Yes, exactly. The way I formalize it, they are filtered. In principle you could count the occurrences of these connected components in some systems: if you give grade two to the e pattern, which consists of two sub-diagrams, and grade one to each of the others, that would give you a grading. But the thing that generalizes is the filtration by, essentially, the cardinalities of the interfaces. Through this filtration (whenever you compose, you expose a little bit less of the interfaces to the outside world) the filtration degree decreases. It is clear that composition gives a decreasing sequence: at most you keep the same filtration degree, if the overlap is trivial, because it is simply the sum of the cardinalities of the interfaces, and otherwise it decreases. And this happens to be compatible with the coproduct. Ah, okay. Thank you very much.
Stochastic rewriting systems evolving over graph-like structures are a versatile modeling paradigm that covers in particular biochemical reaction systems. In fact, to date rewriting-based frameworks such as the Kappa platform [1] are amongst the very few known approaches to faithfully encode the enormous complexity in both molecular structures and reactions exhibited by biochemical reaction systems in living organisms. Since in practice experimental constraints permit to track only very limited information about a given reaction system (typically the concentrations of only a handful of molecules), a fundamental mathematical challenge arises: which types of information are meaningful to derive and computable from a stochastic rewriting system in view of the limited empirical data? Traditionally, the main focus of the mathematical theory of stochastic rewriting theory has been upon the derivation of ODE systems describing the evolution of averages and higher moments of pattern counts (i.e. the concentrations of molecular species). In this talk, we present an alternative approach based upon so-called tracelets [2]. The latter are the precise mathematical encoding of the heuristic notion of pathways in biochemistry. We demonstrate a novel mathematical concept of tracelet algebras and highlight a computational strategy that permits to derive structural, high level insights into the dynamics of pattern counts. In view of the focus of CAP on combinatorial aspects, we will illustrate this mathematical approach with an analysis of planar rooted binary trees in a rewriting-based formulation utilizing the Rémy generator. [1] Pierre Boutillier et al., ”The Kappa platform for rule-based modeling.”, Bioinformatics 34.13 (2018): pp. 583-592. [2] Nicolas Behr, ”Tracelets and Tracelet Analysis Of Compositional Rewriting Systems”, Electronic Proceedings in Theoretical Computer Science 323 (2020), pp. 44-71
10.5446/51256 (DOI)
Thank you for the invitation. I am very honored to be part of this event. Like Martin, I was mostly interested in SPDEs, and it turns out that if you want to deal with the generalised KPZ equation that Martin presented in the previous talks, you need to use algebra, and in particular this Connes–Kreimer type of Hopf algebra. In this talk I will go through the two renormalisations which happen in SPDEs: the one which is used for recentering monomials, and the other one which renormalises them, which is more or less equivalent to BPHZ renormalisation. I want to interpret these renormalisations in the framework of Bogoliubov-type recursions, in the sense that I want to give a precise meaning to the statement that, in the context of SPDEs, we obtain certain algebraic Birkhoff factorisations. This is joint work with Kurusch Ebrahimi-Fard. Before going to the algebraic setup, I would like to present very briefly two applications, two fields where you can actually use this tool. The first one is singular SPDEs. You start with an equation; you see an example with a generalised equation: the time derivative of u minus the Laplacian of u equals a nonlinearity depending on u and its derivatives, but also on xi, which is a space-time noise. The whole idea of regularity structures for solving these equations is to give a local description of your solutions: the solution will be locally described, around a base point x, by a sum, a Taylor-type expansion, where you sum over certain decorated trees, the combinatorial structure Martin presented in the previous lecture. Every tree is interpreted as a monomial in y minus x via some map Pi x, and it comes with a coefficient, this Upsilon tau of x; the coefficients come from the Picard iteration of the equation above, which produces a perturbative expansion from which you can read off coefficients depending on the nonlinearity f. It is a Taylor-type expansion, so I truncate, and I have a remainder which is small when y is close to x. What is very important here is that I have a map Pi x, with a base point x, and this map sends decorated trees, this combinatorial structure, to recentered iterated integrals: stochastic integrals recentered around the point x. It is like a character on my decorated trees, if you want your Feynman rule. This is the map we are interested in constructing. So Pi x is a recentering map, and if you want to have the so-called model that is used in regularity structures, you need to add another map, which depends on two points x and y; this map allows you to go from an expansion around the point x to an expansion around the point y by this transformation. The couple Pi and Gamma constitutes what we call a model in this context of singular SPDEs. It happens that in the context of SPDEs, Gamma is strongly determined by the Pi x: the recentering maps actually determine your Gamma. This was the first, let us say, renormalisation, or rather recentering, operation. Then you have a second operation, which is more familiar to you: these Pi x tau, which are stochastic iterated integrals, can be ill-defined, because you have products which are ill-defined. Of course these products are there, they are in the right-hand side of the equation, and because the noise is a distribution you have problems defining them.
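Schematically, with Upsilon, Pi and Gamma used here only as generic placeholders for the objects named in the talk, the local description reads:

```latex
% Local (Taylor-type) expansion of the solution around a base point x, truncated at order gamma:
\[
  u(y) \;=\; \sum_{\tau\,:\,\deg\tau<\gamma} \Upsilon[\tau](x)\,\bigl(\Pi_x\tau\bigr)(y) \;+\; R_\gamma(x,y),
  \qquad |R_\gamma(x,y)|\lesssim |y-x|^{\gamma},
\]
% where \Pi_x sends each decorated tree to a stochastic iterated integral recentered at x
% (a character / "Feynman rule" on trees), and the change of base point is governed by
\[
  \Pi_y \;=\; \Pi_x\circ\Gamma_{xy},
\]
% the pair (\Pi,\Gamma) being the model of the regularity structure.
```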
So this term will not be well-defined, and you need to construct it with a suitable renormalisation. For that you work with renormalisation maps, which are linear maps M on your set of decorated trees, and we do it in a way that they have a simple action on the Pi x: you just apply the renormalisation M first and then the recentering map Pi x. Obtaining this simple expression really comes from the algebraic perspective, from the interaction between the Hopf algebra which gives you the recentering map and the one which renormalises; these nice interactions allow you to write this simple formula. So there are two renormalisations at play in these SPDEs, and I would like to show how you can see them using this algebraic Birkhoff factorisation. Now I move to another application, which is more recent, in numerical analysis. Imagine you have a dispersive PDE of the following form: here we have a differential operator, and P, which is a polynomial in its argument. Now the issue does not come from a singular noise; instead you can have rough initial data, like this v. The idea is that you want to derive an efficient scheme for these dispersive equations at low regularity of the initial data. It turns out that in the numerical scheme I developed with my co-author, Katharina Schratz, an approximation of the coefficients of your solution is given by a sum over trees, as for SPDEs. You still have the same type of coefficients, depending on the initial value, and you have iterated integrals, but here not the exact iterated integrals you would construct by Picard iteration, rather an approximation of them. Here we sum over decorated trees, and because we work in Fourier space we have to incorporate Fourier coefficients inside the trees; that is why this is a different set of decorated trees. As before, we have a character Pi, which maps decorated trees to approximations of iterated integrals; that is the main character. It turns out also that in that context, from that character you can build another character using a type of algebraic Birkhoff factorisation, and this allows you to perform the local error analysis: you want to know, locally, whether your scheme approximates your solution well. These are the two applications, and now I would like to move to the algebraic framework. So we switch a bit from the applications and come to the algebraic setup. I start by introducing what I mean by a decorated tree. We have first the set of non-planar rooted trees; then we have a finite set of labels, this calligraphic L; and then the set of decorations, which will be L times N to the power of d plus one. What is the interpretation of these decorations? This finite set, for instance, parameterizes the differential operators: if you have a system of equations, the labels will be associated with propagators, if you want to encode the propagators, and the N to the power of d plus one are the derivatives that you can put on your propagators.
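The "simple action" of the renormalisation map on the model, as stated, is just pre-composition; a minimal sketch, with the numerical-analysis remark added only as an analogy:

```latex
% M is a linear (renormalisation) map on decorated trees; the renormalised model is obtained by
% applying M before the recentering character:
\[
  \Pi_x^{M}\,\tau \;=\; \Pi_x\bigl(M\tau\bigr).
\]
% The renormalised local expansion of u is then the same sum as before with \Pi_x replaced by \Pi_x^M.
% In the numerical-analysis application the analogous character sends decorated trees (carrying
% Fourier data) to low-regularity approximations of the iterated oscillatory integrals.
```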
So you want to keep the propagator and the derivatives as separate pieces of data. We now consider decorated trees over this set of decorations, and they are all of the following form. I have a node decoration, which is a map taking values in N to the power of d plus one, meaning that I allow classical monomials, like X to some power, inside my iterated integrals; and I have an edge decoration, which encodes the propagators and their derivatives, a map from the edges of the tree into the set of decorations. I take the linear span of these decorated trees and call it H. I need a product: the product we use, when talking about characters, is the tree product between decorated trees. At the level of non-planar rooted trees it just joins the roots of the two trees, and then you take the sum of the decorations at the root. Below you see an example of the multiplication: here I have two decorated trees, the node decorations are written at the nodes, the edge decoration is this d, and when I take the tree product you glue the two roots together and, on the way, you add the decorations, so the l is added to the m. That is the tree product we use. So this is the set of decorated trees. Now there is a symbolic notation, which was introduced by Martin Hairer in his foundational paper on regularity structures: you represent decorated trees with symbols. You have a grafting operator, this calligraphic I with subscript a: you take a decorated tree and you graft it onto a new edge decorated by a at the root; this is similar to the B-plus operator that we use for the Connes–Kreimer Hopf algebra. And if I have a single node decorated by k, I identify it with the monomial X to the power of k. Now, if I want to represent the tree in the example using these symbols, I get the following expression; let me go through it slowly. At the root you have m, which gives you X to the power of m. Then for the first branch I have a calligraphic I with label a, the decoration on the edge, and on top I have this n, so this is I a of X to the power of n. Then for the other branch it is a calligraphic I with label b, X to the power of p, and so on. Of course the product is commutative, so I could have written it in different ways. So we have decorated trees, we have a nice B-plus-type operator, and we are using this symbolic notation; now I can give you the expression of a Connes–Kreimer-type coproduct, which will appear on the next slide. Before that, I need to say that we consider trees with some truncation, in the sense that we do not consider all trees; sometimes we only want to consider the trees which actually give ill-defined iterated integrals, so we need a way to truncate, and for that one defines the degree of a tree. This is related to the analytical side of the SPDEs: you have one assignment defined on the propagators, which basically tells you what regularity you gain by convolving with, for instance, the heat kernel or other kernels.
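A small worked example of the symbolic notation (the labels a, b, c and the exponents are illustrative, not taken from the slides):

```latex
% A decorated tree with root decoration m and two planted branches, written symbolically:
\[
  \tau \;=\; X^{m}\,\mathcal{I}_{a}\!\bigl(X^{n}\bigr)\,\mathcal{I}_{b}\!\bigl(X^{p}\,\mathcal{I}_{c}(X^{q})\bigr),
\]
% where X^{k} stands for a single node decorated by k and \mathcal{I}_{a}(\cdot) grafts its argument
% onto a new edge with decoration a (a B_+-type operator).  The tree product joins roots and adds
% the root decorations:
\[
  \bigl(X^{m}\,\mathcal{I}_{a}(X^{n})\bigr)\cdot\bigl(X^{l}\,\mathcal{I}_{b}(X^{p})\bigr)
  \;=\; X^{m+l}\,\mathcal{I}_{a}(X^{n})\,\mathcal{I}_{b}(X^{p}).
\]
```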
Then you have a second map, on the polynomials, which depends on which weight you put on the polynomial decorations, that is, which Hölder spaces you want to use. Often in SPDEs we use the parabolic scaling (2,1,...,1), in the sense that time counts double in comparison to the spatial components. The degree of a tree is then obtained by taking the sum over all the node decorations (the polynomial ones, which increase the degree because they add regularity), plus the sum over all the propagators of what you gain by convolving with the kernel, minus the derivatives that appear on the propagator. And then, because we want things to be finite, we also introduce the space of decorated trees of positive degree, which are of the following form, in brackets because you will see why in a minute: trees of that form where I ask that all of the branches exiting the root are of positive degree. This space comes with a nice projection, this pi plus, and this pi plus projection is multiplicative for the tree product presented before. This is the space which is useful, for instance, for the recentering. And now I come to the main map, a deformed Connes–Kreimer coproduct, given above. It looks very similar to the Connes–Kreimer coproduct, in the sense that the term with the calligraphic I gives you the same expression you would obtain with a B-plus operator. But there are some subtleties which make the study a bit more involved. Here I play with my polynomials, in the sense that these monomials are primitive elements, and I play with the decorations: I have a sum over l in N to the power of d plus one, and this sum has to be understood as performing, at the level of the algebra, a Taylor expansion. I take derivatives on my branch, on my kernel, and then X to the power of l divided by l factorial are the classical monomials that you see in a Taylor expansion. In fact, and this was not done in the original paper on regularity structures, we did it later with Martin Hairer and Lorenzo Zambotti, you can consider this coproduct with an infinite sum; you can keep the infinite sum, but to give it a meaning we use a bigrading. In this bigrading you of course measure the size of the tree, that is one component, but you also have to take into account, for instance, the edge decorations, because when you perform that sum you increase the edge decoration by putting these derivatives. So you can give a meaning to this coproduct with the infinite sum using the bigrading. Now I am saying that this coproduct is deformed; this is recent work with Dominique Manchon, and you can really identify where the deformation happens, the deformation being the fact that you add all these extra terms with the X to the power of l tensored with the derivatives acting on your planted tree. This is actually quite involved; it is not straightforward, and we need to use another product, which is not exactly the grafting of one tree onto another but the plugging of a tree into another.
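For orientation, here is a schematic version of the degree and of the deformed coproduct just described; the precise decorations, truncations and conventions in the papers are more involved, so this only records the shape of the formulas:

```latex
% Degree of a decorated tree, with scaled sizes |.|_s and a regularity assignment |t(e)| per kernel type:
\[
  \deg\tau \;=\; \sum_{v\in N_\tau} |\mathfrak{n}(v)|_{s}
  \;+\; \sum_{e\in E_\tau}\bigl(|\mathfrak{t}(e)| - |\mathfrak{e}(e)|_{s}\bigr).
\]
% Deformed (Butcher--)Connes--Kreimer coproduct on planted trees: the B_+-like term plus the
% Taylor-expansion terms X^{\ell}/\ell!, tensored against the kernel differentiated \ell times
% (edge decoration shifted by \ell):
\[
  \Delta\,\mathcal{I}_{a}(\tau) \;=\; \bigl(\mathcal{I}_{a}\otimes\mathrm{id}\bigr)\Delta\tau
  \;+\; \sum_{\ell\in\mathbb{N}^{d+1}} \frac{X^{\ell}}{\ell!}\otimes\mathcal{I}_{a+\ell}(\tau),
\]
% with the infinite sum made meaningful by the bigrading, and the truncated versions
% (coaction / coproduct used in practice) obtained by inserting the projection \pi_+ and degree cutoffs.
```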
Starting from this plugging product, you can deform it using these Taylor expansions and apply an algebraic procedure, like the Guin-Oudom construction, which gives you an associative product, and the adjoint of this associative product is this coproduct. So it is the same procedure as when you start with a pre-Lie product, construct an associative product, and then dualize to get your coproduct. This explains in what sense this coproduct is a deformation of the original Connes-Kreimer coproduct. Okay, so these were just comments on this map when you keep the infinite sum. What happens when you look at applications in SPDEs or in numerical analysis is that you will not use this infinite sum; you will truncate, because of constraints coming from your application. The first truncation gives the coaction: basically, I put a projection here, my pi plus, because I do not want the tree on that side to be of negative degree. By putting these derivatives the degree goes down, so at some point I have to stop; the sum is finite, and the truncation is determined by the degree. And if I want a coproduct (because this was just the coaction, since here I am still on H), I need to project also on the trunk; on the trunk I also put this projection pi plus, and this gives you a coproduct. I will not go into details, but if you think about the application I mentioned in numerical analysis, with these expansions in terms of trees, the projections replacing pi plus will be different: in numerical analysis you are interested in an approximation up to order r, so you will truncate by removing, for instance, all the trees which are too big, because they are of order r. It is a different projection. This is just to tell you that, depending on the context, you can play with this coproduct and adapt the projections to fit your needs; the constraints you have for singular SPDEs are not the same as for a numerical analysis scheme. That is why this map is very central: you can really tailor it to your own application. Okay, so now I will present the main results in SPDEs regarding this coproduct. You are able to construct a comodule structure with it: the map presented here, the one where you project on the right, where you make your expansion finite, makes H a right comodule over this Hopf algebra H plus of trees of positive degree. And it turns out that one of the most important maps in SPDEs, the one used for recentering your objects in a recursive way, is a map called the twisted antipode. You start in H plus, but then you exit and you go into H. The definition of this twisted antipode also looks quite similar to what would be the definition of an antipode, but then you see that you have a sum, the same type of sum that shows up at the level of the coproduct, you have some truncation according to the degree, and you apply all this on what would normally be the formula of your antipode. And the idea, which is what we claimed at the time with Martin and Lorenzo, is that this twisted antipode gives you an equivalent of the algebraic Birkhoff factorization.
In the next slides I would like to make this statement more precise and really try to match the two languages: the one you use with this twisted antipode, and the one you would use if you develop the classical algebraic Birkhoff factorization, or Bogoliubov recursion. So here I go through what I will consider as my algebraic Birkhoff factorization and see how to apply it in the context of this twisted antipode. I start with a connected graded Hopf algebra, call it H-hat, with a coproduct delta. I have a right comodule structure on H over H-hat, given by this coaction. I consider characters over H and also over H-hat, and they take values in some commutative algebra A. I assume that I have a Rota-Baxter map Q from A into A, and this Rota-Baxter map needs to satisfy this identity; this is the identity for a Rota-Baxter map. The Rota-Baxter map induces a splitting of A into two subalgebras: one is A minus, when you apply Q, and the other is A plus, when I apply the identity minus Q. This is the analogue of what you do with Laurent series in epsilon, where you just remove the pole part: that corresponds to this type of projection given by a Rota-Baxter map. So these are the objects at play, and now I can state what I mean by a Bogoliubov-type recursion. We pick a character phi from H into A. Then there is a unique algebra morphism phi minus, from the Hopf algebra into A minus, which is basically what is called in the literature the counterterm, and a phi plus, the renormalized character from H into A, such that the following holds for every element of H. Here, I think I forgot to say, I have an injection from H into H-hat. So I pick an element tau of H and apply a preparation map, called phi bar; then I apply Q and take the minus sign to get my phi minus, and phi bar itself is constructed by a recursion, a recursion where I use a Sweedler-type notation for the reduced coaction. This reduced coaction is defined here: I just remove the primitive part of my coaction. So I do the recursion with phi and phi minus, and this defines phi bar and phi minus; and if I want to get the renormalized character, I just take the convolution between phi and phi minus, where for the convolution I am using the coaction. Basically, this is why it is called a Bogoliubov-type recursion: here I am using a coaction; if I replace my coaction by a coproduct, I get an algebraic Birkhoff factorization, depending on your definition of that. So the recursion is built on this reduced coaction, and what is so important is to have this Rota-Baxter map, this Q map, which does the projection. Okay, so this is what I am calling the Bogoliubov-type recursion, and it turns out that it fits well at least the two applications I presented. For instance, when I was talking about the renormalisation of the model, meaning that you want to recenter your stochastic processes, you need to extend things a bit, but you can think of Q as the expectation; and in numerical analysis Q projects according to the frequencies: for instance, if you have zero frequencies, then you want to remove those terms, so Q projects according to that.
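Schematically, and up to conventions that the slides make precise, the ingredients of this Bogoliubov-type recursion look as follows (the names match the objects described above; A is commutative, iota is the injection into the Hopf algebra, and the sum is over the reduced coaction in Sweedler-type notation):
\[
Q(a)\,Q(b) \;=\; Q\big(Q(a)\,b + a\,Q(b) - ab\big), \qquad A = A_- \oplus A_+,\quad A_- = Q(A),\ A_+ = (\mathrm{id}-Q)(A),
\]
\[
\bar\varphi(\tau) \;=\; \varphi(\tau) + \sum \varphi\big(\tau^{(1)}\big)\,\varphi^-\big(\tau^{(2)}\big), \qquad
\varphi^-(\iota\tau) \;=\; -\,Q\big(\bar\varphi(\tau)\big), \qquad
\varphi^+ \;=\; m_A(\varphi \otimes \varphi^-)\circ\delta,
\]
with phi minus extended multiplicatively and phi plus given by the convolution through the full coaction, as stated in the talk.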
In that context it works well, because for the renormalisation of the models it is an extraction-contraction Hopf algebra, which is connected, and in numerical analysis the Hopf algebra is also connected, so you can apply almost directly the Bogoliubov-type recursion that I presented before. Now, it turns out that the Hopf algebra I presented, the one of trees of positive degree, H plus, is unfortunately not connected, because I have these X to the power of k, all these monomials, and one needs to find what the reduced coaction should be in that case: it is not only about subtracting the primitive part, and one has to understand what to do instead. Another remark, which I will not expand on, is that you have these cointeractions between the two renormalisations, one for recentering and one for renormalisation, and we see that they correspond to two twisted antipodes and two Bogoliubov recursions; in a sense you have a cointeraction between the two Bogoliubov recursions, meaning that you can start by applying one Bogoliubov recursion, produce a character, and then from that character apply the second Bogoliubov recursion, and the cointeraction tells you that you can switch between the two. So this also gives a nice interpretation of these cointeraction properties. For the rest of the talk I will go to the complicated example, the one where you are not connected, and explain how to deal with this problem in this application. We need to find a suitable candidate for the reduced coaction, and then we will be able to perform the recursion. The reduced coproduct would normally just remove the primitive part, but in our case, because we have these polynomials, we need to subtract a bit more. So this reduced coaction is given as follows: the term tau tensor one, and then I subtract this big part. Basically, what this big part says is that I put all the branches, all the parts of the branches of my trees, on the right-hand side of the coproduct, and then I have all the play with the decorations: I can extract some X to the power of k coming from the node decorations, and I put the sum over the l_i, which are the potential derivatives that I can put on the different branches, with the combinatorial factors here. So I want to subtract all of this part, remove all these combinatorial terms, which makes sense, because these terms are produced by the deformation, and they are exactly the terms you want to remove when you consider a reduced map. One justification of this choice is that, at least on those terms which were the reason your Hopf algebra is not connected, the reduced map is zero: it has been designed in such a way that it vanishes on X to the power of k, the dot with a k decoration. So now we are equipped with this reduced coaction, and what I want to do next is introduce the splitting and also introduce the Rota-Baxter map.
For the splitting, we work as with the smooth models mentioned before: you replace your singular noise by a smooth approximation. So here we are looking at smooth functions from R to the power of d plus one into R, and what we do is fix a base point x in R to the power of d plus one. You get the following splitting, where A plus x contains the functions that vanish at x, and A minus x contains the polynomials whose coefficients are functions of x. Then you have the natural Rota-Baxter map given below: I pick a smooth function and take its Taylor jet in powers of y minus x, with some truncation; I just truncate, and this truncation reflects what happens in the algebra, where we truncate the Taylor expansions according to the degree. So this is the Taylor jet operator, and it is actually well known, though I do not know exactly where it first appears in the literature, that you have this Rota-Baxter identity playing with the parameters alpha and beta; and I think it has been understood in the literature that you can extend the result on the Bogoliubov recursion with a Rota-Baxter identity to a family of Rota-Baxter maps. So this gives you a family of Rota-Baxter maps satisfying these Rota-Baxter identities. Now the framework is in place: we have the splitting of the space, we have the Rota-Baxter identities, we have our reduced coaction; all the tools are there for formulating the Bogoliubov recursion. So here we consider a family of characters from H into A, the algebra of smooth functions, and these characters are indexed by a base point, this x bar. We will see it in the next slides, but you have to understand this x bar as some a priori recentering: you know that inside these trees I have these monomials X to the power of k, and analytically I can interpret them just as polynomial functions, or I can say that maybe I want an a priori recentering of these polynomials; I recenter them around a point x bar, and then I see whether I can get some invariance in that point x bar.
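To make the Rota-Baxter property of these Taylor jets concrete, here is a sketch with a single truncation parameter (the scaled degree and the notation T for the jet are used for illustration only):
\[
\big(T^{\alpha}_{x} f\big)(y) \;=\; \sum_{|\ell|_{\mathfrak{s}} \le \alpha} \frac{(y-x)^{\ell}}{\ell!}\, \big(D^{\ell} f\big)(x),
\]
\[
T^{\alpha}_{x}(f)\; T^{\beta}_{x}(g) \;=\; T^{\alpha+\beta}_{x}\Big( T^{\alpha}_{x}(f)\, g \;+\; f\, T^{\beta}_{x}(g) \;-\; f\,g \Big),
\]
which is the family version of the Rota-Baxter identity written abstractly for Q in the general framework: the image of the jet consists of polynomials centered at x (the A minus x piece of the splitting), while the identity minus the jet produces functions vanishing at x to the appropriate order (the A plus x piece).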
So, as I said, I have a family of characters depending on this point x bar, and then I can set up the Bogoliubov recursion exactly the same way as in my main definition, but now using my Taylor jet operator. I always have my preparation map, but now it depends on more points: I have the x bar, which is the a priori recentering, and I have x and y. This gives me the recursion in terms of phi and phi minus, and the sum here is the Sweedler-type notation for my reduced coaction. If I want to compute my phi minus, I need to subtract: I apply my family of Rota-Baxter maps, and this is where the degree comes in, since the parameter of my Taylor jet is determined by the degree of the tree, the degree of tau. This matches the properties you see in the coproduct when you truncate according to the degree. Then I take the convolution product between phi and phi minus to get my phi plus. So it looks familiar, close to the Bogoliubov recursion, but one needs to be cautious because I have introduced several base points: one base point is this x bar, which I hope to get rid of at some point, because I want to have some invariance property; and there is another base point, this x, which is the point around which I want my recentering, my renormalisation, to be performed. So this gives you this nice Bogoliubov recursion. Now, what can you do with it? We construct these objects, and then you need some assumptions; you cannot take arbitrary characters depending on x bar, which is quite reassuring, because if you look at the assumptions you need when you try to design a model in regularity structures, you also need assumptions on the characters. One assumption, which is very natural, says that this x bar really is a recentering of your monomials: the character on X_i gives y_i minus x bar_i. Then you have another requirement: when you tweak the decoration on an edge, when you Taylor-expand it in the algebra, this should correspond to derivatives in the analytical counterpart; so when you add to the decorations, this amounts to taking a derivative of your character. These are very natural assumptions, and they give you an interpretation of these decorations.
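In symbols, the two natural assumptions just described can be summarized roughly as follows (phi with superscript x bar denotes the character indexed by the a priori base point, taking values in smooth functions of y; the exact formulation in the paper may differ):
\[
\varphi^{\bar x}(X_i)(y) \;=\; y_i - \bar x_i,
\qquad
\varphi^{\bar x}\big(\mathcal{I}_{a+\ell}(\tau)\big) \;=\; D^{\ell}\, \varphi^{\bar x}\big(\mathcal{I}_{a}(\tau)\big),
\]
that is, x bar recenters the monomials, and shifting an edge decoration by l corresponds to applying l derivatives on the analytical side.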
If you have these two assumptions, then, using the Rota-Baxter identity you have seen before, you can prove what you would obtain in the classical Bogoliubov recursion: this phi minus is a character from H plus into A minus x, and phi plus is a character from H into A. And, a very nice result, we also show that this phi minus is actually obtained from phi by applying the twisted antipode. So this is what you see in regularity structures, what we claim to be the algebraic Birkhoff factorization: the Bogoliubov recursion really matches the recursion that you perform with this algebraic object, the twisted antipode. You can go a bit further: you also get a nice expression for phi plus via the preparation map, being careful that this map is multiplicative only when evaluated on planted trees. So from this recursion, starting from this family of characters, you can show the character properties you expect for phi minus and phi plus, and then you recover the expression of the twisted antipode. Okay, so now maybe we want to get rid of the x bar. For that you ask for some invariance property, and you need a bit more assumptions: you not only need to interpret the decorations on the edges as derivatives, you also need to make apparent what your kernels are. For instance, here I pair the two indices of the edge decoration, because one gives me a kernel and the second one gives me the derivatives on that kernel, and here I put the convolution with the character applied. So you need another prescription, what your Feynman rule should be: you need the convolution structure when you are on a planted tree. If you add this assumption on top of the two previous ones, you can show some x bar invariance, in the sense that for phi plus you can actually remove x bar: it does not depend on x bar. Then you have some invariance for the Bogoliubov preparation map phi bar and for phi minus, but they are weaker: phi bar is x bar invariant on planted trees, and phi minus is x bar invariant on trees with zero node decorations. So with these restrictions you do not get the full independence of x bar, but for phi plus you get it. Now I would like to finish my talk with a final remark. If you use mostly the language of regularity structures, what we were considering is a character, which I denote by this boldface Pi, indexed by this point x bar; and normally, in the context of the work we did with Lorenzo and Martin, what you do is start with this character, apply the twisted antipode, and then express things as a convolution with your delta plus, and you construct the map Pi x that one needs at the beginning to construct the recentered objects. We show in the process that this map is actually independent of x bar, and this gives the model, I mean the Pi x, which determine this map. And finally, the maps used for constructing the model are in one-to-one correspondence with the maps you see in the Bogoliubov recursion: the phi plus gives you the Pi x, the counterterms are given by the phi minus, and you get the x bar invariance for the model. So this really matches the two languages. The conclusion of this work is that the algebraic Birkhoff factorization, or Bogoliubov recursion, really shows up in the framework of SPDEs, and one can use this language to reinterpret the fundamental objects at play in regularity structures and all the renormalisations. And it is not limited to stochastic partial differential equations: it can even be seen at the level of PDEs, when one has to deal with numerical schemes. So these are very nice applications. When I started this research I was interested mostly in PDEs and SPDEs, and it was really unexpected, but very nice for me, to be in touch with these tools, this Connes-Kreimer coproduct, and little by little I realized that it is actually a central object both in SPDEs and maybe next in these numerical schemes for PDEs. So thank you for your attention. Thank you, Yvain, for your nice talk. Are there other questions for Yvain from the audience? Yes, I would have a
question about these two Bogoliubov recursions in cointeraction: do you have a general framework for this? Do you need only, I mean, a comodule bialgebra and characters with values in a Rota-Baxter algebra, for example? What you need is something like cointeracting bialgebras, which give you two groups of characters, and you can take the semidirect product of them; and then within that you can actually write a Bogoliubov recursion on this semidirect product of the groups, and you can try to first do the recentering and then the renormalisation, or you can switch. This has not really been formalized; it was more a general idea, an observation we made in the paper, and maybe one can try to push this formalism, this framework, on the cointeractions. Yeah, looks very nice. Thank you. I have a question, even though I am surprised that I am asking this kind of question, going back from combinatorics to analytics. You know, in physics one of the consequences of the coproduct and these recursions is that it has some influence on the analytic behavior of the functions, right? There is something like a leading-log expansion, or an anomalous dimension. So, by studying these combinatorics, can one learn something about features of the actual solutions that underlie the procedure being described? I was just wondering, maybe also addressed to Martin: in these two talks we have on one hand the analytic side, on the other hand the combinatorial side, but is there any feedback? In the sense that so far we have seen that you use this to define what the well-defined, renormalized solution to these kinds of equations is, but is there anything beyond that? Can you learn something about the behavior when things become singular, when arguments of these objects come close together, or anything like that? I am just curious if that makes sense. So you are asking whether one can infer from the algebraic part some analytical properties of the solutions? I mean, the simplest example in field theory is this thing called the leading-log expansion. Yes: if you have a Green's function, suppose it just depends on the momentum, and you want to know how it behaves as a function of the momentum; if you order it by powers of the momentum, then you get corrections by logs of the momentum, and you can, for example, tell that the highest power of the log, the biggest correction at each order, is related to some simple coproduct of some simple graphs that enter your combinatorial structure, and there is a way to order everything in terms of these kinds of corrections and logs. So I am just curious whether anything like that also exists in these SPDE applications, though I have no idea what the analytic question to be asked would be; I am just curious if there is anything. I mean, as practitioners, you have some problems, you have these analytical features, and you want to encode them at the level of the algebra, because you want to organize your computations nicely and to be able to derive general results. But I think I have seen recently, maybe in the work of Martin on the support theorem, the use of some equations on the characters to infer some analytical properties; maybe Mark could comment. This, I think, gives nice perspectives from the algebraic part: he was even able to show that what is given by certain constants
corresponds to some Hopf ideal in your Hopf algebra. But it seems to be a huge effort, trying to connect what should normally be analytical properties and to see them at the level of the algebra; it is really a challenge, at least for us, I think for everyone. But the kind of result is, for example, that you can relate the coefficient of a log. I mean, it does not tell you what the coefficient is, so it does not really do the analysis for you, but it does tell you that a certain coefficient of a log is the same as another coefficient somewhere else. So it tells you how some information gets redistributed due to the nature of the subtractions that underlie the definitions of things. But I am sorry for the super vague question. I actually might have a question: do sub-Hopf algebras play a role for you? For example, rooted trees which have at most k outgoing branches at every vertex; if k is one, these are just linear rooted trees which never split, so to speak. Do such sub-Hopf algebras play a role for you? So you just have linear trees, with no branching at all, that would be the simplest one; or you say, I just take rooted trees which always have at most two outgoing branches. Two branches, whatever you want to take; anyway, would such sub-Hopf algebras play a particular role for you? Because this is closed under the coproduct: such trees go into such trees tensor such trees, and maybe there is something to be learned from that. No, for some reason I do not see what; I mean, when we construct these regularity structures, we have this notion of rule, right, that sort of... yeah, that essentially gives you... Right, because in these regularity structures you naturally do not have all possible trees showing up; you only have some, because the trees are really more like the Feynman diagrams. In a Feynman diagram you fix the interactions yourself, so you give yourself a number of types of nodes, somehow. Here the analogue of the Feynman diagrams would themselves be trees already, and the analogue of that is that you give yourself rules about how things can come together; and then these sort of Taylor expansions, the ones that look kind of like the Connes-Kreimer Hopf algebra, also with these sorts of cuts, would in our case also consist only of trees that have a certain structure. So there would not be all possible trees, but only the ones allowed by the degree of your nonlinearity and these kinds of things. Okay, thanks. Thank you. If there are no further questions, then let's thank Yvain again, and all the speakers of today.
Hairer's regularity structures transformed the solution theory of singular stochastic partial differential equations. The notions of positive and negative renormalisation are central and the intricate interplay between these two renormalisation procedures is captured through the combination of cointeracting bialgebras and an algebraic Birkhoff-type decomposition of bialgebra morphisms. We will revisit the latter by defining Bogoliubov-type recursions similar to Connes and Kreimer's formulation of BPHZ renormalisation.
10.5446/51259 (DOI)
Okay, so I'm Andrei Davydychev, and I'm very glad to have the possibility to take part in this conference in honor of Dirk. My presentation is on the geometrical splitting and reduction of N-point Feynman diagrams. I want to start with some very simple slides describing what a Feynman diagram is. As an example we can use quantum electrodynamics, where electrons interact via photons. The elementary interaction vertex is an electron emitting a photon, and we can organize an interaction between two electrons using a photon. We can also have more complicated examples of interactions, with more photons, and some of the photons go from one electron line to another electron line; in this way we get so-called closed cycles, or loops. One example is the triangle diagram, and another example is the four-point function, the box diagram. Now, when we calculate Feynman diagrams, we use the so-called Feynman rules. Each vertex can carry indices, Lorentz indices, spinor structure; it is proportional to the coupling constant and may depend on the momenta of the particles. Each line connecting two interaction vertices is a so-called propagator, and it has this form, with the four-dimensional momentum in the denominator, which consists of the energy and the three-dimensional momentum; this is like a Green function for the d'Alembert equation. And each closed loop implies an integration over the four-dimensional momentum k flowing around this loop. As a result of this integration we may get divergences, so we need some kind of regularization to handle these divergent quantities, and one of the most common tools used in loop calculations is dimensional regularization. The idea is to use the space-time dimension, which I denote by a lower-case n, as a regulator: we introduce a small epsilon into this dimension, close to zero, so we change the four-dimensional integration into an n-dimensional integration. Then, if the integrals are singular, the singularities appear as one-over-epsilon poles. This is the simple example of the tadpole diagram: nu denotes the power of the propagator, and when nu is equal to one or two we have divergences; we get a gamma function of epsilon, which gives us the one-over-epsilon pole. Now, I am going to speak about a somewhat more general case, the one-loop N-point function. It looks like this: we have capital N external legs, and in physics this may correspond to processes with, say, m particles going to N minus m particles. In general, when we consider arbitrary external momenta and arbitrary internal masses, such an integral, even the scalar integral, depends on N(N-1)/2 momentum invariants of this form and on N internal masses, so the number of variables grows quadratically with the number of external legs. And this is the definition of the scalar Feynman integral corresponding to the N-point function: it has N denominators like this, and we integrate over the loop momentum, which is denoted by q in this case. Now, coming back to this birthday: how Moscow, Hobart and Mainz got connected. The story begins in the early 90s. At that point I lived in Moscow and worked on such N-point diagrams using the Mellin-Barnes approach, hypergeometric functions, etc.
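To fix notation for the scalar one-loop N-point integral just described (a sketch: the metric signature, the i-epsilon prescription and the overall normalization are left implicit), one can write
\[
I^{(N)}\big(n;\,\{k_{jl}^2\},\,\{m_i\}\big) \;=\; \int \mathrm{d}^n q\; \prod_{i=1}^{N} \frac{1}{\big[(q+p_i)^2 - m_i^2\big]^{\nu_i}},
\qquad k_{jl} \;=\; p_j - p_l,
\]
with all powers nu_i equal to one for the master integrals; the result depends on the N(N-1)/2 momentum invariants k_{jl}^2 and on the N internal masses.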
Dirk, meanwhile, worked in Mainz, also known as Mayence, and studied similar Feynman diagrams, also using hypergeometric functions, but he was using Carlson's R functions. In 1992 Dirk invited me to give a seminar at the University of Mainz, and if I'm not mistaken this was our first meeting. Later on, for two years, I was a postdoc in Bergen, Norway, and Dirk went to Hobart in Tasmania; of course, many of you know that Australia is called Down Under, and Tasmania is under Down Under. There he worked with Bob Delbourgo. Then Dirk returned to Europe, and the next year, in 1996, I was able to continue the research work with Bob, basically within the same project. This is how our work on a geometrical approach to Feynman diagrams was started. I also spent four years in Mainz, and of course we had a lot of useful communications with Dirk. So this is how this presentation is related to Dirk, and I basically have another three-point function here: that's how Moscow, Hobart and Mainz got connected through Dirk. Now, back to the integral. Let's consider this N-point integral with unit powers of the propagators (normally these are the master integrals that we need) and use the Feynman parametric representation. We get an N-fold integral with one delta function, where the alphas are the Feynman parameters. In the standard Feynman parametric representation we have a quadratic denominator, containing a quadratic part multiplied by the momentum invariants and a linear part multiplied by the squared masses. But using the condition that the sum of the alphas equals one, due to the delta function, we can make the quadratic form homogeneous in the alphas: we multiply the linear part by the sum of the alphas. Then we can rewrite it in a slightly different form, where we get these quantities c_jl. Each c_jl is defined as the sum of the two masses squared, minus the corresponding momentum invariant, divided by the product of the masses, and they can be associated with cosines of certain angles. When this cosine is equal to one, this corresponds to the two-particle pseudo-threshold, and when it is equal to minus one, this corresponds to the two-particle threshold. Of course, a direct geometrical interpretation through these angles and their cosines is possible when the corresponding cosines are between minus one and one; in this case the angles tau are real. In other regions we just consider the analytic continuation, and instead of the trigonometric cosine we get the hyperbolic cosine. A simple example is the two-point function: with the two-point function, with external momentum k_12 and internal masses m_1 and m_2, we can associate a triangle. This triangle has sides m_1 and m_2, and the third side is the absolute value of the momentum. We also use the perpendicular, which we call m_0; using this perpendicular we split our triangle into two right-angled triangles. Just to remind you, the cosine of this angle has this form; of course, it's just a trigonometric identity. And again I repeat: at the pseudo-threshold the sides m_1 and m_2 point in the same direction, so the angle tau is equal to zero, and at the threshold they point in opposite directions, so the angle tau is equal to pi. So this is the geometrical picture associated with this simple diagram.
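Going back to the parametric representation described a moment ago, the homogenized quadratic form and the cosines c_jl can be written, up to overall factors and the i-epsilon prescription, as
\[
I^{(N)}(n) \;\propto\; \Gamma\!\Big(N-\tfrac{n}{2}\Big) \int_0^1 \prod_{i=1}^N \mathrm{d}\alpha_i\; \delta\Big(1-\sum_i \alpha_i\Big)
\Big[\, \sum_{j<l} \alpha_j \alpha_l\, k_{jl}^2 \;-\; \Big(\sum_i \alpha_i\Big)\Big(\sum_i \alpha_i m_i^2\Big) \Big]^{\,n/2-N},
\]
\[
\sum_{j<l} \alpha_j \alpha_l\, k_{jl}^2 - \Big(\sum_i \alpha_i\Big)\Big(\sum_i \alpha_i m_i^2\Big)
\;=\; -\sum_{j,l=1}^{N} \alpha_j \alpha_l\, m_j m_l\, c_{jl},
\qquad
c_{jl} \;=\; \frac{m_j^2 + m_l^2 - k_{jl}^2}{2\, m_j m_l},\quad c_{jj}=1,
\]
so that c_jl = cos(tau_jl) equals +1 at the pseudo-threshold k_{jl}^2 = (m_j - m_l)^2 and -1 at the threshold k_{jl}^2 = (m_j + m_l)^2.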
For the three-point function the situation is a little more complicated. We have three independent external momentum invariants and three masses, and the geometrical picture gives us a three-dimensional tetrahedron. Three sides of this tetrahedron are associated with the masses m_1, m_2, m_3, and the three other sides, shown in red, are associated with the absolute values of the three external momenta. Here we can also drop a perpendicular from this point onto the triangle made out of the momentum invariants, and we get three angles; the definition of the cosines of these angles is given here. If we go to the four-point function, we get a simplex, which should be understood in four dimensions, so the slide requires four-dimensional imagination. We have four masses m_1, m_2, m_3, m_4 (the diagram itself is shown here), and we have four external momenta but six external momentum invariants, because we also get the Mandelstam variables s and t. So in the simplex we have four mass sides and six momentum invariants, and what is shown in red is in fact a three-dimensional tetrahedron, but the picture itself is four-dimensional, because we also have these mass sides. The important quantities here are the Gram determinant, the determinant of the matrix of these cosines, which I call D_4, and the Gram determinant corresponding to the red tetrahedron, which I call Lambda_4. The hypervolume of the simplex is related to the square root of D_4, and the volume of the red tetrahedron is related to the square root of Lambda_4, which is also a Gram determinant. If we drop a perpendicular from this point onto the red tetrahedron, the length of this perpendicular, m_0, is proportional to the square root of D_4 over Lambda_4. Now, using these geometrical pictures, we try to go from the Feynman parameters to the geometrical picture, and on the way we make some substitutions of variables, linear and quadratic. In fact, what we do is change the argument of the delta function, so that it now contains the quadratic structure which we used to have in the denominator, and this quadratic structure is equal to one because it sits in the argument of the delta function. This is how we get rid of the quadratic structure in the denominator: instead, we put it into the argument of the delta function. But while making these transformations we earn a linear denominator, which we have here, and capital C is basically a modified matrix: we have the same cosines, but they are multiplied by quantities which are square roots of capital F, where the F's are the partial derivatives of the Gram determinant multiplied by the corresponding masses. And we can see that, in fact, the vector composed of these square roots of F divided by the masses is an eigenvector of this matrix C, and the corresponding eigenvalue is just the Gram determinant. We also need the generalization of Lambda, which can easily be written in N dimensions, and also the length of the perpendicular, which is still proportional to the square root of the ratio of these two determinants. Now we continue to work on the parametric integrals, and we try to make the quadratic form diagonal: we do the corresponding rotation and transform the quadratic form to diagonal form.
If all the lambdas are positive, we can just rescale the betas, and the delta function then restricts us to the hypersurface of the unit hypersphere. Then the only weight in our integral is one of the parameters, gamma, to a certain power, and all the rest is basically just the integration measure, nothing more. Except for the prefactor, all the dependence on the momenta and masses is in the limits of integration, which form an N-dimensional solid angle. The amazing thing is that this N-dimensional solid angle is the same solid angle that we have in the basic simplex. So by this transformation we obtain an integral where we integrate over this solid angle, and the delta function gives us the hypersurface of the unit hypersphere; basically we integrate over the piece of the hypersphere cut out by this solid angle. From this expression we can also see that there is a special case when the space-time dimension, lower-case n, is equal to the number of external legs: then this factor disappears, and we are left with just the non-Euclidean hypervolume, the content, as mathematicians call it. This special case is basically the two-point function in two dimensions, the three-point function in three dimensions, the four-point function in four dimensions, and so on. Another important question is what happens if some of the lambdas, which we get when we diagonalize the quadratic form, are negative. In that case we get a hyperbolic surface instead of a spherical surface, and all our results for these Feynman diagrams can be obtained just by analytic continuation of the results obtained in the other region, for example the spherical case. Again, to compare the two representations: this is the standard Feynman parametric representation, an N-fold integral with one delta function, which depends on the masses and momentum invariants through the masses and the c_jl in the quadratic form in the denominator of the parametric integral. In the geometric representation, except for the prefactor, all the dependence on the masses and momentum invariants is in the N-dimensional solid angle; there is no dependence left in the integrand. Another important point is that we can use this geometric representation for splitting (I will explain what splitting means), for the reduction of the number of variables in each of the occurring functions, and basically for the simplification of the results. Now, let's continue with the two-point function. This is the basic triangle associated with our two-point function: again we have the masses m_1, m_2 and the absolute value of the momentum, and our integration goes over a unit circle, over the arc tau_12. This is the angle tau_12, but if we use the perpendicular, then tau_12 consists of tau_01 and tau_02, where we put the point 0 here. All the relations we get are just trivial trigonometric formulae, and the area is of course proportional to the sine of tau_12, as in the usual trigonometric formula. But now, what can we do here? If we consider new momentum invariants, k_01 squared and k_02 squared, and of course the absolute value of k_12 is equal to the sum of these two absolute values, then each of the resulting triangles is a right-angled triangle.
The sides of these triangles satisfy the Pythagorean theorem, of course. So we can split our integral, which goes over the angle tau_12, into two integrals: one going over tau_01 and the other over tau_02. Each of these pieces can be associated with a new Feynman integral, but with different momenta and masses. So we split our original integral, with arguments k_12 squared, m_1 and m_2, into two integrals whose arguments are shown here. What is important is that the arguments of each of these integrals now satisfy the Pythagorean theorem, so not all three of them are independent, just two of them. Effectively we reduce the number of variables: while the original integral depends on three variables, each of the integrals on the right-hand side depends only on two. Of course, one of the variables can be used just to set the overall dimension, so in terms of dimensionless variables the original integral had two, and each of the resulting integrals has one variable less: starting from two dimensionless variables we end up with one. And if you look at the quadratic form in the Feynman parametric integral: the original one was this, but in each of the resulting integrals we can use the Pythagorean theorem, and we get, for example in one of the integrals, an expression with two terms instead of three. We can easily calculate the integral with such a denominator, which gives us, in arbitrary dimension, just the Gauss 2F1 hypergeometric function. Now, the three-point case. Remember that we have the special case where the space-time dimension equals the number of external legs; in this special case our result for the Feynman integral is just the area of a spherical triangle, which is the spherical excess. In terms of the dihedral angles, it is just the sum of these angles minus pi, and this corresponds to the result obtained by Nickel in 1978, so we just reproduce his result in this simple case. In the general case, with arbitrary space-time dimension, we follow the geometrical procedure. First we drop a perpendicular from this point onto the red triangle, which is composed of the momentum invariants. And this is the picture: basically this is the spherical (or hyperbolic) triangle over which the integration goes, while the solid angle is defined by this tetrahedron, so we have a dual picture. We drop this perpendicular and call this point zero, then we connect it with each of the vertices of the red triangle, and we get the corresponding connections in this picture as well. In this way we split our original tetrahedron into three tetrahedra, and in each of them we get two extra conditions on the variables. Now take one of them, say the lower one with the points one, two and zero, and split it again into two tetrahedra. So our original tetrahedron is now split into six tetrahedra, and by dropping this perpendicular we get an extra condition, again due to the Pythagorean theorem.
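To spell out the two-point reduction mentioned above in formulas (a sketch, with m_0 denoting the height of the triangle and k_0i the segments of the base, so that m_i^2 = m_0^2 + k_{0i}^2 and |k_12| = |k_01| + |k_02|): the homogenized quadratic form of a piece whose arguments satisfy the Pythagorean condition collapses from three terms to two,
\[
\alpha_1^2 m_1^2 + \alpha_2^2 m_0^2 + \alpha_1\alpha_2\,\big(m_1^2 + m_0^2 - k_{01}^2\big)
\;=\; (\alpha_1+\alpha_2)^2\, m_0^2 \;+\; \alpha_1^2\, k_{01}^2 ,
\]
and an integral with such a two-term denominator evaluates, in arbitrary dimension, to a Gauss 2F1 hypergeometric function, which is the statement made above.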
If we now look at the number of variables for the three-point function: in the original integral we had six variables, but one of them can be used to set the dimension, so we have five dimensionless variables. After the first splitting into three tetrahedra we get two relations, so each of the resulting integrals depends only on three independent dimensionless variables. After the second splitting we get one extra relation, so each of the integrals depends only on two dimensionless variables. Starting from five we get down to two, but we have six pieces. The original quadratic form in the Feynman parametric integral was this one, containing six terms; if we take one of these lower tetrahedra, the corresponding quadratic denominator contains just three terms. So this is the illustration: we started with five variables and reduced the number by three, going from five to two. Using this representation, with a quadratic denominator containing just three terms, we can easily calculate the integral, and for general space-time dimension we get the result in terms of the Appell hypergeometric function of two variables. By the way, all the arguments of the occurring hypergeometric functions have a very transparent geometrical meaning: if you look at these pictures, each argument is just the length of the corresponding line segment, and that's it. Okay, now the four-point function. For the four-point function we have this simplex in four dimensions, with a certain hyper-solid angle, so to say, at this vertex, and what this solid angle cuts out of the hypersphere is a non-Euclidean tetrahedron, which might be spherical or hyperbolic depending on our external invariants. So I am trying to make this picture here, which should be understood as a three-dimensional non-Euclidean tetrahedron, either spherical or hyperbolic. And now we again do the splitting to reduce the number of variables, because we have too many in the general case: four masses and six momentum invariants, so ten variables, that is nine dimensionless variables. Let's do the splitting. First we drop a perpendicular from this point onto the red tetrahedron; their intersection is just one point, because this is a one-dimensional object and that is a three-dimensional object in four dimensions, so the intersection is just a point, and it corresponds to a certain point inside the non-Euclidean tetrahedron. Now we connect this point to all the vertices of the red tetrahedron (here we also get the corresponding lines), and we split the original tetrahedron into four tetrahedra. Then we take one of them, for example this one, and drop a perpendicular from this point to the base of this tetrahedron; now we split it into three tetrahedra. So we split the original one into four, and then this one into three, and at each step of the splitting we get extra conditions due to the Pythagorean theorem, because we always drop a perpendicular; we use this Pythagorean connection many times. Now take one of the resulting tetrahedra, this one for example, and drop another perpendicular from this point onto this side. So basically this is what we get: first we split the original tetrahedron into four, then into three, and then into two.
So we split it into 24 tetrahedra: four times three times two. Now let's look at the number of independent variables in each of the pieces. In the original integral we had ten variables minus one for the dimension, so nine dimensionless variables. After splitting the original tetrahedron into four tetrahedra we get three relations in each of the resulting pieces, so six independent dimensionless variables. After the second step we get four dimensionless variables, and after the last step three dimensionless variables in each of the resulting 24 pieces. So, starting with nine dimensionless variables, we end up with functions that depend only on three dimensionless variables each; we reduce the number by six. If we look at the quadratic form in the Feynman parametric integral, the original one had ten terms, and if we take, for example, this tetrahedron, the corresponding denominator has just four terms instead of ten. You can also see that we always get partial sums of the Feynman parameters multiplying the corresponding invariants; this looks like a general feature, and I will come back to it in the general table. For each of these integrals we can get the exact result in arbitrary dimension, and it is a certain case of the Lauricella-Saran function F_S. Here is the definition of this function; it depends on three dimensionless variables, as we obtained via the Pythagorean theorem. And here is the table showing the reduced number of variables for the two-, three- and four-point functions, which can easily be generalized to an arbitrary N-point function. The total number of dimensionless variables is two, five or nine, and this is the total number for the N-point function. The number of pieces into which we split the original simplex is two in the two-point case, six in the three-point case, 24 in the four-point case, and one can easily see that each time we just multiply again, so we get N factorial in the general case. The reduced number of variables is one, two, three; in the general case it is N minus one. So we started with a number of variables growing quadratically, and through this geometrical method we end up with a linearly growing number of variables. Also, if you look at the quadratic forms, we again see the general feature that we get these squares of partial sums of Feynman parameters. Okay, now the summary. We propose this geometrical way to calculate dimensionally regularized Feynman diagrams. All our momenta squared and masses acquire a direct geometrical meaning, and we also get this hyper-solid angle, which carries all the information about the dependence on the momenta squared and the masses. In the one-loop N-point case we can relate the results to certain volume integrals in non-Euclidean geometry; it could be either the Lobachevsky case or the Schläfli case, depending on our momentum variables and the masses. Analytic continuation can be done from the spherical case to the hyperbolic case, and in a number of cases the results, before the epsilon expansion, are hypergeometric functions; but we still need to consider the epsilon expansion if we want results for physical quantities.
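The counting just described can be tabulated as follows (a reconstruction of the table referred to in the talk):

    N-point function:           2     3     4     ...   N
    dimensionless variables:    2     5     9     ...   N(N+1)/2 - 1
    pieces after splitting:     2     6     24    ...   N!
    variables per piece:        1     2     3     ...   N - 1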
In the epsilon expansion, some of the results can be presented in terms of generalized polylogarithms; in more complicated cases we can get multiple polylogarithms and so on, but we did not consider the epsilon expansion in this presentation. The geometrical splitting gives us a straightforward way to reduce general integrals to integrals with a smaller number of independent variables, and the resulting integrals can be calculated and expressed in terms of hypergeometric functions; these are the types of hypergeometric functions that we get. Also, maybe on the last slide, I want to mention that there are several papers right now which use geometrical methods, maybe similar to this one, maybe sometimes slightly different. I tried to collect some of them, but certainly I couldn't get all of them, so my apologies if I forgot something, and you can tell me if I did; there are many people working with some of these methods. Thank you. Thank you, Andrei, thanks very much, and thanks for all the time you've put into these nice pictures. I have one question, just quickly: you mentioned at the end that you did not discuss the epsilon expansion, but what you did say is that you refer to dimensions, right? You had formulas with the hypergeometrics in arbitrary dimension. So I was wondering, because at the beginning you said there's something special about the two-point function in two dimensions, the three-point function in three dimensions and so on, these cases where it seemed like the dimension should be something very specific, and now you say you can actually do it in all dimensions; I was just wondering if you could explain a little bit. Well, I think in one of these papers, maybe in this one, the case of the space-time dimension equal to the number of external legs was explicitly considered for arbitrary N; I think it probably was this one, but I need to refresh my memory. But if we look at the expressions here, let me see, yes: what we can see in the geometrical representation is that when lower-case n is equal to the number of external legs, this denominator disappears, because its power is zero, and what we get is basically just the hypervolume of the piece of the hypersphere cut out by our solid angle. So, say, if we consider the four-point function, this is the volume of the spherical or hyperbolic tetrahedron, without weight factors, and this is the property also in higher dimensions: for the four-point function it is a three-dimensional non-Euclidean tetrahedron, for the five-point function it would be a four-dimensional simplex, but again it would be the hypervolume, or content, and so on. So we do not get any weight factors like that, and the situation is simpler, but the calculation of such quantities is still a complicated problem. You are saying that this geometric decomposition applies no matter if this factor is there or not? I mean, we know what it means geometrically, but even the calculation of the volume of a three-dimensional non-Euclidean tetrahedron is a complicated task, so it was a non-trivial task to solve it. Thank you very much; there's a question from David. Yes, well, really it's a comment, to remind you about a rather interesting geometrical paper that you also wrote in Tasmania with Bob Delbourgo, about the volume of three-body phase space, which you interpret geometrically.
Now that is taking us into elliptic territory; it is the discontinuity of the two-loop sunrise diagram, and since you did that work there has been a lot of interest in three-loop and four-loop sunrise diagrams. I found myself last week trying to help Albrecht Klemm understand the four-loop sunrise diagram, working out the volumes of five-body phase space in two dimensions, and going back to your work with Bob to remind myself how to do that. So not only did you make contributions at the one-loop level, where the multiple polylogarithms arise, but you also opened up this field of elliptics and beyond, which is the subject of many conferences these days. Okay, thank you, David. Yes, of course I remember that paper, but I had just 45 minutes for my presentation, which is why I didn't include it. Thank you very much. Let's all thank Andrei again for the talk. Andrei, thank you.
A geometrical approach to the calculation of N-point Feynman diagrams is reviewed. It is shown how the geometrical splitting of N-point diagrams can be used to simplify the parametric integrals and reduce the number of variables in the occurring functions. Moreover, such a splitting yields useful connections between Feynman integrals with different momenta and masses. Calculation of the one-loop two-, three- and four-point functions in general kinematics is presented. The work on this approach was started in the 1990s in Tasmania, within a project where Bob Delbourgo and Dirk Kreimer were involved.
10.5446/51260 (DOI)
Thank you for the invitation; I'm really happy to talk. It's a conference in honor of Dirk, whom I met about 30 years ago, when he was still a PhD student of Jürgen Körner in Mainz. At the time we had a joint interest in the gamma-5 problem of dimensional regularization; I was a student of Karajyj and I was visiting Jürgen Körner. Could you click on the green button to have full screen? Yeah, I met Dirk 30 years ago, and in those days, from what I remember, nobody thought that Feynman diagrams were something very interesting mathematically. It was something very tedious that had to be done, but nobody thought there was any interesting mathematics in it. Now, 30 years later, Feynman diagrams have turned into something like a gold mine for pure mathematicians, and I think everybody here agrees that Dirk has a lot of credit in this. Today I'm going to talk not so much about algebraic methods, but about my own specialty, which is the worldline formalism. For those who don't know it, the worldline formalism is an alternative to Feynman diagrams, and it's as old as Feynman diagrams themselves: Feynman invented worldline path integrals at the same time as he invented Feynman diagrams. In fact, it seems that he used them as a guiding principle for finding the Feynman diagrams, and then, when he had Feynman diagrams, he was so happy with them that he basically forgot about all those relativistic path integrals. So let's start with scalar QED and the Green's function in x-space for the Klein-Gordon operator. We can work in Minkowski or in Euclidean space, and many people probably know that you can exponentiate the propagator and then convert it into a quantum mechanical path integral, which is... Maybe we could just try: if you go to the menu bar, maybe in the View you can change to single-page view, if you go with your mouse to the... I'm not familiar with TeXShop, if you just open it with... You can use Preview display format in the TeXShop menu; you go to Preview display format. All right, thank you. Okay, and then... and then full screen. And then... no. Okay, well, let me go full screen like this, is it okay? Yeah, I think that's better actually that way. Yeah, I think that's better. Let's go back to Feynman's work in 1950. In modern notation, he constructs a path integral representing the propagator of a scalar particle in Euclidean x-space, going from x prime to x in proper time T, and one has to integrate over all paths connecting x prime and x in Euclidean spacetime. That is what is nowadays called the photon-dressed propagator, and the Feynman diagram is down here. It will be important for the following that the photons are not ordered: the photons have fixed momenta and there is a fixed number of them, but it is not fixed in which order they are going to arrive or be emitted. Here, this could either be the propagator in an external field, or you can convert it into the amplitude with N photons by specializing the field to plane waves. If you do the same thing for the one-loop effective action, you start with the trace-log of the Klein-Gordon operator, and then, instead of paths that go from x prime to x, you have to integrate over all closed paths in spacetime. Again, if you leave the external field general, this is the effective action; if you expand the field in plane waves, you get the one-loop N-photon amplitudes.
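In formulas, the two worldline representations just described read, in one common Euclidean convention and up to overall normalization (T is the global proper time, m the scalar mass, e the charge):
\[
D^{x x'}[A] \;=\; \int_0^{\infty} \mathrm{d}T\; e^{-m^2 T} \int_{x(0)=x'}^{x(T)=x} \mathcal{D}x(\tau)\;
\exp\Big[-\int_0^T \mathrm{d}\tau\,\Big(\tfrac{1}{4}\dot x^2 + i e\, \dot x\cdot A(x(\tau))\Big)\Big],
\]
\[
\Gamma[A] \;=\; \int_0^{\infty} \frac{\mathrm{d}T}{T}\; e^{-m^2 T} \oint_{x(0)=x(T)} \mathcal{D}x(\tau)\;
\exp\Big[-\int_0^T \mathrm{d}\tau\,\Big(\tfrac{1}{4}\dot x^2 + i e\, \dot x\cdot A(x(\tau))\Big)\Big],
\]
with the N-photon quantities obtained by specializing A to a sum of plane waves and keeping the multilinear terms.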
And now the structure of QED is so simple that it's quite obvious that just from these two elements, by connecting photons in pairs, you will be able to construct the complete QED S-matrix. And since QED doesn't really have non-trivial symmetry factors, there's also no problem with overcounting, which would actually be an issue in phi^4 theory. Feynman himself, one year later, generalized this to spinor QED: he found what is called the Feynman spin factor. You start with scalar QED and then, for a fixed path, you insert along the path that spin factor, you calculate its trace, and that gives you the contribution of spin. After the invention of Grassmann path integrals, Fradkin, 25 years later, found a more modern way of presenting the spin factor, in terms of a Grassmann path integral. Why do we prefer this? For two reasons. First, you don't have to path-order; that will be very important: you don't have to fix an ordering of the legs along the loop. Moreover, it makes manifest a supersymmetry between the orbital and spin degrees of freedom, which in quantum mechanics we already have in the Pauli equation, but which in the standard Dirac approach to quantum field theory is somewhat hidden. Now, these path integrals were not much used for calculations until the late 80s, when people had learned in string theory methods of calculating path integrals and Grassmann integrals, the use of worldsheet supersymmetry, and so on. So Bern and Kosower, and then Strassler, did work that eventually led to the idea that one should calculate such path integrals by one-dimensional perturbation theory. That means the orbital path integral should be calculated using a Green's function for the ordinary second derivative, but adapted to the proper-time circumference that fixes the periodicity, and the psi path integral, which represents the spin factor, should be calculated using a Green's function which is essentially the signum function and which effectively contains all the manipulations that you normally do with Dirac matrices. So, in a way, those psi fields are like the Dirac matrices, but they are flexible and not bound to appear in any fixed order; that's the way we look at it nowadays. Now, going back to scalar QED: if you want to do this explicitly, you go back to the one-loop effective action, you specialize the field to plane waves, you expand, and you take out the terms that contain each polarization and momentum once, the totally mixed term, which is just an elaborate way of doing a Fourier transformation. Then you get the one-loop N-photon amplitude in a representation where each photon is represented by a vertex operator integrated along the trajectory. And it's very important that at one loop you have a problem here: before doing your path integral you must fix the zero mode, because the zero mode tells you that you could change a loop just by translating it, without changing the worldline action. That would be a flat direction for your Gaussian path integral, so you have to fix it, and that eventually just produces energy-momentum conservation. At this stage your path integral is already Gaussian, because the orbital coordinate x(tau) appears only linearly in the exponent; to get a closed formula you do this little exponentiation trick that's also well known from string theory.
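The one-dimensional Green's functions alluded to here are standard in this formalism. For reference, with T the proper-time circumference, dots denoting derivatives with respect to the first argument, and the zero mode removed as just described, they are usually written as follows (the subscripts B and F are my labels, and signs and normalizations are convention-dependent):

```latex
G_B(\tau_1,\tau_2) = |\tau_1-\tau_2| - \frac{(\tau_1-\tau_2)^2}{T},\qquad
\dot G_B(\tau_1,\tau_2) = \mathrm{sgn}(\tau_1-\tau_2) - \frac{2(\tau_1-\tau_2)}{T},\qquad
\ddot G_B(\tau_1,\tau_2) = 2\,\delta(\tau_1-\tau_2) - \frac{2}{T},\qquad
G_F(\tau_1,\tau_2) = \mathrm{sgn}(\tau_1-\tau_2).
```

Here G_B inverts the second derivative on periodic functions orthogonal to the constants, and G_F, the signum, inverts the first derivative on antiperiodic functions, which is the sense in which it "is" the Dirac algebra in this setup.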
And then you do the Gaussian integration formally; in flat space you can do that formally and you never have any problems, in curved space it's a bit more tricky. You get what is called the Bern-Kosower master formula. So you have the one-loop N-photon amplitude in scalar QED written in a way where you have the global proper time of the scalar particle in the loop, you still have one integration for each photon (although one of them is redundant), and then you have this exponential, which is kind of fake because eventually these factors have to be expanded out and you have to take the terms linear in each polarization. But the important thing is that it contains only these worldline Green's functions, the worldline Green's function G and its derivatives. And those who haven't seen this before should note that we have a delta function appearing here. Now, this is a very nice formula, because it's a one-line formula carrying the full information on the one-loop N-photon amplitude. It's valid off-shell, which means you can use it for going to higher loop orders by sewing. Very important: to write it down we did not have to fix the ordering of the photons along the loop. Bern and Kosower also found that this formula carries information on the cases where the scalar becomes a spinor or even a gluon; these are the famous Bern-Kosower rules. And Strassler investigated further the case of QED itself and studied what happens when you remove all the quartic vertices, that is, these delta functions that create the seagull vertices which we have in scalar QED. In spinor QED we normally don't have them, but here we do, because effectively we are using a second-order formalism. In any case you can remove them, and after that the integrand becomes actually much nicer, which is also familiar nowadays from things like KLT or the double copy: many nice algorithms start working only when you remove all quartic or higher vertices and everything becomes trivalent. Let's look at the four-photon case, in scalar or spinor QED; it makes very little difference here. Normally, in spinor QED we would have a sum of six permuted Feynman diagrams, and in scalar QED more than that because of the seagull vertices. In the worldline formalism we would just use the master formula in a straightforward way; we would in fact be doing more or less the same as an ordinary Feynman diagram calculation using Schwinger parameters. It would be the same amount of algebraic work, the same type of integrals, the same tensor reduction, so that wouldn't be a very interesting thing to do. The formalism becomes interesting only when you use the freedom to actually do something with these integrals, and what you can do is massage them by a large number of integrations by parts; then they reduce to five tensor structures which are already gauge invariant. Here we introduce a field strength tensor for each of the four photons, and here you have traces of the field strength tensors, and here you have chains of field strength tensors. It is very curious that exactly this basis of five tensors was found in 1971 by Costantini et al. using the Ward identities: they started by writing down the 138 tensors that you can write down at four points and then systematically reduced them using the Ward identities and permutation invariance.
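For concreteness, the field strength tensors entering that basis are the linearized ones, one per photon; in an obvious (my own) notation,

```latex
f_i^{\mu\nu} \equiv k_i^{\mu}\,\varepsilon_i^{\nu} - \varepsilon_i^{\mu}\,k_i^{\nu},
\qquad
\mathrm{tr}(f_i f_j) = f_i^{\mu\nu} f_{j\,\nu\mu},
\qquad
\mathrm{tr}(f_1 f_2 f_3 f_4) = f_1^{\mu\nu} f_{2\,\nu\rho}\, f_3^{\rho\sigma} f_{4\,\sigma\mu}.
```

Each f_i is unchanged under the gauge shift of its polarization, epsilon_i to epsilon_i plus a multiple of k_i, which is why any structure built from such traces and products of traces (for example tr(f_1 f_2) tr(f_3 f_4)) is manifestly gauge invariant.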
It is very curious that just by integration by parts we generated exactly the same basis, simply by trying to write the integrand as compactly as possible. In this basis, the coefficient functions for spinor QED, c_1 to c_5 here, still have the Green's function G in the exponent, while in the prefactor you have only the first derivative, not the second derivative anymore. Moreover, the part that comes from the spin factor always appears in these combinations: you always have closed cycles of indices, and the difference of the orbital term and the spin term, which is related to the worldline supersymmetry and was first derived by Bern and Kosower. This combines the cases of spinor and scalar QED: if you want to go to scalar QED, you just delete all the terms that come from the spin factor. So this is actually a recent result; previously we did not have such a compact version, and we also didn't have this reduction of the tensor basis. This is work in progress: we are using it for the first calculation of the four-photon amplitudes totally off-shell, which was never done. In fact, the work by Costantini et al. calculated this with two legs off-shell and two on-shell. So we do that in general kinematics, as well as with one or two photons taken in the low-energy limit, and not only the integration but also the tensor reduction is really adapted to the worldline formalism. Our guiding principle is that we never want to split the amplitude into ordered sectors, so we cannot use the standard tensor reduction algorithms, because all of them work only once we have fixed the ordering of the external legs; we have to invent a method that works without fixing them. So what is the motivation for this? Well, this is not really the workshop to talk about it, but, for example, in g-2 calculations, where they are now going to six and seven loops, the light-by-light subdiagrams figure very prominently, and having nice simple formulas for them which do not make it necessary to fix the ordering of the legs could be quite important. Moreover, you can of course also use the one-loop N-photon amplitudes to go to higher orders in the photon propagator. Many years ago already, with Michael Schmidt, we combined what normally would be three two-loop diagrams into one in the worldline formalism and managed to calculate the two-loop beta function without encountering any non-trivial integrals, and also without having to split the calculation into the two different topologies. We never managed to carry this on to higher loops, though; we would still like to do that. In fact, when we all met at the multi-loop workshop in Aspen in 1995 (David Broadhurst was there, Dirk Kreimer, Andrei Davydychev), the three-loop and higher-loop photon propagator and its beta function was a very big topic at the time. What had been calculated was the four-loop beta function in spinor QED, and David had just calculated the three-loop beta function in scalar QED, and in both scalar and spinor QED it so happens that at three loops there are zeta(3) contributions that cancel out, and at four loops, in the spinor QED case, there are also zeta(5)s that cancel out. So at the time everybody was convinced that the QED beta function at the quenched level was rational, and David even constructed a kind of proof; but years later the five-loop coefficient was calculated, and there were some zeta(3)s that just refused to go away.
So we know the cancellations are not complete, but I still think it's a puzzle why they happened in the first place; that is still unresolved and we're thinking about it. In fact, just this week I was in contact with Jonathan Rosner, who had calculated the three-loop QED beta function, and he told me that he would still very much like to know why zeta(3) cancels out. Now, why is it difficult to apply the worldline formalism to this kind of problem? It is because the advantage of having all the diagrams in one big integral is unfortunately somewhat, let's say, formal, as long as you don't know how to calculate integrals that have absolute values in the exponent. And don't try to give Mathematica integrals with absolute values in the exponent and in the prefactor; computer algebra is not good at this kind of thing. The fundamental integral that we have to calculate, at least in abelian theories in the worldline formalism, is an N-point integral where all N points run over the full circle. You have this universal exponential, linear in the Green's functions between the various points; the coefficients at one loop are just p_i.p_j, while at higher loops they can become something more complicated. The prefactor will be a polynomial in the G-dots, and the derivative of the Green's function contains a signum function. So the basic challenge is: how do you calculate integrals like this without splitting them into ordered sectors or making case distinctions for the positions of the points? Usually you would just start integrating over one variable, fixing the others, but even then, to evaluate the signum functions, you would have to fix an ordering, and we know that once you start doing that, you lose all the advantage, so we don't want to do that. So what would a mathematician do? He would start with simple classes of integrals: start with polynomials and then go to exponentials, and so on. The simplest non-trivial integrals are what we call chain integrals: if you integrate a product of G-dots whose indices form a chain, you get a Bernoulli polynomial, and if you do the same with G_F's, you get an Euler polynomial; I'll say more about this in a moment. You can have an integral that is totally trivial in any given sector, like this one, but no computer will give you this nice formula, because it will split it into the various sectors and will never be able to recombine the results into this. But there are also subtleties: at three points already, the various G-dots are not algebraically independent, so it is generally possible to write down polynomials in G-dot and G that are actually zero, but in each ordered sector for a different reason. Uwe Müller has studied the algebra of these polynomials. Some years ago I solved the general polynomial problem in a recursive way by deriving this formula here: when you want to integrate out one variable over the circle, so you calculate the integral over u, with any ordering of the remaining u_1 to u_n, and with G-dots to arbitrary powers, this formula tells you how to do that one integration and write the result again as a polynomial in the G-dot functions of the remaining points. Applying this recursively, you can obviously calculate any integral you want, as long as it's just polynomial. Now, why do we see Bernoulli numbers and Bernoulli polynomials here?
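Before turning to that question, here is a quick numerical sanity check of the chain-integral statement above, as a minimal sketch. It uses the bosonic worldline Green's function written earlier, on a circle of circumference T = 1, and checks only the two-link case; the specific normalization verified here, that the two-link chain integral equals -2 B_2(|u1 - u2|), is my own working assumption for how the Bernoulli polynomial appears, not a formula quoted from the talk.

```python
# Check (assumption: T = 1) that the two-link chain integral of G-dots gives
# a Bernoulli polynomial without fixing any ordering of the external points:
#   int_0^1 du  Gdot(u1, u) * Gdot(u, u2)  ?=  -2 * B_2(|u1 - u2|)
# with Gdot(a, b) = sign(a - b) - 2*(a - b) and B_2(x) = x^2 - x + 1/6.
import numpy as np
from scipy.integrate import quad

def G_dot(a, b):
    """Derivative of the worldline Green's function on the unit circle (T = 1)."""
    return np.sign(a - b) - 2.0 * (a - b)

def bernoulli_B2(x):
    return x**2 - x + 1.0 / 6.0

def chain2(u1, u2):
    """Two-link chain integral over the full circle; no ordering of u1, u2 is fixed."""
    val, _ = quad(lambda u: G_dot(u1, u) * G_dot(u, u2), 0.0, 1.0,
                  points=[u1, u2])  # tell quad where the integrand has jumps
    return val

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for u1, u2 in rng.random((5, 2)):
        lhs = chain2(u1, u2)
        rhs = -2.0 * bernoulli_B2(abs(u1 - u2))
        print(f"u1={u1:.3f} u2={u2:.3f}  chain={lhs:+.6f}  -2*B2={rhs:+.6f}")
```

Longer chains would, according to the statement in the talk, produce higher Bernoulli polynomials in the same way, with Euler polynomials appearing for chains of G_F's.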
Well, that is because, remember, we started with path integrals over periodic functions, and then we had to fix the zero mode, and that takes out the constant functions. So we are naturally in the Hilbert space of periodic functions orthogonal to the constant functions. Being orthogonal to the constant functions makes a big difference, because it makes the ordinary derivative invertible, and the inverse n-th derivative in the space of periodic functions without the zero mode is essentially a Bernoulli polynomial, except that we have a signum in the odd case, and the zeroth power is special because it's not the delta function but the delta function minus one, because of the zero-mode subtraction; that is what represents the unit operator on this space. It's curious that this formula, as it stands, you can in fact never find in any book on functional analysis or integral operators. In many books you can find the formula for the Fourier series of the Bernoulli polynomials, which is actually equivalent to this formula if you assume that x is positive, but it misses all the fun stuff, the behavior at x equal to zero, the signum and the delta, which is what actually gives you a nice closed algebra of integral operators on the circle. For that reason, in this formalism Bernoulli numbers and Bernoulli polynomials appear all over the place. For example, a year ago, with our previous speaker Gerald, we were calculating the two-loop self-dual Euler-Heisenberg Lagrangian, and from that we got the corresponding photon amplitudes in the low-energy limit with all equal helicities. At one loop this amplitude would essentially be a Bernoulli number: there is some stuff here which takes care of the momenta and polarizations, that's just kinematics, but the coefficients are just Bernoulli numbers. At two loops you start seeing folded sums of Bernoulli numbers, and that's a story I want to tell, even though it's somewhat involved. I was visiting the IHES when we were doing this, and we actually had two formulas for this two-loop effective Lagrangian which both involved Bernoulli numbers but looked very different. So I asked around who could help and tell us whether there is a known identity which would allow us to identify these two formulas, and I was told that only Richard Stanley could help with this. So we wrote a letter to Richard Stanley, and after a couple of weeks he indeed told us that the equivalence between these two versions of the expansion coefficients of the two-loop Euler-Heisenberg Lagrangian is equivalent to combining Euler's identity for the Bernoulli numbers, which is very well known, with Miki's identity, which at that time was considered the most non-trivial identity known for Bernoulli numbers. We learned from Richard Stanley that Bernoulli number identities that involve only B_n over n factorial are usually easy, because most likely you can get them from the generating function, but identities that involve B_n over n are considered difficult; Miki's identity involves both, and was considered very mysterious at the time. It turned out that at about the same time Faber and Pandharipande published a similar formula, which they had found in a string theory calculation and couldn't prove; they guessed it, so they asked Zagier, and Zagier actually managed to prove it. And then, after getting this information from Richard Stanley, Gerald and I figured
out how to use the worldline formalism as a guiding principle not only to give a unified proof of both identities, Miki's and the Faber-Pandharipande-Zagier one, but also to generalize them in various ways at the quadratic level, to generalize them to the cubic level, and in fact to point out a systematic construction of such identities to arbitrary order. Only, as far as I know, nobody has really done this so far; it's not something we can ask our students to do, unfortunately. Anyway, coming back to my real topic: after settling the problem of integrating an arbitrary polynomial, we would now like to attack the problem of including the exponential factors, which is much more difficult, and here it makes sense to go back to the simplest case, the scalar case, where we have the pure exponential factors and no prefactors. This formula was apparently written down for the first time in Polyakov's book, and it means we are now talking about the same off-shell scalar N-point functions which appeared, for example, in Andrei Davydychev's talk. Remember that on-shell this stuff is easy nowadays, but off-shell it is still not, and in fact not many people even want to work on such things: Andrei Davydychev, Riemann, Oleg Tarasov have all worked on the three- and four-point functions, and they all agree that in the three-point case you need the hypergeometric function 2F1 and the Appell function F1, and in the four-point case you additionally need what is called a Lauricella-Saran function. So at least one knows in principle what one is looking for. This looks deceptively simple. The same formula can also be written down for the effective action, and in that case it in fact becomes more general, because for the effective action there is no reason to assume that the interaction is phi-cubed; it could just as well be any self-interaction. So here is a closed formula for the effective action, and of course here people normally don't talk about hypergeometric functions; what they do is just expand in powers of these derivatives and then collect terms of equal mass dimension, and that gives you the heat-kernel expansion, still under the T-integral. Doing the T-integral doesn't make any sense, because it just creates inverse powers of the masses, and moreover the result would start depending on the space-time dimension. But if you wanted to take a very brute-force approach to calculating the effective action in phi-cubed theory, you would now just expand those exponentials, and you would have this integral here, with all possible powers of all the Green's functions. Again, any particular such integral is totally trivial, of course, but to obtain a closed-form expression for arbitrary n and arbitrary exponents is a quite formidable challenge. It's only recently that we have made progress towards doing this. The essential idea is to take each exponential factor and expand it not directly in powers of the Green's function, but rather in inverse powers of derivatives. So if you take an exponential and ask yourself what coefficients you get if you expand it in the matrix elements of the inverse derivatives on the circle, you get the following formula, which involves the 2n-th Bernoulli polynomial minus its coincidence limit, which is the corresponding Bernoulli number, and an odd Hermite polynomial. And you can integrate this formula from 0 to 1, and you get a formula for the error
function, which expands it in terms of Bernoulli numbers and Hermite polynomials. If you see this, you would assume that it must be known to mathematicians, but so far we have not been able to find it in the literature; so if anybody in the audience has seen such a formula, I would be happy to know about it. Now let's see how to use this formula in the three-point case. In the three-point case we have three exponential factors, and to each we apply this expansion. Now we have to remember that the space we are working in is such that everything is orthogonal to the constants; that means that these matrix elements involving Bernoulli numbers actually become zero when integrated on either side. So we can say: if I want to use this factor here, I must also use this one and this one and form a closed cycle, so as not to have any loose end, which would give a zero. If we look at the most non-trivial case, where we take the product of all three, then you get this integral, and then you use the completeness relation to convert it into a trace, a trace of inverse derivatives in this space, which is again a Bernoulli number. So, without any real work, what you get is a closed formula for what we earlier called I_3, with arbitrary exponents a, b, c, where finite sums appear: you have coefficients h that come from the Hermite polynomials, and here you have Bernoulli numbers. So we can essentially already see the general structures that you are going to get. The only thing is that, starting from four points, we will not immediately be able to do all the integrals using completeness, because at four points you will have integrals where a variable that you want to integrate appears in three factors; that's what we call a cubic worldline vertex. But now it actually paid off that many years ago, with Uwe Müller, we constructed precisely vertices like this, when we built a kind of toy worldline quantum field theory in which an arbitrary multiple zeta value can be encoded in a Feynman diagram and an arbitrary identity between multiple zeta values can be derived by performing integration by parts on those Feynman diagrams, because it's exactly the same procedure that we need here. Remember that delta to the power zero gives the delta function minus a constant. So what we do is integration by parts, lowering one index and raising the others, until one index becomes zero. When that index becomes zero, either you have a delta function that kills the integral, or you have a minus one that doesn't kill the integral but at least removes that factor, and then the remaining factors involve the integration variable only twice, so you can do the integration as in the two-point case. So it becomes a very simple combinatorial problem, and we can foresee that we should be able to get closed formulas for these coefficients I_N not only for the three- or four-point but probably for the five- or six-point case. Then, of course, there is the question of how to get back to the standard description in terms of hypergeometric functions: these coefficients must be summed over, and they must give the known hypergeometric functions, but there are no formulas known for hypergeometric functions that involve Bernoulli numbers as coefficients, as far as I know, so we will need some new identities to do that. Here, also, previous
experience, with Gerald and others, has shown that having Bernoulli numbers is always nice when you want asymptotic estimates, because the Bernoulli numbers are related to zeta(2n), which rapidly converges to one, so you have very rapid convergence of the Bernoulli numbers to a simple closed factor. So it might be interesting for getting the large-N asymptotics of the one-loop N-point functions. And then, of course, we would like to go back to gauge theory, which puts in a polynomial prefactor, and eventually back to multiloop. All right, that's all; happy birthday, Dirk. Thank you, Christian. If you have questions, please feel free to unmute yourselves and raise your hand. I was just wondering about the expansion at the very end with the Hermite polynomials: you said that you did this in the phi-cubed scalar case? Yes, these coefficients appear when you take the standard one-loop N-point diagram, or equivalently the effective action. But that was in the scalar theory, right? In the scalar theory; in any gauge theory you will have some prefactors involving G-dots. But is this kind of expansion useful in any way also in the QED case, or does it just not apply? Well, I'm mostly interested in the QED case, but we have to go step by step. In the 1990s, with Michael Schmidt and then Fliegner, we wasted a lot of time doing multiloop calculations that never worked out, because we didn't understand that to calculate these integrals you have to develop your own methods; you shouldn't try to use too much of the known stuff. But it seems like you're making quite some progress, so I'm excited to see what's coming up. We have a question from David. Well, it's a comment really: a thank you for reminding us of the work that Dirk did with Bob Delbourgo and me in 1995. In the chat we had a wonderful, moving tribute from Bob to Dirk, and I think this is an opportunity for me to give our appreciation to Bob. And you put your finger exactly on the place where Bob Delbourgo's wisdom showed, when you said that we not only did three- and four-loop calculations in spinor QED but also in scalar QED, where you would think there would be extra diagrams from seagulls. But Bob told us no: I was all set to draw 21 extra diagrams, and he said no, you can represent your scalar by a spinor using the Duffin-Kemmer-Petiau formalism, and all I had to do was to take my code and just change the propagator of the spinor. So that's a wonderful example of how you really benefit when you have a senior collaborator. Actually, I was expecting you would talk about this tomorrow, in your Tasmanian adventures. I've chosen not to, which is why I've mentioned it now. But it was wonderful to have a very wise older collaborator who has continued to be productive and focused and clear-minded well above my modest age of 73, so Dirk has a long period of intellectual activity to look forward to, if he can manage as well as Bob has done. Thank you for sharing this. May I have a question? Sure. Hi Christian, thanks for a very good presentation. Actually it's not a question but a comment: I think I have the formula you were looking for, for this error function in terms of Hermite polynomials. Oh, that's... well, yes. If you allow me to share my screen, I will show you the paper. There's a formula in terms of Hermite polynomials, but I haven't seen one involving Bernoulli numbers. It's actually in the
Journal of Chemical Physics, 1999. I can send you this paper, but I can also show it to everyone. That's great, that's great; you can also put a link in the chat. Oh no, well, actually I downloaded it in complicated ways, so maybe I shouldn't show it in a recorded seminar, right? But Christian, I will send it to you; we can do that privately. Right, right. Has it been used for something in that paper? Well, it's just called "Hermite polynomial expansions of the error function and related F0(w) integral", you see, in the Journal of Chemical Physics. Thank you, Andrei. This sounds like exactly the kind of interesting connection, the kind of observation, that only happens at conferences when people listen to each other. Let's thank every speaker of this afternoon session and the evening session again.
The worldline formalism provides an alternative to Feynman diagrams in the construction of amplitudes and effective actions that shares some of the superior properties of the organization of amplitudes in string theory. In particular, it allows one to write down integral representations combining the contributions of large classes of Feynman diagrams of different topologies. However, calculating these integrals analytically without splitting them into sectors corresponding to individual diagrams poses a formidable mathematical challenge. I will summarize the history and state of the art of this problem, including some natural connections to the theory of Bernoulli numbers and multiple zeta values.
10.5446/51261 (DOI)
So I thought I would do something rather different from what the previous speakers have done, and concentrate exclusively on Dirk's work before the momentous events of 1997, which I remember very well, when he discovered, a discovery unique to him, the Hopf algebra of renormalization by iterated subtraction of subdivergences. And I've chosen two topics, knots and numbers and the four-term relation, which you can find in his book, the only book that Dirk has published; it's called Knots and Feynman Diagrams, and you saw a picture of the cover in Mark's talk. One of these I would characterize as being not even wrong: it wasn't sufficiently well defined to be turned into something of which you could say, yes, this emphatically does or does not work. But it was extremely important as a heuristic. Without it, we would not have arrived at our conjectured enumeration of multiple zeta values by weight and by depth, and we certainly wouldn't have been able to evaluate as many counterterms at seven loops in phi^4 as we did, exactly, with the rest numerically. But before that, you have to remember that I'm talking about things during Dirk's first postdoctoral appointment, in Tasmania. Before that, I thought I would talk about his graduate work. You've heard an awful lot of appreciation for the way that he advises students; I use that word advisedly: you should think of him as an advisor, not a supervisor. So I'm going to say a little bit about his time as a graduate student. Now, I was a regular visitor to the University of Mainz, working with my good friend Karl Schilcher, throughout the 1980s. Towards the end of that, I think it was 1989, Karl said: you have to come, we have this new Wunderkind called Dirk Kreimer. He doesn't have an undergraduate degree in mathematics or in physics, he studied humanities, but somehow he seems to have mastered the whole of Landau-Lifshitz and passed all the examinations. And I've given him this problem to work on, but he keeps going off with his own ideas. Please come and visit and see if they make any sense to you. And so, during Dirk's graduate work, these are some of the things that I learned from him; I was also able to give him advice, but they contributed to my education. The first one, which I mention here, is the problem of gamma-5. This is the matrix which in four dimensions anticommutes with the four gamma matrices of the Clifford algebra; you just make it by multiplying the four of them together. Karl Schilcher had given Dirk a really rather neat problem, which he finally got around to answering, and that is: is the weak interaction at one loop multiplicatively renormalizable, rather than merely subtractively renormalizable? That's a non-trivial question. But Dirk immediately saw that there is something very, very different about the weak interaction compared with all the other interactions. The fact that it doesn't conserve parity means that you have to calculate traces of gamma matrices with an odd number of gamma-fives. And there was a long-standing prescription for this: shortly after the introduction of dimensional regularization by 't Hooft and Veltman, in 1977 I think it was, Breitenlohner and Maison gave a prescription which they claimed would eventually work. But it was extremely inconvenient: it separated gamma matrices into four-dimensional ones and extra-dimensional ones, anticommutativity of gamma-5 was lost, and evanescent operators appeared.
And, order by order, the fundamental property of a gauge theory, BRST invariance, was not preserved: you had to perform finite renormalizations by hand. So this young graduate student's first publication, if you go to his homepage and his publication list, is a novel approach to this problem. He recognized that something had to be abandoned. I mean, the existence of the anomaly, responsible for the decay of the pi-zero to two photons, the fact that you cannot have a Ward identity for the one diagram that couples two vector currents to an axial current, tells you that there is something really different going on with an odd number of gamma-5 matrices. He didn't want to abandon anticommutation, and he certainly didn't want to go through the nasty separation of the 't Hooft-Veltman scheme. So he had to think about what should be abandoned. And he said: well, really, we're dealing with infinite-dimensional matrices, and for those we know that we shouldn't be insisting on the cyclicity of the trace. So that's what he gave up. And then he had to give reading instructions: if I can't cycle the gamma matrices in my trace, where do I start? And he showed that this was consistent with the anomaly. So quite a remarkable piece of work, you see: he had been given this problem and he had put his finger on what he thought was the most important thing to address. Now, there was a subsequent paper with Karl Schilcher, his supervisor, and with Jürgen Körner as co-authors, and this attracted quite a lot of interest. My good friend Jürgen Körner would, I think, agree with me that he wasn't the best person to go around giving seminars explaining Dirk's work, and eventually he said: no, actually, this comes from Karl Schilcher's young graduate student, and he can explain it better. And this required Dirk to show another of his talents, the ability to stay calm under pressure, because one of the two people I mentioned who had the 1977 scheme was not well disposed towards an unknown graduate student advancing an alternative. So that was rather impressive. The thing that impressed me most was Dirk's ability as an analyst. You see, the title of this conference is Algebraic Structures in Quantum Field Theory, but Dirk is also very talented when it comes to the analytic side. And I would say, perhaps this is not a very modest thing to say, but I would say that in 1989 I was one of the leading practitioners in doing two-loop calculations with different masses. And then Dirk came and gave a private seminar to Karl and me, and he said: I have a new way of doing two-loop two-point functions, three-point functions, form factors, and I think I could even do it for double boxes. And his method, which he explained to us very simply, was to split the integrations over loop momenta into the components in the span of the external momenta and the integrations transverse to those. And he showed how the two-loop two-point function with five arbitrary masses, a function of five variables, could, by skillful use of Cauchy's theorem in Minkowski space, really taking account of where that i-epsilon is in the propagators, be reduced to a double integral of a beautifully symmetric logarithmic expression. We know that when it's polylogarithmic it's trilogarithmic, but we know in fact that there is an elliptic obstruction down there as soon as you try to do the next integration.
And then that was pushed to the three-point functions. There, for the two-loop three-point function, there is a planar and a non-planar diagram; the non-planar diagram is much more difficult, but it yielded to Dirk's method. So here was really deft analysis, carried through with good programming. Now, I also thought that I knew quite a lot about special functions, generalized hypergeometric functions, but this young student said: well, David, yes, I know you know this, but it's all miscellaneous knowledge. You got it from Abramowitz and Stegun, from the Bateman Manuscript Project and the Russian book of Prudnikov, Brychkov and Marichev, and there was a manuscript of a very rare list of results in the Mainz library. He said: but what you really need to do is go off and read this book, David, and he gave me Carlson's book, which was a systematic approach to generalized hypergeometric functions; I think Andrei Davydychev mentioned this. So that was quite interesting. I mean, when it actually came to doing calculations and so on, I had things to offer, but here was a graduate student telling me: go off and read this book, and I think you'll find it useful. And finally, I mention this: when he was looking at form factors with different masses, in certain situations, when some of the masses go to zero or special kinematic configurations are reached, there are singularities, and he had to start thinking about how to regulate those singularities; I'm not talking about renormalization at this stage. And so he asked himself: now, how would it be best to regularize? And he said: oh, David, I'm going to use Hadamard's finite part. Essentially, what that means is that if the lower limit of an integral goes to zero and things behave badly, you call that lower limit epsilon and you throw away not only all powers of one over epsilon but also the logarithms that multiply those powers. And it happened recently that Francis Brown was proposing a way to calculate the quasi-periods of modular forms, and he wrote down an integral that was completely undefined; it was badly divergent in the way that he wrote it. And I asked: did you use Hadamard's finite part? And I couldn't get an answer. So Erik and I looked to see how it would be regulated, and I said: well, it must be Hadamard's finite part. And then Erik said: what's that? And I said: well, go and read your supervisor's thesis, it's quite interesting. And he did the calculation and said: yes, it is regulated by what I now know is Hadamard's finite part. But what I realized, said Erik, is that HyperInt, this program you've heard so much about, uses precisely that prescription. So I think you can see a picture there, can't you? Independent mind, clear sight, and all done in a calm way. Dirk is a very good listener: he listens more than he speaks, and he's good at telling you what he thinks he's clear about and what he's still struggling to understand. Right, so let's go to Tasmania. What Dirk thought was that the momentum flow in Feynman diagrams might be related to knots. Now, this is a rather strange thing, because a knot is a one-dimensional structure in three dimensions. But it made some sort of sense to me, because I could see that there was a big difference between planar and non-planar diagrams, and that's something that's not immediately apparent if you're thinking in four-dimensional space-time.
And he told me he could maybe work out which diagrams involved Riemann's zeta(3) and which didn't, because he could look at the momentum flow and see the trefoil knot, which he wanted to associate with Riemann's zeta(3). Now, what he didn't know was that I had done calculations back in 1985 of counterterms, in a paper on scalar Feynman diagrams at five loops and beyond, in which I had identified counterterms involving zeta(3), zeta(5), zeta(7), but also some numbers I couldn't identify, which I strongly suspected were double sums. So what we were able to do was to test his intuition, the rules that he was developing, against case law which I already had, where it behaved very well, but also to use those rules to work out what analytic expressions to guess, using my numerical data. So now what I want to do is to tell you what he did. I'll read this carefully and then we'll go, if we can, to look at the figures. Dirk decorated the braids of positive knots and obtained Feynman diagrams with trivalent vertices; he shrank enough edges to obtain subdivergence-free counterterms, which I was able to evaluate. So let's see: if I go to new share... what I have to do is to get out of this loop, because otherwise the figures disappear; I have to do quite a complicated thing, I have to go down here and find this. So can you now see, is that working? Can you now see some pictures? Hello, can someone tell me? Yes, you can. Very good, very good. Right, so let me go to the first picture. This is a knot. It's one little piece of string, it has three crossings, and you can't undo those crossings. It's the unique knot with three crossings, it's called the trefoil knot, and I've written it down here as a braid: this is just two strands running parallel, and at the end of the day I'm going to join up these ends. If you take more complicated braids, you can turn them into knots; they might actually turn into links. And what I wanted to do was to relate this to zeta(3), at three loops in quantum field theory. So he decorated the braid with chords here, and he had a prescription, which he explained to me, for turning it into this diagram. Now this diagram, we already knew, is logarithmically divergent in four dimensions; it has no subdivergences, so there is a unique period associated with it (we didn't use the word period in those days), and it has the value six times Riemann's zeta(3). In phi^4 theory you would have extra vertices and extra external edges attached here. And to communicate between Tasmania and the Open University, all he needed to tell me was the braid word here, and I asked him for the angular diagram: if we say this is the origin in coordinate space, which I used almost exclusively (Oliver Schnetz, I think, inherited a taste for coordinate space from these old calculations), then we just remove this, and this is the angular diagram. You complete the diagram just by connecting the origin to these points, and this is the wheel with three spokes, which gives six times Riemann's zeta(3). So now let's go to the next one. What about Riemann's zeta(5)? There is also a counterterm in phi^4 that comes from the wheel with four spokes: you just attach external edges to the four points on the circumference of that wheel. But in general you are not going to find the wheel with five spokes in phi^4 theory.
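For reference, the periods of these wheel graphs follow a simple closed pattern, a standard result quoted here in the normalization in which the wheel with three spokes gives 6 zeta(3); the symbol W_n is my shorthand for the wheel with n spokes:

```latex
W_n = \binom{2n-2}{\,n-1\,}\,\zeta(2n-3), \qquad
W_3 = 6\,\zeta(3), \quad W_4 = 20\,\zeta(5), \quad W_5 = 70\,\zeta(7).
```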
You do find zigzag diagrams, which you've heard about, for which Dirk and I had a conjecture for all loops, later proven by Francis Brown and Oliver Schnetz. But how did he give me something starting with his knot? Well, here he is still dealing just with a two-braid, but with more crossings; he ended up with this trivalent diagram, and to give me something that I could relate to a counterterm he had to shrink a propagator, turning this into a four-point vertex, to end up with the wheel with four spokes. Now, this is all plain sailing. But it gets really interesting when we look at three-braids, where the braid group has two generators: this is saying sigma-one is happening here and sigma-two is happening here, then sigma-two again and sigma-one, and they are all positive knots, which means that these are all over-crossings. The number of knots expands as prolifically as does the structure of quantum field theory, but there is a very restricted number of positive knots. And here is an example where, using his prescription, he ended up with something which I suspected to involve what we now call multiple zeta values; I called them double sums. And we were able to establish a dictionary between the knot whose braid word is here and, when there were only four of these blobs, a counterterm in quantum field theory that involves the multiple zeta value zeta(5,3). But then he was actually able to take four-braids and turn them into diagrams here, for which I was able to obtain analytic answers, so I could find triple sums, multiple zeta values of depth three, associated with the four-braid. Okay, so now let me get out of there; I need to remove myself, go down here, go to new share, and I should be back in my talk. So are we back in business, Karen? Yes, it's not full screen though. Yeah, you'd like full screen, wouldn't you? Well, you might. So let's get rid of the picture of me, and now I'm on the slides. So: we associated families of positive knots with combinations of multiple zeta values, and our dictionary between knots and numbers exploited something which I discovered, which I call the push-down of multiple zeta values to alternating sums of lesser depth; we'll see an example of that later. In other words, things which are quadruple sums of weight 12 as multiple zeta values can be expressed in terms of alternating sums which are only of depth two, and it was these depth-two things that were coming out of the dictionary between knots and numbers in situations where with multiple zeta values we would need quadruple sums. So this is the infamous Broadhurst-Kreimer conjecture for the number of primitive multiple zeta values of weight n and depth k. It is a generating function in two variables. If I put y equal to one, it means I don't care about k, I just count all of the multiple zeta values of weight n, and then I end up with something that's absolutely intuitive: this tells me that zeta(3) exists and this tells me that pi squared exists. So the whole of the enumeration of multiple zeta values is produced by this, and this is proven at the motivic level; it's a conjecture of Don Zagier that's proven at the motivic level, and Francis Brown has an explicit basis in terms of multiple zeta values whose arguments are twos and threes. But this term came out of the observation that at weight 12 there was some mixing between depth two and depth four.
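The two-variable generating function being described is, as published (quoted here from memory, so it should be checked against the original paper; D_{n,k} is my notation for the number of weight-n, depth-k primitives mentioned above):

```latex
\prod_{n\ge 3}\ \prod_{k\ge 1}\ \bigl(1 - x^{n} y^{k}\bigr)^{D_{n,k}}
\;=\;
1 \;-\; \frac{x^{3}y}{1-x^{2}} \;+\; \frac{x^{12}\,y^{2}\,\bigl(1-y^{2}\bigr)}{(1-x^{4})(1-x^{6})}.
```

Setting y = 1 kills the last term and leaves 1 - x^3/(1-x^2): one primitive in each odd weight 3, 5, 7, and so on, together with the powers of pi squared, which is the "zeta(3) exists, pi squared exists" statement above. The last term's x^12/((1-x^4)(1-x^6)) is the series that counts cusp forms of the modular group, the surprise described in the next passage.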
And it was just the beginning of something that continues to all weights, and the way it continues in this generating function turns into wonderful numbers, verified against the multiple zeta value data at higher weights. Actually, it is the same function as enumerates the cusp forms of the full modular group, of which we had no inkling; that was never in our heads when we did this. It was made just to fit empirical observations of multiple zeta values. Now, our results included all the primitive contributions to the beta function of phi^4; primitive means free of subdivergences, and we didn't sit down and do all the renormalization of the subdivergences until recently, as you heard. But we knew that the number content could not be bigger than this: you always see the new numbers in the primitives. And we were keen to identify the numbers that could occur at seven loops. Now here is where you see that this is not even wrong, because it is just a game that Dirk invented when I say I'm associating them: I'm doing it because he has told me that he has got from the braid word for this positive knot to some diagram, and I say I can find the multiple zeta values. But what we were hoping, and it turned out to be the case, was that he could give me clues as to what combinations of multiple zeta values to try in fitting the diagrams at seven loops in phi^4 theory, based upon our previous experience. And that is much more difficult. I can't see what I've written at the bottom of the slide, but I'm sure you can. What he had to do was to open the four-valent vertices of phi^4 theory, and you can do that in three different ways, s, t and u channels. So he had many possibilities for routing the momenta, and he had to turn these into link diagrams, scan those link diagrams to produce knots, and identify the knots. But it worked. It worked well enough that when he was sure about something, it held up; and there were times when we ended up not being able to use this to identify numbers, but it turned out that there was a good reason for that. So what is the association? You can find this in our papers, but it is very nicely summarized in the textbook. Well, I've already told you that zeta(3) is associated with this. It's a two-braid, which means I only need one generator of the braid group to tell me whether I've crossed over the next strand, and if I raise it to the power 2k+1 it is associated with zeta(2k+1). But now look here: this is the (3,4) torus knot. It's a three-braid, so I need two generators, sigma-one and sigma-two, and I raise their product to the power four, so it is going to be an eight-crossing knot. And, wonderful to relate, straight off Dirk, not knowing that I had a number waiting, associated zeta(5,3) with it. This double sum; but something much more specific, because zeta(5,3) can occur in combinations with zeta(5) times zeta(3) and with zeta(8). And what we discovered was that these alternating sums, where you now allow a sign, are what appear: zeta(5,3) would just be the sum over m greater than n greater than zero of one upon m to the five times n to the three; if you put a minus sign in here, which we indicate by a bar, and subtract it off, this is the knot number associated by the counterterms of field theory to the knot called 8_19, the only positive knot with eight crossings, which is the so-called (3,4) torus knot.
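As a concrete check of the double sums just described, here is a brute-force truncation. The function name, the cutoff M, and the choice of (-1)^m (rather than (-1)^(m+1)) for the "barred" alternating sum are my own choices for illustration, not taken from the talk.

```python
# Truncated double sums for the weight-8 numbers described above:
#   zeta(5,3)    = sum over m > n > 0 of   1 / (m^5 * n^3)
#   zeta(5bar,3) = sum over m > n > 0 of  (-1)^m / (m^5 * n^3)
# The bar denotes an alternating sign on the outer summation variable
# (sign convention assumed here; conventions differ in the literature).

def double_sum(M, alternating=False):
    """Outer sum truncated at m = M; the inner sum is kept as a running total."""
    total = 0.0
    inner = 0.0  # running value of sum_{0 < n < m} 1/n^3
    for m in range(2, M + 1):
        inner += 1.0 / (m - 1) ** 3
        sign = -1.0 if (alternating and m % 2 == 1) else 1.0
        total += sign * inner / m ** 5
    return total

if __name__ == "__main__":
    M = 5000  # convergence is fast at weight 8
    z53 = double_sum(M)
    z5bar3 = double_sum(M, alternating=True)
    print("zeta(5,3)     ~", z53)   # should come out near 0.0377
    print("zeta(5bar,3)  ~", z5bar3)
    print("difference    ~", z53 - z5bar3)
```

At weight eight the outer sum converges quickly, so even a modest cutoff reproduces the first several digits.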
And now, at seven loops, I could draw seven-loop diagrams, not phi^4 diagrams, which he could arrive at from the (3,5) torus knot. And we asked ourselves what happens here. You see, I'm at weight 10; I haven't yet hit the place where multiple zeta values become mysterious. Zeta(2k+5,3) occurs at k+6 loops. But at eight loops, or maybe beyond, but first at eight loops in the diagrams that I was doing (I think Oliver finds he has to go to higher loops in phi^4 to find this number), we encounter N_{9,3} but also this number N_{7,5}, with this very precise term in pi. And at nine loops and beyond (again, we weren't necessarily looking at phi^4 diagrams) we find, at weight 14... oh, by the way, this number here is not expressible in terms of multiple zeta values of depth two; you have to go up to depth four. And at nine loops we encountered truly depth-four multiple zeta values that don't push down, and we found these very precise numbers down here, combinations of pi and these very nice things. Can you see: God really loves the odd integers, she hates pi squared; and zeta(5,3) and zeta(5,3,3,3). That's seven loops: Dirk emphatically identified the occurrence of this four-braid, eleven-crossing positive knot. There are only two positive knots with eleven crossings: one of them is the (2,11) torus knot, just sigma-one to the power eleven, and the only other positive knot (there are a thousand or so knots to look at) has this braid word, and for this he always got diagrams which I could evaluate to involve triple sums in this precise combination: zeta(3,5,3) minus zeta(3) times zeta(5,3). Of course you could add any multiple of zeta(11), because there is another knot that corresponds to that, but we were able to find the precise combinations of triple sums associated with a family of four-braids (now I'm talking about depth three) which gave diagrams that I could evaluate. So it was a really exciting time. So now we're interested: you know how fast the positive knots grow and how fast the multiple zeta values grow, and can we find families of knots that might be associated with multiple zeta values indefinitely? We identified five families: here you see a two-braid, here you see a three-braid, and here you see families of four-braids. And you know you can write down all sorts of braid words that correspond to the same knot, related by Reidemeister moves that turn one into the other.
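In the same nested-sum convention spelled out above for zeta(5,3), with the outer index written first, the depth-three object in that combination would read as follows; I should stress that this ordering convention is my assumption, since different papers order the arguments differently:

```latex
\zeta(3,5,3) = \sum_{l>m>n>0} \frac{1}{l^{3}\, m^{5}\, n^{3}},
\qquad\text{entering as}\qquad
\zeta(3,5,3) - \zeta(3)\,\zeta(5,3).
```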
Now, one way of trying to work out which knot you've got from some braid presentation (which Dirk likes because it gives me counterterms that I can calculate) is to look at a polynomial associated with the knot, and the best one on the market is called the HOMFLY polynomial; these are the initial letters of the six authors, whose names are always forgotten because the acronym is much more memorable, and it depends upon two variables. And we were able to do something quite remarkable, namely, for these knots with these positive braids, to define an expression for all crossing numbers of these families in terms of these two parameters; you can see they are quite ornate formulas. This is extremely useful, you see, because I can then investigate relationships between my counterterms and knots; I don't necessarily have to have them in these presentations. If Dirk comes up with some different braid word, all I do is calculate the HOMFLY polynomial of that braid word and look it up in my table. So what about the multiplicity? And again I can't see the bottom of my slide, which contains the really important information, so I'm going to cheat a little bit so at least I can see it. Now, knots grow enormously, so we said we're interested in positive knots, and we went to the best knot theorists in the world and asked them how many positive knots there are, and they said: that's an impossibly difficult question for us, we can't even tell you that there are only two at eleven crossings. And I said: well, I worked that out in five minutes of CPU time on my 25-year-old laptop; but what can you tell us here? And they said: no, too difficult. So, since we were interested, we worked it out ourselves. We know that the HOMFLY polynomial is not faithful, it doesn't always distinguish knots: if two knots have different HOMFLY polynomials they are different, and if they have the same one they are likely to be the same, but there are counterexamples. We assumed that, as positive knots are really rather scarce in the great panoply of knots, the HOMFLY polynomial was faithful for those. And so I was able to generate a quarter of a million braid words, work out their HOMFLY polynomials, and work out how many knots there were, and eventually the knot theorists, I think ten years later, verified this calculation that I did in one afternoon in Tasmania. We put this in the On-Line Encyclopedia of Integer Sequences, but I still haven't seen whether anyone has validated this number; I'd be surprised if it was wrong. Now, what you can see down here is that these are increasing much faster than the multiple zeta values. But this is what is so remarkable: things started off in parallel. You see, Dirk didn't know this, he only knew about zeta(3), zeta(5) and zeta(7); he didn't know that the first irreducible multiple zeta value, zeta(5,3), occurs at weight 8, and I didn't know that there was a unique positive knot with eight crossings. Here at nine crossings there is just zeta(9); here we're picking up things with these two positive knots, giving zeta(11) and zeta(3,5,3); but at ten crossings we first found a knot to which I couldn't associate a counterterm that gave multiple zeta values.
So we imagined that only a subset of knots could be associated with these multiple zeta values, and actually these families work very well for the associations. When I get up to weight 17... experts might think that at weight 15 I would need a five-braid, but in fact, because of push-down, I have to go to weight 17. Then here we were missing a knot, but we reckoned that eventually someone could find a knot associated with these two extra multiple zeta values. But here you can see why we were led to believe that quantum field theory would outgrow multiple zeta values; maybe it was even outgrowing them here, at seven loops in phi^4 theory. So what really happens at seven loops? Well, we found that there were two positive knots with ten crossings (Dirk can remember their numbers, 10_139 and 10_152 I believe, I can't remember them now) to which we couldn't associate multiple zeta values, and there were three counterterms at seven loops which we couldn't identify. And so we concluded, or at least this led us to believe, that multiple zeta values would not suffice for phi^4 counterterms. And I'm saying this in case Francis Brown is listening, because Francis is under the impression that we thought we would always get multiple zeta values as Feynman periods, and that that was the origin of the Kontsevich conjecture on the zeros of the Symanzik polynomials counted over finite fields. But in fact, here, Francis, in the published paper is what we did say: positive knots, and hence the transcendentals associated to them by field theory, are richer in structure than MZVs. (Thank you, David, I am listening.) Thank you. Now, the great thing is, and I said this to Maxim, it's wonderful that Maxim didn't understand what our intuition was, because he made this very strong conjecture just on whether or not the number of zeros of the Symanzik polynomial over finite fields is a polynomial in q. And the combinatorialists found that this worked up to 12 edges for every graph (I'm not only talking about phi^4 diagrams; every graph in the world satisfied this Kontsevich conjecture), and they were a bit miffed about that. But in fact Maxim had already told me: I think something goes wrong with the Fano matroid. And later Belkale and Brosnan gave a non-constructive disproof of the Kontsevich conjecture, non-constructive in the following sense: they said if Maxim was right for all graphs, then he would be right for all matroids; but we know that he is wrong for a matroid, so he must be wrong for some graph. That's completely non-constructive; it's a wonderful piece of mathematics. The thing was, we were already precisely at seven loops, at 14 edges, with unidentified things. So what did we find? Did we find some things that weren't multiple zeta values? Well, eventually, and it took me quite a long time and an increase of computing power, I was able to identify two of these; they are called, in Oliver Schnetz's tables, the seven-loop periods P_{7,8} and P_{7,9} in the table down there. And they involve this weight-11 combination that we had found (notice, weight 11) for the counterterms associated with these ten-crossing knots, and it was related to multiple zeta values, but there was a new number: this new number, zeta(5,3) minus 29 zeta(8), is not the combination that we had seen before, and here it gets multiplied by zeta(3). So we can claim no credit at all for a relationship with knot theory here: we simply have a new type of combination of numbers occurring, in terms of multiple zeta values. And the wonderful thing
here is that these two reductions to multiple zeta values surprised, I think, Francis, because what he had predicted was that alternating sums would suffice — well, he actually says that for all of the seven-loop diagrams, multiple polylogarithms at roots of unity of order up to six would suffice, and that's exactly what happens — but for these, the diagnostic given by something called the c2 invariant was that they should really involve alternating sums, and I had obtained, some time around 2010 I think, these expressions here, which were not alternating sums. Now, you've heard of the enormous increase in our ability to calculate that has come from Erik Panzer's HyperInt and Oliver Schnetz's graphical functions. These are quite difficult graphs: Erik was able to do one but not the other, and vice versa for Oliver, and both of them obtained, in their respective domains, combinations of alternating sums, very much as Francis had led them to expect. But then they had the multiple zeta value data mine, which Johannes Blümlein, Jos Vermaseren and I had developed, which also includes alternating sums, and using that they were able to prove the reductions that I had found empirically. The remaining seven-loop counterterm, number 11 in the census of seven-loop periods, was predicted by Francis to reduce to polylogarithms at sixth roots of unity, and that was done by Erik in the most amazing feat of analysis. And here I comment on something that was mentioned by Oliver Schnetz, namely that at eight loops polylogarithms of all types fail to deliver all of the counterterms: there's a period at eight loops in phi^4 theory whose obstruction to polylogarithmic reduction comes from a singular K3 surface. Wow, you know. But it's associated with a cusp form of modular weight three — the most beautiful cusp form you can think of. It involves complex multiplication by the square root of minus seven; it's just the Dedekind eta of z times the Dedekind eta of 7z, all to the power three, and it's related to the symmetric square of an elliptic curve of conductor 49. So here now is my very subjective summary: Dirk's intuition — based on our explorations of relationships between knots and numbers — that multiple zeta values would not suffice at seven loops was borne out by later analysis, though the really non-polylogarithmic action is here. So now I think I have, according to my recollection, 10 minutes left for the four-term relation. Now, this one is either right or wrong, so let me say what it is. I ask you to imagine that you have a graph in which you can draw a Hamiltonian circuit — that's a circle that passes through all of the vertices once and once only. There are certain graphs where you can't do that, but in the majority of graphs you can. So now just take three little bits of that circle down here, these three little arcs, and find a chord that connects the top two arcs; the four terms we will get are obtained by connecting the arc at the bottom to here, here, here and here. Now, what about the rest of the graph? Well, the four-term relation doesn't care about it: it says that whatever else is happening — however these are joined up in the Hamiltonian circuit, whatever else happens at other points of the Hamiltonian circuit — as long as those are the same, then there could be a four-term relation. And what Dirk wondered was whether this was the case for the counterterms of quantum field theory; and where we ended up is that Dirk has a published paper in which he asserts that he has a proof, or a very strong argument,
that there is a four-term relation if the following five conditions are met. First of all, each of the terms should be free of subdivergences; then it has a unique number associated with it when we nullify the external momenta, put all the masses equal to zero, and find the coefficient of overall logarithmic divergence. Second, they should differ only by the subgraphs shown — that's just saying that I've just defined what this means. But now there are three extra conditions, and I'll explain where these came from: they came, in the first instance, from experiment and not from pure thought. They should have trivial vertices — in other words, just constant vertices, like in phi^4 theory or phi^3 theory or Yukawa theory, but not vectorial couplings, not like the gamma^mu's that you get when fermions couple to photons, the gauge bosons. They should have no propagator with spin greater than one half — so you can't have vectorial couplings and you can't have internal vector bosons. And each of these four terms should modify one of the dimensionless couplings of a renormalizable field theory. Now, in the paper which I wrote with Dirk, on Dirk's instructions, we said that the necessity of this set of provisos is not established, but the claim is that they're sufficient to derive a four-term relation for the counterterms — the coefficients of overall logarithmic divergence of each of these four diagrams, which I've labelled in cyclic order. And the counterterms are easily calculated: we nullify the external momenta, and then, having thrown away the internal masses, we cut the diagram wherever we please to get a finite two-point function, which we evaluate to the best of our ability. There's no problem with the R* operation; there are no infrared problems — those are excluded by the provisos. Okay, so what was I able to do to test this? Well, here's the full test: it's in Yukawa plus phi^4 theory. This double line here is a fermion, here is the Yukawa coupling, there's a loop for the scalar particle, and this X is the coupling to some external scalar particle. And what I've done down here — Yukawa theory, by the way, is not renormalizable by itself: it generates a phi^4 coupling that you have to renormalize, so you can only have Yukawa theory, as in the Standard Model, in conjunction with phi^4 theory — is to imagine I have a phi^4 vertex here, and I've connected three of its valences to the fermion loop here; and the four-term relation that Dirk is predicting occurs when I connect the fourth valence of this phi^4 vertex either here, or here, or here, or here. Now, we were able to do this because we nullify everything. For each of the four terms I've written them out explicitly down here, and of course I cut it here, because then it's very easy: when I cut it here, I've just attached this one-loop diagram that I can do, and I'm really only left with three loops to do. Here I chose a cut down here, and you can write down explicit formulas down here — they come from the Feynman rules — but here is an integration measure that I defined for an arbitrary number of D-dimensional integrations over internal momenta. To make the numbers nice I work in what's called the G-scheme, where you take out the gamma factors that come from simple one-loop diagrams. And here now are the precise four numbers that Dirk is talking about — he's interested in the limit epsilon goes to zero down here — and they just involve
the two- and three-loop measures here — the two-loop ones with this combination of momenta. And you don't need a fancy computer program to do this; you can more or less do these two-loop integrals in your head, because you've got my result for the wheel with n spokes down here, and all you have to work out is how that result gets modified — you know, if I have d mu_2 giving me zeta(3), how it gets modified here and here by scalar products — and all that does is multiply by something which goes to a constant as epsilon goes to zero. So that was two of the diagrams, the easy ones, these ones here. The other ones really involve some genuine three-loop integrals. I had written my own version of what was then called Mincer, and I had it in REDUCE, but we didn't need that full machinery: we were able to use integration by parts to turn a three-loop integral into a two-loop integral, at the price of introducing fractional powers of momenta in there. And that's where the work with John Gracey comes in — the work with John was developed afterwards, but the discussions I had had with John Gracey before were very useful — because that final step was accomplished by developing epsilon expansions of Saalschützian 3F2 series to obtain the two difficult terms, and Dirk's four-term relation was satisfied. So now I've only got two more slides to go, I think. If we replace the Yukawa couplings by gamma^mu — this was my discovery; this was all done in the morning, great excitement among Bob Delbourgo and Peter Jarvis and Johannes when Dirk's four-term relation had been verified by David in the morning — but in the afternoon I just put in vector vertices and it completely fails. So the restriction on vector vertices was post hoc, but Dirk explains in his single-author paper how it is sufficient for him. But then at five loops I could immediately — well, not immediately, but with present technology — go further, because I'd already calculated the five-loop anomalous field dimension, indeed the five-loop renormalized propagator of phi^4 theory. So here now I'm looking at a non-renormalizable theory that involves a phi^5 interaction, but I still have a subdivergence-free counterterm for the coupling of two fermions to two scalars, and the four-term relation here is set up just the same as before. I have considerably more difficult integrals to do, but the important thing is that you can see the four-term relation fails spectacularly. So that's the origin of the proviso in Dirk's paper that we have renormalizable things. So now my final slide — I think more or less on time — which is the next slide, is for modern quantum computation; I like this phrase from Oliver's talk, quantum computation. So first of all I've got a question for Dirk, because this is the type of question that I always used to ask him as soon as I had an idea. Last night I drew this diagram, Dirk, and I hope that it's free of subdivergences, but I know that you will be able to cast your mind over it very quickly and tell me if I'm wrong. What I've done down here: I want to work in Yukawa plus phi^4 theory, and so here I've written a two-loop correction to the fermion propagator, but I made jolly well sure to put my Yukawa coupling in the middle, so that each of these loops is already convergent and I've only got an overall logarithmic divergence. And then for my phi^4 vertex — to make this nice Hamiltonian circuit — I've coupled here, and the four-term relation, which according
to Dirk's paper must hold, because it satisfies all of the provisos, will be obtained by connecting to these four points. So, providing I haven't made a mistake — this is one of many; I mean, the constructions become quite rich down here — here's a question for Erik and Michi and Oliver. Maybe for Erik, because — I don't know if you know — we should celebrate the fact that he has his second five-year fellowship in Oxford, so he'll be in Oxford for 10 years now. I've been telling him he should be starting to supervise some graduate students, so here's a nice little master's exercise for someone who wants to do some applied calculations: down here, can you disprove Dirk Kreimer's claim? — given that this is free of subdivergences, it's a renormalizable quantum field theory, it doesn't involve vector particles, and it doesn't involve gamma^mu's. And then for Michi and Oliver: what about phi^3 in six dimensions? Because that was what Dirk wanted me to work with — all of his knots related to trivalent vertices. So here's a little exercise: was Dirk right or wrong? And I come to my summary. It's a very short and a very heartfelt summary; I'm going to read it, to stop myself cracking up. Dirk is a skillful analyst, an inspiring combinatorialist and a deeply influential algebraic thinker. But more than all of that, he combines these gifts with a quiet self-confidence and a great concern for colleagues. This has enriched my life, and many others'. So thank you, Dirk. [Thanks to David. Thank you to David, yes.]
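[Editor's aside, not part of the talk: the discussion above turns on specific multiple zeta values — zeta(3), zeta(5,3) at weight 8, zeta(3,5,3) at weight 11. The minimal Python sketch below evaluates them by truncating their defining nested sums, using the convention in which the first argument sits on the outermost summation variable (conventions differ). The truncation points are arbitrary choices made here and give only a few digits.]

```python
# Truncated nested sums for some of the multiple zeta values mentioned above.
# zeta(5,3)   = sum over m > n >= 1     of 1/(m^5 * n^3)
# zeta(3,5,3) = sum over m > n > k >= 1 of 1/(m^3 * n^5 * k^3)
from mpmath import mp, zeta

mp.dps = 15

def zeta_5_3(M=400):
    return sum(sum(1.0 / n**3 for n in range(1, m)) / m**5
               for m in range(2, M + 1))

def zeta_3_5_3(M=120):
    total = 0.0
    for m in range(3, M + 1):
        for n in range(2, m):
            total += sum(1.0 / k**3 for k in range(1, n)) / (m**3 * n**5)
    return total

print("zeta(3)     ~", float(zeta(3)))      # 1.2020569...
print("zeta(5,3)   ~", zeta_5_3())          # truncated double sum
print("zeta(3,5,3) ~", zeta_3_5_3())        # truncated triple sum
```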
I report on two adventures with Dirk Kreimer in Tasmania, 25 years ago. One of these, concerning knots, is not even wrong. The other, concerning a conjectural 4-term relation, is either wrong or right. I suggest that younger colleagues have powerful tools that might be brought to bear on this 4-term conjecture.
10.5446/51263 (DOI)
Thank you very much, Eric. Of course, it's a great pleasure to speak at this conference. I'm very sad that we couldn't meet in person. It's been a while since I've seen Dirk and I think it would have been tremendous fun. I think it would have been fun because we could have done some of this. I wonder if you can see. Can you see my screen? Nope, you can't. That's very strange. Okay, let me find a different way. We could have done some of this. At the risk of embarrassing Dirk, this is a photo he sent me of a New Year's greetings in 2011 of a typical crime of family gathering. There's quite an extraordinary collection of bottles there. It's well worth close inspection. Okay, so now back to mathematics. I should say that I was in fact slightly relieved when the conference was delayed. I'm sorry to say because this project wasn't finished and I thought that when the time came I'd have made some progress. Of course, nearly a year has passed and I haven't worked on this for a single minute. This isn't so much. It touches on pretty much every single aspect of it. It touches on work of Dirk's. Dirk has done it over the previous few decades. With that said, let's begin. I'm going to talk about graph homology. We're going to take G, a connected graph, and I'm going to denote the loop number, also known as the genus of the graph, by HG, the standard. The number of edges is EG. The degree here is something slightly funny. It's the edges minus twice the loops. That will be the degree. This will be familiar to you as minus the superficial degree of divergence. It's like a superficial degree of convergence. Next an orientation on a graph will be essentially plus or minus a wedge product of symbols corresponding to the edges in the graph. What that means, it's just an ordering on the edges modulo the action of even permutations. If you flip two edges, you get the opposite orientation. That's an oriented graph. It's not oriented in the usual sense when you put arrows on the edges. It's an orientation on the ordering of the edges. The graph complex is defined by taking the Q vector space spanned by oriented graphs G, E, sorry, G, Eta. We assume that G has no tadpoles, so no self edges like this, and it has no vertices of degree less than or equal to two. Then you impose some very simple relations that a graph with a negative orientation is the negative of the graph with that orientation. That's fairly natural. You also impose a relation that the G of Eta is equivalent to the same graph where you permute the edges according to an automorphism of the graph. If a graph has an automorphism, it's equivalent to the new graph with the new orientation. In particular, if a graph has an automorphism which induces an odd permutation of the edges, then its class is zero. In equivalence classes, it will be denoted in square brackets. Then on this vector space of graphs, oriented graphs, you can define the following differential. This was done by Maxime many years ago. The differential of an oriented graph is a sum of all the edges in the graph. What you do is you contract each edge in the graph and have the induced orientation with the appropriate sign. This contraction, I use this double slash notation. I always use this to mean the contraction where if you contract a tadpole, if you contract a loop, that's zero. That's the empty graph. In case you're not allowed to contract tadpoles. This differential is well-defined and it squares to zero, so it defines a complex. 
Furthermore, you can check that it has degree minus one with respect to the degree — I remind you that the degree is edges minus twice the number of loops. Great. Here are some examples, which I'm sure this audience will find extremely straightforward. Here's the differential at the top. The first remark is that any graph that has a doubled edge is zero in this graph complex GC2. That's because if you take a doubled edge here, with edges one and two, with the orientation e1 wedge e2 wedge all the other edges, then you can flip the edges one and two. That gives you an isomorphic graph, but it reverses the orientation. Therefore it's minus itself, and therefore it must vanish. Any graph with a doubled edge vanishes. By the way, from now on I'm going to drop the orientations — it's boring to keep writing them — they'll just be implicit from now on. From the previous remark, we get that any graph with the property that every edge lies in a triangle automatically has the property that its differential is zero. That's because if you contract an edge in a triangle, you get a doubled edge, and as we've just seen, a doubled edge is zero. So graphs built out of triangles are going to have zero differential in the graph complex. Our favourite example is therefore the wheels, the wheels with n spokes, and they satisfy, of course, d of the class of the wheel — with any orientation, it doesn't matter — equals zero. I'm just hammering this home; you don't really need it. Here's another example of a differential in the graph complex. We're going to contract all the edges in this graph here on the left. There are two triangles; contracting them gives zero, as we've already seen. The only edges that will do something when you contract them are the red ones. When you contract the top one, you get this graph here. When you contract the middle one, you get a wheel with four spokes. When you contract the bottom one, you get this graph here. The first and third graphs cancel out and you're left with a wheel with four spokes. So the wheel with four spokes is exact. In fact, more is true: the wheels with even numbers of spokes are always zero, because they have a symmetry that's odd, and that forces them to be zero in the graph complex. Only the odd wheels survive. Graph homology, H_n(GC2), is the kernel of this differential modulo the image. The homological degree is this degree, edges minus twice the number of loops, and it's also graded by the genus, the loop number. So we get a bunch of homology groups, and they are in turn graded. What do we know? It's known that the homology vanishes in negative degrees for positive genus. It has a lot of extra structure. The first bit of extra structure — and here we see the first interaction with the work of Dirk and Alain — is that graph homology has a Lie coalgebra structure, induced by antisymmetrizing the Connes-Kreimer coproduct. I'm sure everybody here knows this, has seen this before: you take the sum over a certain class of subgraphs, typically one-particle irreducible, and on the left of the tensor you have the subgraph and on the right you have the quotient graph. I apologize, there's a typo here which I can't change: this should be a single slash and not a double slash; I'm sorry, that formula is wrong. The single slash means you just contract the subgraph gamma even if it contains a loop — it's not zero. I'm sorry I've used the wrong notation; that should be a single slash.
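[Editor's aside, an illustration rather than anything from the talk: the sketch below encodes a graph as a plain edge list (an ad-hoc choice) and contracts each edge of a wheel, checking the assertion above — in a wheel every edge lies in a triangle, so every contraction produces a doubled edge, the corresponding term vanishes in GC2, and the wheel is a cocycle.]

```python
# A graph is a list of undirected edges (pairs of vertex labels).
from collections import Counter

def wheel(n):
    """Wheel with n spokes: hub 0, rim vertices 1..n."""
    rim = [(i, i % n + 1) for i in range(1, n + 1)]
    spokes = [(0, i) for i in range(1, n + 1)]
    return rim + spokes

def contract(edges, i):
    """Contract edge i; return None if it is a self-loop (contraction gives zero)."""
    a, b = edges[i]
    if a == b:
        return None
    relabel = lambda v: a if v == b else v
    return [tuple(sorted((relabel(u), relabel(v))))
            for j, (u, v) in enumerate(edges) if j != i]

def has_double_edge(edges):
    return any(c > 1 for c in Counter(edges).values())

for n in (3, 5):
    W = wheel(n)
    verdicts = []
    for i in range(len(W)):
        C = contract(W, i)
        verdicts.append("tadpole" if C is None
                        else "doubled edge" if has_double_edge(C)
                        else "survives")
    # Expected: every contraction of a wheel yields a doubled edge, hence is
    # zero in GC2, so the differential of the wheel vanishes term by term.
    print(f"W_{n}: contractions ->", Counter(verdicts))
```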
Then Willwacher showed in 2014 a fantastic result: the zero-degree homology of the graph complex is dual to the Grothendieck-Teichmüller Lie algebra, which is something that's explicitly defined but quite tricky to understand. We don't know much about the Grothendieck-Teichmüller Lie algebra at all. What we do know is something predicted by Deligne. This used to be called the Deligne-Ihara conjecture, but I was recently told by Ihara that it should be called the Deligne conjecture. It states that there's a free Lie algebra with infinitely many generators, sigma_3, sigma_5, sigma_7, in degrees 3, 5, 7 — every odd degree — which injects into grt. That means that grt has got this huge free Lie algebra inside it, and you can show — I think Willwacher showed — that the sigmas end up being dual to the odd wheels: sigma_3 corresponds to the wheel with 3 spokes, sigma_5 to the wheel with 5 spokes, etc. The first puzzle here is that this group is to do with motivic Galois groups: this free Lie algebra is the Lie algebra of the motivic Galois group. So we can rightfully ask: what on earth have motivic Galois groups got to do with graph homology? That's the first mystery that I'd like to come back to. Here's a picture of graph homology in low degrees, and I'd like to spend a little bit of time discussing it. Up the left axis is the homological degree, H0, H1, H2. Along the right axis is the number of loops: one loop, two loops, three loops, four loops and so on. You can draw this picture in different ways — you can grade by edges, and that's actually probably a better thing to do, as we'll see; there are different ways you can represent this. The first remark is that there's this red diagonal line: immediately below this line are the trivalent graphs, where every vertex has degree 3. Above that — that means on the red line or above — all the graphs have a two-valent vertex, so they're zero in the graph complex. Your intuition is that as you go up in this diagram you have lots and lots of edges and very few loops — in quantum field theory that would be a very convergent graph — and as you go vertically down in this picture you get infrared divergences. Anything above this red line is zero. Now, going across H0 — that's the only degree in which things are really well understood — this is the Grothendieck-Teichmüller Lie algebra. Here, in degree 3, we have the wheel with three spokes, and you can show that gives a non-trivial class. Here we have the wheel with five spokes, seven spokes, nine spokes, eleven spokes. All these yellow classes are the ones that are understood, of sorts. The first really interesting class, here in degree 8, is G3,5: it's something whose coproduct gives a wheel 3 and a wheel 5. It's a complicated linear combination of graphs — you can compute it on a computer — but the reason we know it exists is, unfortunately, because of my theorem. Likewise with this class at 10 loops, and at 11 loops there's a G3,3,5. Unfortunately, there isn't a purely graph-theoretic way to get at these elements. Then it's conjectured that H1 is always zero — this line should always be zero. The next interesting stuff happens in H3: there are some green classes here, and a class in H7. They don't always occur in odd homological degree — there's a class in H6 up here; we'll see that later on. These green classes are kind of mysterious.
Somehow the dual of the Grothendieck-Teichmüller group — many of you will know this — can be thought of as formal multiple zeta values modulo products: symbols satisfying the associator relations, modulo products. Though of course we're just doing combinatorics here; there are no numbers anywhere in this picture just yet. The green classes, on the other hand, have no such interpretation. I want you to hold this in your mind if you can. Another reason why this is very interesting is a recent theorem of Chan, Galatius and Payne from 2018. They showed how to relate the homology of the graph complex to the cohomology of the moduli space of curves, M_g, in genus g. In fact, the graph complex computes the very top weight-graded piece of its cohomology. By Deligne, the cohomology has a mixed Hodge structure, and the graph complex sees the very top piece, which is Poincaré dual to a trivial motive. The key thing here is that in this equality, from an algebraic-geometric point of view, the motives on the left are trivial — they're just vector spaces, just Tate motives plus the data of a weight. There's no grt or anything like that that comes out of this picture. That's a puzzle. The wheel with three spokes corresponds to an interesting class in M_3. So there are two questions that jump out. One is: how do we think of these higher-degree classes in graph homology, the green ones? The other: how do we relate the graph complex to mixed motives? If we could relate it to mixed motives, then we would expect to see Galois groups acting, and we might be lucky and find this Galois group acting, which would explain the appearance of the Grothendieck-Teichmüller group. This theorem here, unfortunately, doesn't do the job, because this weight-graded piece of M_g is a pure motive; it doesn't do that. So today what I'm going to do is define a notion of differential forms on a moduli space of metric graphs — some variant of the Outer space that Karen talked about this morning. I want to do de Rham theory on Outer space, de Rham cohomology. Using that, we'll be able to assign numbers, or motives, to classes in the graph complex. After explaining that, I will explain some conjectures about the meaning of these higher-degree homology classes. Here's a battle plan for how the talk will proceed. We're going to take this graph complex and promote it to a moduli space of metric graphs. A metric graph is a graph where all of its edges have a length, and the contraction of edges has a real, tangible meaning: you're letting the length go to zero. There's a space of all possible lengths on a graph, and that gives a moduli space of metric graphs. You can embed that as the real points of an algebraic variety — just a projective space, in fact. Then, doing some shenanigans, some compactifications — which we first learned from the work of Dirk, joint with Spencer and Hélène, some years ago; this is a variant of that — you can glue all these algebraic varieties together to form a huge infinite-dimensional cosimplicial scheme. On that, you can take the de Rham complex. What does that mean? It means that a differential form will be a collection of infinitely many differential forms, one for every graph. For every graph, you're going to get a differential form, and they are going to have to fit together in a nice way that reflects the way you glue these metric graphs together. So a differential form is an infinite collection of forms that satisfies some compatibilities. The problem is: how do we construct such a thing? Such a differential form — it's not obvious.
The idea here is to imitate the Torelli map in algebraic geometry. There's a map from the space of metric graphs called the tropical Torelli map to a space of symmetric matrices. This came up this morning in Cairns talk. On the space of symmetric matrices, we know how to construct invariant differential forms. This goes back to work in the 1930s, I think. I think Brower, in the book of Elman Vial, he mentions papers of Brower in 1935. It may go back before that. We write down invariant differential forms on the space of symmetric matrices, and then we pull them back to the space of metric graphs, and then we check that they satisfy all the properties we need. That's the battle plan. First, metric graphs. A metric graph is a graph, it'll be a connected graph, plus the data of a length, positive length, to every edge. That length is L sub e. You normalize it so that the total length, the sum over all the edges, is 1. That means if you fix your graph and you allow all the edges to vary, the space of possible metrics is just a simplex. It's just the set L1 up to Ln, positive numbers such that the sum is equal to 1. It's just a hyperplane. The idea is that if you send a length to 0, that's the same thing as, that should topologically correspond to contracting the edge, which is a different graph, of course. That'll be a different simplex associated to a different graph. Of course, when you contract edge, well, I'll come to that in a minute. Here's a picture. Let's take this graph here with a sunrise diagram with two vertices and three edges. It has two loops, genus two. The edges have lengths L1, L2, L3. They are subject to the condition that their sum equals 1. That defines a Euclidean triangle, two simplex, and Euclidean three space. It's the open triangle. It's the interior of this triangle. Now, as one of the lengths goes to 0, that means we travel, I don't know if you can see my mouse, but let's contract this length L2, let that go to 0. We travel down the simplex and end up on the boundary L2 equals 0. The boundary is not in the simplex. It was an open simplex, but this is the boundary in its compactification. Now we get a new simplex in Euclidean two space where L1 plus L3 equals 1. That we identify with the simplex, the space of metric graphs of this type, now with two loops in which the middle edge has been contracted. Now, it's the space of all possible values of L1 and L3. We think of that as being a degenerate version of this graph for which the second edge has zero length. You take all these open simplices and you glue some of them into the boundaries of others. What you get then is this open simplex and the three faces, but you don't get the three corners. The three corners would correspond to contracting a tadpole, contracting a loop, and we're not allowed to do that. In outer space, for example, all these faces fit together in a stratified way, but you don't have the corners. That's very important. I've written here that you can assemble all these simplices to form what I think is called reduced outer space. I'm not 100% sure on the terminology here. There's lots of different variants. A caveat here is, of course, in outer space, you need marked graphs. The markings actually play very little role here, so I'm going to just ignore that. The next stage then, so that's the space of metric graphs. Now, to make it algebraic, it's very easy. We can just identify this simplex as the real points in a projective space. 
The set of LI positive reels that add up to one, we can just view that as the positive real coordinates in simplex inside projective space. Here comes an approximate definition. It's actually no K definition, but it's not going to be what we need, actually. Then an algebraic differential form then on the space of metric graphs, so a form of degree K will be an infinite connection of algebraic differential forms, omega g, for every graph. It's going to be a form on this open simplex. When I say it's algebraic, I actually mean it's a form on this bigger projective space that the simplex sits in. I'm going to allow it to have poles because we're going to need that. What we want is for it to extend, sorry, we want it to be smooth on the space of metric graphs. So it's smooth on this real Euclidean simplex. Then what we want is that we want it to extend to the boundary and in such a way that on the boundary, it agrees with the differential form corresponding to the contracted graphs that sit on the boundary. Finally, we want this to be functorial with respect to automorphisms of the graph. If the graph has an automorphism, then the differential form, so if you have isomorphic graphs, then that differential forms should be isomorphic. Here's a picture. The same simplex as before. If we want to, what will a differential form look like? Well, it'll be a differential k form here on the interior of the simplex. That form is indexed by the sunrise graph. It's a form in the three variables L1, L2, L3 corresponding to the three edge lengths. On each face, we have differential forms for each graph. The property should be that this form when restricted to the boundary should line up, should match with the forms corresponding to these caution graphs. Now, if you look at these three faces, in fact, they all correspond to the same graph. It's just that the edge labelings are different. Therefore, these three forms, in fact, should all be the same. They're just obtained from each other by changes of variables, and that's the third property. Now, the key point here is that it's not obvious to construct such a thing, because if you try to do it inductively, you start with the boundary of your simplex, and you might have defined something on the boundary, then you need to extend it into the middle of this simplex. You might be able to do that, but then you need to extend it into the larger cells and so on and so forth, infinitely many times. You've got to do that in a functorial way. It's not obvious. The way we'll do this is using the Torelli map. Let me skip that. Basically, we'll replace the graph with its Laplacian matrix and define an invariant form on matrices. Let me quickly remind you of the graph Laplacian. I take a connected metric graph, and we have the usual complex that calculates the simplicial homology of a graph. Let's say you have z to the number of edges, to the space of edges, z to the vertices, and there's a boundary map that to an edge gives you the endpoint minus the source. The kernel of this map is the homology of the graph. Because the graph is connected, the co-kernel is h0, which is z. What we do is define an inner product on the set of edges. This will be extremely familiar to everybody here. You just say that the inner product of two edges is zero if they're distinct edges, but it's the length of that edge if it's the same edge. The norms of the edges are the lengths. That's an inner product on this space. It restricts to an inner product on this space, on the homology. 
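[Editor's aside, an illustration rather than anything from the talk: the complex just described can be written down directly. This small sympy sketch builds the boundary map Z^E -> Z^V for the wheel with three spokes — with an edge labelling chosen here, not the one on the slide — and checks that its kernel, the cycle space H_1, has rank E - V + 1 = 3, the loop number.]

```python
import sympy as sp

# Wheel with three spokes (the complete graph K4): hub 0, rim 1, 2, 3.
# Oriented edges (tail, head); the labelling is my own, not the slide's.
edges = [(1, 2), (2, 3), (3, 1),   # rim
         (0, 1), (0, 2), (0, 3)]   # spokes
V, E = 4, len(edges)

# Boundary map  Z^E -> Z^V,  edge (a, b)  |->  b - a.
D = sp.zeros(V, E)
for j, (a, b) in enumerate(edges):
    D[a, j] -= 1
    D[b, j] += 1

cycles = D.nullspace()   # a basis of the cycle space H_1(G)
print("rank of H_1 =", len(cycles), "; loop number E - V + 1 =", E - V + 1)
```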
That inner product on the homology is the graph Laplacian. Another way to say it is that you can just write it in terms of matrices in a very straightforward way; it's probably easiest if I just show an example. Here's the wheel with three spokes. It's got three loops, and here's a basis of loops: this loop, 156, this loop, 246, and this one, 345. You write down this incidence matrix, where the rows are indexed by the edges: the loop 156 involves edge 1, edge 5 and edge 6, with appropriate signs corresponding to a choice of orientation. The inner product I mentioned a minute ago, which gives length l_i to edge i, is just a diagonal matrix in this basis. For the graph Laplacian you just take this matrix: the incidence matrix epsilon transposed, times the diagonal matrix, times epsilon. When you do that, you get this matrix whose entries are the lengths of the edges. If you think of the edge lengths as variables, this is a matrix whose entries are just polynomials — just linear forms in the variables. This is called the graph Laplacian matrix; it's very, very classical. It depends on a choice of basis, and what we want to do is construct invariants of it that don't depend on the choice of basis. As many of you know, you could take the determinant, and that would give you the graph polynomial, which comes up all over the place in quantum field theory. That's not good enough: we want forms. So now we turn to the classical theory of invariant forms. How do we get invariant forms? This is, again, very old stuff. I'm just going to take an arbitrary graded-commutative differential graded algebra; in the examples you can just think of it as polynomials in some indeterminates, together with their differentials. If I take any matrix whose entries are, say, polynomials, and if the matrix is invertible, then you can do the following thing: you can form X inverse dX. Now X inverse dX is a Maurer-Cartan form. You raise it to the power n, take its trace, and that produces a form of degree n, for every n. Very elementary properties of the trace show that this vanishes when n is even. It's always closed — it's a closed differential form. It behaves nicely with respect to transposing matrices: if you transpose the matrix, you get a sign. That shows in particular that when your matrix is symmetric — and our graph Laplacian matrices are always symmetric — then half of these forms always vanish. Another key property is that it's bi-invariant: if you multiply by a constant matrix on the left or on the right, the corresponding invariant form is unchanged. That's what's called invariant. Another thing that's quite important is that if X is a k by k matrix, then all these forms vanish once the degree is bigger than twice the size of the matrix. And here's a plea: you can notice that in fact more is true — it's not just the trace, but the matrix itself that identically vanishes in that range. If anyone knows a reference for that, please could you let me know. The first such form is just the logarithmic derivative of the determinant of the matrix. Then we have all these odd forms: 3, 5, 7, 9, 11, 13. If X is symmetric, which is our case, then 3, 7, 11 vanish and we're just left with 5, 9 and 13. These in fact give you the classes in algebraic K-theory, which came up in the last talk, interestingly enough. Here are some examples. Let's take a 2 by 2 generic matrix. Then for beta_3(X) you take X inverse dX, multiply it by itself three times, take its trace, and you get this nice differential 3-form.
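[Editor's aside, continuing the W_3 example in code — again an editorial illustration with its own edge labelling and choice of triangle basis, not the slide's. It builds the graph Laplacian Lambda_G = epsilon^T * diag(alpha) * epsilon from the three triangles through the hub and expands its determinant; as stated above, that determinant is the Kirchhoff graph polynomial, and for W_3 = K_4 one expects 16 monomials of degree 3 with coefficient 1, one for each spanning tree of K_4.]

```python
import sympy as sp

# Edge lengths alpha_1..alpha_6 for W_3 = K_4; edges ordered as
# 1:(1,2) 2:(2,3) 3:(3,1) rim, 4:(0,1) 5:(0,2) 6:(0,3) spokes.
a = sp.symbols('alpha1:7', positive=True)

# Columns = an integer basis of the cycle space: the three triangles
# through the hub, written as signed edge vectors (my labelling).
eps = sp.Matrix([
    [ 1,  0,  0],   # edge 1 lies in triangle 0-1-2
    [ 0,  1,  0],   # edge 2 lies in triangle 0-2-3
    [ 0,  0,  1],   # edge 3 lies in triangle 0-3-1
    [ 1,  0, -1],   # edge 4 (spoke to 1) in triangles 1 and 3
    [-1,  1,  0],   # edge 5 (spoke to 2) in triangles 1 and 2
    [ 0, -1,  1],   # edge 6 (spoke to 3) in triangles 2 and 3
])

Lam = eps.T * sp.diag(*a) * eps      # the graph Laplacian Lambda_G
psi = sp.expand(Lam.det())           # its determinant, the graph polynomial

terms = psi.as_ordered_terms()
print("Lambda_G =", Lam)
print("monomials in det:", len(terms))      # 16 = number of spanning trees of K_4
print("all coefficients 1, all of degree 3:",
      all(sp.Poly(t, *a).total_degree() == 3 and t.as_coeff_Mul()[0] == 1
          for t in terms))
```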
If you do the same thing with a 3 by 3 matrix, we're going to get a big mess — we have to assume X is symmetric, otherwise the formula won't fit on... well, it'll fit on a sheet of paper, but it's kind of ugly. So if I take X a symmetric matrix, then of course beta_3 now vanishes, but we get an interesting beta_5. Here it is: it's something over the square of the determinant. You notice that when you do this you get massive cancellations; basically you're getting what's called condensation of determinants. Those of you who have been around this subject for a while will recognize that phrase, due to Dodgson. Here's a theorem that I'll prove in the write-up of these notes — I'm sure it's the sort of thing that must be known, but I can't find it anywhere — that the denominators are much smaller than you expect. If you take this beta to the odd power 2n+1, then, because you've taken the inverse of a matrix and raised it 2n+1 times, you expect the determinant to appear to the power 2n+1; but actually it only appears with half that power. In the case when X is symmetric it's even more spectacular: you only get one quarter of the power. Now, the reason I say this is because we see something very similar in quantum field theory. If X is the graph Laplacian — which it will be in a minute — then when you write out a formula for this thing beta, you're getting what are called Dodgson polynomials in the numerator, divided by the graph polynomial to some high power. You're getting massive cancellations between the numerators and the denominators. This is exactly what happens in quantum electrodynamics in parametric form, which was formulated by Dirk and worked out in the thesis of Marcel Golz. So I think that's an interesting connection to quantum field theory. Okay, canonical graph forms: now we apply invariant forms to graphs. We take a connected graph and take Lambda_G, the graph Laplacian. As I mentioned before, its determinant is just the Kirchhoff graph polynomial, but now we define the canonical graph form — not the graph polynomial — as one of these invariant trace forms: the trace of powers of Lambda_G inverse d Lambda_G, where Lambda_G is just the graph Laplacian matrix. And then we can take exterior products of these forms, and we get what I call the canonical algebra. These forms are functions from graphs to forms: every such form is a map which to every graph assigns a differential form, so it's an infinite collection of forms. Here's an example for the wheel with three spokes, and you will recognize this. If you take a graph with edges e_1 up to e_N — we typically call the edge lengths alpha now, in the physics context, not l — then the graph Laplacian has entries which are linear functions of the alphas, and its determinant is a polynomial, the graph polynomial. You can work out this first form, of degree five, and it gives exactly what many people will recognize as the Feynman differential form for the wheel with three spokes. Okay, so the first theorem is that for any graph this canonical form is well defined — it doesn't depend on choices — it is closed, it is projective, and it has poles only along the graph hypersurface, which is the zero locus of the graph polynomial, the vanishing locus of the determinant of the graph Laplacian. It is functorial in G: if you have a nice morphism of graphs, it induces a morphism of these forms.
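[Editor's aside on the "condensation of determinants" mentioned above: Dodgson condensation rests on the Desnanot-Jacobi identity. The sketch below (my own example, not the speaker's) checks that identity with sympy on a random integer matrix; minor_det deletes the indicated rows and columns before taking the determinant.]

```python
import sympy as sp

def minor_det(M, del_rows, del_cols):
    """Determinant of M with the given rows and columns (0-indexed) deleted."""
    rows = [i for i in range(M.rows) if i not in del_rows]
    cols = [j for j in range(M.cols) if j not in del_cols]
    return M.extract(rows, cols).det()

n = 5
M = sp.randMatrix(n, n, -9, 9, seed=7)   # a random integer matrix

# Desnanot-Jacobi identity, the engine behind Dodgson condensation:
# det(M) * det(M with first+last rows and columns removed)
#   = det(M_11) * det(M_nn) - det(M_1n) * det(M_n1),
# where M_ij deletes row i and column j.
lhs = M.det() * minor_det(M, {0, n - 1}, {0, n - 1})
rhs = (minor_det(M, {0}, {0}) * minor_det(M, {n - 1}, {n - 1})
       - minor_det(M, {0}, {n - 1}) * minor_det(M, {n - 1}, {0}))

print("Desnanot-Jacobi / Dodgson condensation holds:", lhs == rhs)
```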
And it's compatible with edge contraction. So if you contract an edge, you get the corresponding canonical form of the quotient graph. So that is exactly what we want to define a form on outer space or on space of metric graphs. So in fact, the theorem is true for any any wedge product, take any exterior product of these omega five, omega nine, omega 13, and so on. Those of you who know graph homology will be pleased to see that it has nice vanishing properties. So when the when the graph, if you restrict this form to a graph of the right, the right dimension, whether the graph is simplex is the right diamond of the same dimension. Then this form vanishes when it on the graph as a two valent vertex, when it has multiple edges, when it has a tadpole, when it is one vertex reducible. That's exactly what you see in the graph complex. So that's very nice. It has lots and lots of other nice properties that I shall skip. Okay, so now on now we have an infinite family of differential forms on spaces of graphs. So the natural thing to do is to integrate them. So to do this, we view the space of metric graphs. Remember this this simplex, which whose points, whether the sets of possible lengths of edges on a connected graph, sigma g, as I said before, we embed that in the real points of a projective space. So the sigma g here, which is the same triangle we had earlier, it's this the the set of points in projective space, alpha one, alpha two, alpha three, in homogeneous coordinates where all the alpha is positive. I apologize. Again, I put bigger than or equal to zero. That's a mistake that should just be strictly bigger than zero. I apologize for that. So that's that's a typo that should be strictly bigger than zero because the simplex was the open Euclidean simplex, which is very important. Okay. And as many of you know, of course, that these forms have have have poles along along the boundary. So there's a whole tricky business here that was initially in the case of financials worked out by Dirk and Spencer and Ellen, where you've got to do some blowups. And there's a whole business that I'm not going to talk about for reasons of time. But many of you know that very well. Okay, so the first term is you take any connected graph and any canonical differential form, such this trace of invariant trace of the Laplace matrix. And I suppose that I assume that the form is of the right degree to integrate it. So it's a it's a k form on a case simplex. So that means the number of edges of the graph is one more than the degree of this form. Then you can try to integrate this form over the simplex. And at this point, all the physicists brace them adopt the brace position and take cover because as we know, Feynman integrals never converge. They are always infinite. Pretty much in all the interesting cases. But this is the opposite. It's always finite. So even if you take a graph with the most horrible sub divergences you can imagine, the integral will always converge. So that's kind of amazing because we don't see that very often in quantum field theory, ever, in fact. And in fact, this integral is a period of fight the fourth theory, because this integrand, the differential form is some numerator, some very complicated numerator over the graph polynomial, the first semantic polynomial. So it defines some period in fight the fourth theory. Okay, so here goes. I'll just a quick message to the organizers. I think I started a bit late. I started terminus late. So I'll take the liberty of talking till 20 past. 
Sure. Yes, please. Is that okay? Yeah. Okay, thanks — I'm nearly done anyway. Okay, so here are some examples. The wheel with three spokes we've already seen: if you compute this canonical form, you get in fact 10 times the residue — the coefficient of one over epsilon in dim reg — in phi^4 theory. And everybody knows that that is 6 zeta(3). I had that explained to me first of all by Dirk and David many fond years ago; this is sort of the cornerstone, the thing that got me interested in all of this in the first place. But now it gets interesting. The wheel with five spokes is not what you think; it's a more complicated thing. It's the Feynman integrand — the omega_G is just, you know, the sum of alpha_i d alpha_1 ... with d alpha_i omitted, the standard projective form — plus some multiple times alpha_1 ... alpha_5, the product of all the edge variables corresponding to the internal spokes. And this integral is not zeta(7), the Feynman period of this graph, which is the integral of just this first term. The Feynman integral is just this piece, and that would give zeta(7); but this integral gives a multiple of zeta(5) — it's a weight drop. This coefficient exactly conspires to cancel out the coefficient of zeta(7) and gives you a zeta(5). So again, this is incredibly reminiscent of quantum electrodynamics, where we know that the highest-weight parts cancel out, and it strongly suggests that quantum electrodynamics, or other gauge theories, might have some matrix-theoretic formulation in this spirit. The wheel with seven spokes: now it gets seriously hard to compute these forms. I worked it out — sorry, I didn't check all the signs — and it's something times zeta(7). Another class of graphs that we know how to compute are the complete graphs, and there you know that this integral is some multiple of a product of odd zeta values, zeta(3) zeta(5) ... zeta(2n-1). The reason you know that is because this is literally the Borel regulator in algebraic K-theory, and the calculation that this integral gives a product of zeta values goes back to Siegel, in a very beautiful paper in which he invented what's called the unfolding technique, which is used across the theory of modular forms — the unfolding for, say, an orthogonal group or a special linear group. The beginnings, the birth, of the whole subject of Tamagawa numbers is in this calculation. So it's great that we can give a motivic interpretation: you can write down a motive associated to this Borel regulator, and it fits into this whole graph complex story. Okay, the Stokes formula — you can tell what's coming next. If I take a canonical form... actually, the canonical forms form a Hopf algebra. Don't worry about this; to a first approximation you just need to know that if you take one of these basic forms, like omega_5 or omega_9 or omega_13, they're primitive, so Delta of omega is one tensor omega plus omega tensor one, and a lot of terms will simplify in this formula. But in general it's a Hopf algebra, and it's the obvious one: omega_5 wedge omega_9 would map to omega_5 tensor omega_9 minus omega_9 tensor omega_5. Okay. And by applying Stokes' theorem to this compactification of the simplex, in this blow-up of projective spaces,
and using properties of these forms, you show that the sum of the integrals corresponding to every face of the compactified simplex is zero. That means that the sum of graph integrals over every possible contraction of your graph, plus a bunch of products corresponding to sub- and quotient graphs, vanishes — and you'll recognize on the right the Connes-Kreimer coproduct. The proof, once you've set everything up, is very straightforward. So we can rewrite this Stokes formula by taking this coproduct and splitting it into the trivial part, omega tensor one plus one tensor omega, plus the reduced coproduct. When we do that and rearrange, what we find is that the Stokes formula has three terms. The first term is literally the differential in the graph complex. The second term is what is sometimes called the second differential in the graph complex — the differential where you don't contract edges but delete edges — and that's also a differential on the graph complex. And the third term is exactly the reduced coproduct: the coalgebra structure on graph homology induced by the Connes-Kreimer coproduct that I mentioned way back at the beginning of the talk. So this Stokes formula gives a very nice geometric interpretation of all these structures that are built into the graph complex. Some applications: you can use these integrals to detect non-vanishing graph homology classes — that's harder in practice than it looks, but in principle it's possible, and you can do it in some cases. We can associate a motive to any graph homology class, and that explains, perhaps, why we get Grothendieck-Teichmüller and motivic Galois groups appearing in graph complexes. And this third point here I'm a bit embarrassed by, but it is actually true that the cosmic Galois group acts on the differential forms on Outer space. It sounds like a big joke that I've concocted; I promise you I didn't come up with the phrase "cosmic Galois group" — it was due to Cartier, as we learned in the last talk — and "Outer space" was coined by Culler and Vogtmann. It just happens that these things are in fact very closely related, in a meaningful way, by this machinery. So here's an example of the Stokes theorem. Let's go back very quickly to this picture of graph homology. Remember there was this class here, the first non-trivial Lie bracket class, G3,5, at weight 8. Using it, we're going to calculate an integral on this green class way up here. The way we do that: we take this class here, which is a linear combination of graphs, and we integrate zero over it — we apply the Stokes theorem to it. Because its coproduct is a wheel 3 and a wheel 5, it will produce zeta(3) and zeta(5). We then apply the differentials d and delta, these two differentials in the graph complex, and we zigzag our way up to this class. Using the Stokes formula, we deduce that the integral of omega_5 wedge omega_9 on this class here is in fact zeta(3) times zeta(5), and that proves that this green class is non-vanishing in homology. Sorry, that was a bit quick, but here it is: this class in H3 here — let's call it xi — you can show that the integral of omega_5 wedge omega_9 over this class is a non-trivial product of zeta values. Okay, so now let's look again at this picture of graph cohomology.
So graph homology, I've actually switched to graph cohomology because I want to dualize because we're talking about forms. So it's, it's, it's easier to relate to graph cohomology, but it's the same. It's really the same thing. So now I've redrawn the conjectural, semi conjectural, or anyway, in any case, the computer calculations of non classes in graph cohomology, which is just the dual of graph homology. And I've, I've given them names and the names are these differential forms and lead brackets of these differential forms. So what we have omega 5 corresponds to the wheel with three spokes, omega 9 the wheel with five spokes, I'm the 13 new with seven spokes, etc. Then we have wedge products. And the first wedge product is this class in H3. We have a triple wedge product up here in H6, and so on and so forth. And then we have lead brackets of the original forms. And we also have lead brackets of the wedge products of these forms. And these classes line up exactly with all the known graph homology classes. So here's a conjecture. I conjectured that the free Lie algebra on this canonical algebra of differential forms injects into the graph cohomology. It does it in a funny way. So the grading here is the number of edges is the degree and it can correspond to the number of edges. We have to forget, I don't know they line up in funny, common, common logical degrees. I don't know how to predict those obvious question one is, is this map an isomorphism? I, I because I can later deny it. I will conjecture here during this talk that it is an isomorphism, but I may, I may deny that later on. I reserve the right. But anyway, it's a conjecture, but it's a weaker conjecture, a more tentative conjecture than the previous one. And I was going to say something about the, the dual since I'm out of time, I'll stop and, and leave you with this, this picture so you can, you can, you can contemplate it if there are any questions. Thank you very much for listening. Thank you for answers. This was amazing. Thank you. I have a million questions, but if anybody else wants to go first, I'll go first. So the G three five linear combination, which you said was hideous. Do you have a handy, I'd love to know just. Oh, it's known. I mean, I think it's, it's written down in some papers. That that one's okay, you can compute that one. But I think, I think the limits of the computer calculations are this column. I'm not 100% sure. And I'm out of date. As I said, I got this from, from slides of talks a year ago. I don't know if things have moved on, but I think the 10 column and everything to the left of it is checked by computer and this column 11 is possibly not. So, so you'll get your three five graph, but you'll be in a mess when you, if you want to go much further than that, you, you're going to have to give up. I think if you just want to compute the graph, you can take G three G five and just compute the graphically bracket. I mean, the non-true question is that this is a non zero class. Yeah, but also it's, so there's difference when you work in graph homology or graph homology. So if you work with graph homology, the classes are not wheels. They're wheels plus a long tail that's not known. And you've got to take the lead brackets of those. But you don't have a question of David. Yes. I didn't mind you throwing away half of the wheels. I thought that was quite fun. You got back the zeta values decreased. But as it's the week of the expert take, do you have any thoughts about the zigzag diagrams? 
Those are after all the only infinite class of diagrams on quantum field theory, which have a closed formula, which they can I conjectured and you and Oliver proved that that's a great question. Actually, when I'm new to this subject, when I first learn about this at a conference a year ago, I asked exactly the same question to the experts and they said they didn't know. That's a great question. Thank you. I wanted to ask a question about the weights. I mean, you mentioned these weight drops and you have these. I found it quite striking that you had these pure weight evaluations of these wheels with these complicated into grants and you just got a zeta seven or just got a zeta five. So is there like a understanding from the structure of the blow up that you get some bounds on the weights that would predict this? It's not obvious from an algebraic geometry perspective. Not that I know of, but I kind of have the feeling that all these these integral come from this Burrell regulator. So you know that morally, beta five wedge beta nine corresponds to zeta three zeta five. And so somehow what I would like to be able to do, but I can't do it, is relate this complete graph using Stokes formula and identities to wheel three tensor wheel five. And that would give that would give it immediately. But I don't know how to do that. So I did that argument in the opposite direction to compute the one of the examples. Thank you very much. Let's thank friends again. I'll see you in other questions at the moment. Thank you.
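[Editor's aside on the zig-zag question above: the zig-zag graphs do have a closed formula for their periods — the Broadhurst-Kreimer zig-zag conjecture, proved by Brown and Schnetz. The script below evaluates that formula as I recall it (worth checking against the original papers before relying on it); the n = 3 and n = 4 cases reproduce the familiar 6 zeta(3) and 20 zeta(5).]

```python
from fractions import Fraction
from math import factorial
from mpmath import zeta

def zigzag_coefficient(n):
    """Rational c_n with zig-zag period Z_n = c_n * zeta(2n - 3), n >= 3.

    Formula as I recall it from the Broadhurst-Kreimer zig-zag conjecture
    (proved by Brown and Schnetz); please check against the original papers.
    """
    c = Fraction(4 * factorial(2 * n - 2), factorial(n) * factorial(n - 1))
    c *= 1 - Fraction(1 - (-1) ** n, 2 ** (2 * n - 3))
    return c

for n in range(3, 8):
    c = zigzag_coefficient(n)
    print(f"Z_{n} = {c} * zeta({2 * n - 3}) ~ {float(c) * float(zeta(2 * n - 3)):.6f}")
```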
Kontsevich introduced the graph complex GC2 in 1993 and raised the problem of determining its cohomology. This problem is of renewed importance following the recent work of Chan-Galatius-Payne, who related it to the cohomology of the moduli spaces Mg of curves of genus g. It is known by Willwacher that the cohomology of GC2 in degree zero is isomorphic to the Grothendieck-Teichmuller Lie algebra grt, but in higher degrees, there are infinitely many classes which are mysterious and have no such interpretation. In this talk, I will define algebraic differential forms on a moduli space of graphs (outer space). Such a form is a map which assigns to every graph an algebraic differential form of fixed degree, satisfying some compatibilities. Using the tropical Torelli map, I will construct an infinite family of such differential forms, which can in turn be integrated over cells. Surprisingly, these integrals are always finite, and therefore one can assign numbers to homology classes in the graph complex. They turn out to be Feynman periods in phi^4 theory, and can be used to detect graph homology classes. The upshot of all this is a new connection between graph cohomology, Feynman integrals and motivic Galois groups. I will conclude with a conjectural explanation for the higher degree classes in graph cohomology.
10.5446/51268 (DOI)
Well, thank you. Thanks for the invitation to speak at Dirk's birthday conference. I should say that he had a preview of this talk already, on probably the second-to-last trip I made: I went to Berlin in December of 2019, I made one other trip, and since then I've never been more than a mile from my house. Anyway, he did have a preview of this talk; I hope he'll still find something interesting in it. So what am I going to talk about? Many of the talks in this conference have talked about non-commutative and commutative versions of algebraic structures. I'm normally a topologist and a group theorist, and for me the free inhabitants of these commutative and non-commutative universes are free groups and free abelian groups. In both cases, free groups and free abelian groups, the inhabitants exhibit a lot of symmetry: any permutation of the generators gives an automorphism of the group. But what I want to do today is break this symmetry and talk about some sort of intermediate universe that lives between the non-commutative and commutative worlds; I'm going to talk about partially commutative universes. So, whoops, if non-commutative universes are blue and commutative ones are yellow, then I guess partially commutative ones should be green. Right. So the inhabitants of this universe, for me: it turns out that there's a whole zoo of free inhabitants of this universe, namely groups that are free except for being partially commutative, which these days are more commonly known as RAAGs. RAAG stands for right-angled Artin group, and they're called right-angled because they're closely related to right-angled Coxeter groups, which are reflection groups where the hyperplanes are at right angles to each other. So the zoo here contains right-angled Artin groups. Okay, whoops, I forgot the right-angled Artin groups. It turns out that breaking this symmetry results in really amazing, weird and wonderful things happening to these groups. In particular, they have a surprisingly rich set of subgroups. Of course, any subgroup of a free group is a free group, and any subgroup of a free abelian group is a free abelian group; in fact it's even restricted in rank, it can't have rank bigger than its parent group. But a subgroup of a right-angled Artin group can be quite wild. They've come into focus recently because every hyperbolic three-manifold group has a finite-index subgroup which is a subgroup of a right-angled Artin group. This was part of some very spectacular work of Agol and Wise that helped to settle the last of Thurston's conjectures on the structure of three-manifolds. So these groups are very trendy now in group theory and in low-dimensional topology. Each of these partially commutative groups, each of these RAAGs, has its own set of symmetries. The way you visualize this is you draw a graph where the vertices are the free generators, and you put an edge between any two vertices that commute. So you draw vertices; this is a right-angled Artin group on five generators, and maybe those commute and the others don't. The associated group is called A_Gamma, because the graph Gamma completely determines the group.
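As a concrete illustration (a small sketch of my own, not from the talk; the five-vertex graph below is a made-up example), here is how the presentation of a right-angled Artin group is read off from its defining graph: one generator per vertex, one commutation relation per edge.

```python
# Sketch: the right-angled Artin group A_Gamma determined by a graph Gamma.
# Vertices are free generators; each edge {u, v} contributes the relation uv = vu.

def raag_presentation(vertices, edges):
    """Return the generators and relations of A_Gamma as strings."""
    generators = list(vertices)
    # one commutator relation per edge of Gamma
    relations = [f"{u}*{v}*{u}^-1*{v}^-1 = 1" for (u, v) in edges]
    return generators, relations

vertices = ["a", "b", "c", "d", "e"]       # hypothetical 5-vertex example
edges = [("a", "b"), ("b", "c")]           # only these pairs of generators commute

gens, rels = raag_presentation(vertices, edges)
print("generators:", gens)
print("relations:", rels)
# No edges: the free group F_5.  All ten edges: the free abelian group Z^5.
```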
So notice that any automorphism of this group is going to have to preserve the commuting relations. Said in terms of this graph, any automorphism of the graph, given by a permutation of the generators, gives an automorphism of the group. So that gives you some automorphisms of this group A_Gamma, but in general there are many more automorphisms. For instance, if Gamma didn't have any edges, then the automorphisms form the entire automorphism group of the free group. Right, so there are lots of automorphisms in general. So that's the story with these free inhabitants of these universes, how you describe them and what they are. I do topological and geometric group theory, so I'd like a topological model for these groups. Let's stick, for the purposes of this talk, to finitely generated groups. So free groups are F_n, and as a topological model of this free group I'll take a finite graph; what makes it a topological model is that pi 1 of the graph should be isomorphic to the free group. For free abelian groups, on the other side of the spectrum, I have a natural candidate for a topological model, namely a torus of dimension n. Again, the fundamental group of the torus is isomorphic to the free abelian group; I'm only talking about finitely generated free abelian groups. So I'm going to need something, if I'm going to think about this whole universe: I want a topological model for partially commutative groups as well, and those are going to be Gamma-complexes. I want these to be, in particular, spaces whose fundamental group is the group A_Gamma. I'm also going to be interested in metrics on these models. If I take a finite graph and put lengths on its edges, I can make it into a metric space; I'll call that a metric graph. If I take a torus, over here on the right-hand side, I can put lots of metrics on a torus; the nicest metrics to use are metrics that are flat. So the metric models for free abelian groups that I want to consider are going to be flat tori. And then, if I want a metric model for right-angled Artin groups, I need something that I'll call a flat Gamma-complex. Okay, so what's the point of having a metric model instead of just a topological model for these groups? Well, the point is that if I have a metric model for my group, then I can vary the metric slightly, in Gromov's space of metric spaces, and I get another metric model. So I can make a whole space of metric models. And it turns out what's nice to do is this: given a metric model and an isomorphism from the fundamental group of my model to my group A_Gamma, I can make a space by deforming the metric, as I said, and this isomorphism gets pulled along with the deformation of the space. And then if I have an automorphism of the group, I get an action of the automorphism group of my group on this space, which just changes the isomorphism. So if I have a particular point in my space, that's an isomorphism from the fundamental group of X to my group, and if I take an automorphism of the group, then I compose my old isomorphism with this automorphism and I get a new isomorphism of the fundamental group of X to my group. That's just what's written down here.
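In symbols, the action just described can be written as follows (the notation is mine, not taken from the slide): a point is a metric space X together with a marking m, and an automorphism Phi acts by composing with the marking.

```latex
% A marked metric model and the action of Aut(A_Gamma) on markings, as described
% above; the metric on X is unchanged, only the identification of pi_1(X) with
% A_Gamma moves.  (Notation is a paraphrase of the verbal description.)
\[
  \Phi\cdot(X,\,m) \;=\; (X,\,\Phi\circ m),
  \qquad m\colon \pi_1(X)\xrightarrow{\ \cong\ } A_\Gamma,\quad
  \Phi\in\operatorname{Aut}(A_\Gamma).
\]
```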
The automorphism group acts on this space of marked metric spaces by just changing the isomorphism with the fundamental group, so the action does not change the metric. Now, so I had metric spaces; yeah, looks like I've just duplicated the same slide, sorry. Okay. So I can add another row to my table here of what belongs in the non-commutative universe and what belongs in the commutative universe. I have groups, I have topological models for the groups, I have metric spaces that are models for the groups, and I have a space of marked metric spaces. It turns out that in the commutative case, the space of flat tori is something that's been studied for many, many years: it's just the symmetric space SL(n,R) mod SO(n), and it's a contractible space; it's diffeomorphic to a Euclidean space, in fact. In the non-commutative universe there's an analogue, which is called Outer space, and it's a space of marked metric graphs. The quotient by the action is just a space of graphs; the action, remember, just changes the isomorphism of the fundamental group with the model, so if you mod out by that action you just get unmarked graphs, you get the moduli space of graphs. It turns out that this space is also contractible. The action is proper, which means that the stabilizer of a point is a finite group, and by standard results in algebraic topology, invariants of this quotient space, such as cohomology for instance, are actually invariants of the group. If you want to study invariants of the group and you're topologically or geometrically inclined, you can study this space instead. There's an interaction between the group theory and the topology and geometry that lets you go back and forth: you can figure out things about the space by knowing things about the group, and you can also figure out things about the group by knowing about the space. Okay, so why am I telling you about this? This is a conference on quantum field theory, perturbative methods in quantum field theory. Well, it turns out that these spaces have variations for graphs with leaves, and of course graphs with leaves are just Feynman diagrams; at least, they underlie Feynman diagrams, you can decorate them with a lot more things and you get a Feynman diagram. These spaces collect Feynman diagrams with a fixed loop order and a fixed number of external leaves in a single object, which is geometric. So, when I visited Berlin in 2015, I started talking to Dirk and told him how I think about these spaces, and he saw lots of connections between the spaces I was talking about and perturbative quantum field theory. I should say that he's been patiently trying to explain Feynman integrals, Cutkosky rules and renormalization to me ever since. I'm making progress; I'm not quite there yet, but I hope he hasn't lost patience with me, because I think I'm making progress. Meanwhile, Dirk and his collaborators and students have been exploring connections between the combinatorics and geometry of Outer space and the tools of perturbative quantum field theory, and as I just said, they found connections between parametric representations of Feynman integrals and distributions on the moduli space of graphs, between renormalization Hopf algebras and the structure at infinity of this space.
And between Cutkosky rules and the combinatorial structure of the partially ordered set of graphs. So I don't know whether the partially commutative universe will also find applications in perturbative quantum field theory, but, I figure this is the right audience, I've noticed, for instance, that we've heard talks about relationships of these various Hopf algebras to structures such as subspaces of a vector space. Those are things which show up at the boundary at infinity of the commutative picture, the symmetric space; that was in the right-hand column. And, as I said, Dirk and his colleagues have been exploring connections between Outer space and the non-commutative picture. Right. I should say that it's a good thing to try to pique Dirk's interest; it has many, many benefits, in addition to him finding connections between quantum field theory and Outer space. One of the benefits is pictured on your screen there: that's the view from my window at Les Houches a couple of years ago, in the other direction. At Les Houches I met a lot of very interesting and very nice people, many of whom are attending this conference right now. There's one you'll probably recognize. Right, there are many people here that you will recognize. There's also this one up here. I'm sorry, Karen, I don't seem to see your photos. Really? What do you see on my screen? This slide: non-commutative, partially commutative, commutative, your table. That was several slides ago. Uh oh. I see your video well, and I can hear you very well. Yeah, let me try sharing again. Yeah, maybe just re-share. Okay. Yeah, that was different. Okay. So I think the last slide we had on the screen was when you finished the table and talked about it. Well then, yeah, so this slide is when I was talking about what Dirk and his collaborators have done with these spaces. Please tell me if you stop seeing it. And I was saying that it's a good thing to get Dirk interested in what you're doing, and this is one of the benefits: this is a view from my window at Les Houches. And this is another one of the benefits, which is that I met a lot of very interesting people. Well, there's Dirk, and that's me. And here's one of the interesting people I met: this is Michi, Michi Borinsky. At this conference I asked Michi a question that he answered a couple of months later using perturbative methods, and this answer solved a 30-year-old conjecture on the Euler characteristic of Out(F_n). So there were benefits from quantum field theory to Out(F_n), as well as, hopefully, benefits in the other direction. Okay, so it's probably time to explain to you what these things actually are, these Gamma-complexes that are associated to partially commutative groups. Let's just look at some examples. If I take Gamma to have three generators, all of which commute, then the group is Z to the n, in this case Z cubed. If Gamma has three generators, none of which commute, then A_Gamma is the free group on three generators. Anything in between: here's an example, a and b commute, b and c commute, a and c don't, so a and c generate a free group on two generators, and they commute with the group generated by b. In general, if I have a graph, I've got some generators that commute, and the group here is A_Gamma.
You can't really say anything more about it in this particular case. Okay, so those are some examples. Now I want to talk about spaces with the right fundamental group. In the case of Z cubed, I have a 3-torus, S1 cross S1 cross S1, which is hard to draw a picture of, but instead I can draw a 3-cube, all filled in, and then take the quotient by identifying opposite sides. What do I do for F3? I want to take a rose, that's a wedge of three circles, or, since it's a rose, I should draw it in red: three circles joined together at a point. But I can also think of this, similarly to the torus here: this rose actually lives inside this torus, it's just these three edges. What about the next one? This is a rose with two petals cross S1: a rose for the free group, a circle for the Z here. So what I really have here is two tori joined along an edge; there's the a loop and the b loop. And this too I can think of as a subcomplex of this torus, where I just use the ab face and the bc face and identify opposite sides. So this is supposed to give you an idea of what you do in general, how you get a space with the fundamental group of a partially commutative group, a RAAG: you just take a subcomplex of the n-dimensional torus. For this last example the torus should be five-dimensional, so I'm not going to be able to draw it, but I just take a torus for every clique: there will be a three-dimensional torus for that clique, a three-dimensional torus for that clique, they'll intersect along a two-dimensional torus, and there's another two-dimensional torus here that intersects that torus in a circle. Okay, so this complex that I've just described is called a Salvetti complex, and this is what I just said about it: it's the subcomplex of the n-dimensional torus consisting of the k-tori spanned by the k-cliques in Gamma. This Salvetti complex is kind of the most basic example of a Gamma-complex, and we call such things cube complexes. In particular, it's a cube complex because it's made of cubes glued together along faces; some of the cubes can have their faces identified, two faces of the same cube can be identified, or a face of one cube can be identified to a face of a different cube. One remark here is that if I just want to get a complex with the right fundamental group, I only need the two-skeleton; in other words, I only need to put in a torus for every pair of commuting generators. So what's the point of putting in all of these higher-dimensional tori? Well, if I use tori for all the k-cliques, I can guarantee that S_Gamma has a metric of non-positive curvature. This is a theorem of Gromov, which means that, technically, the universal cover is a CAT(0) space. That's the metric version of non-positive curvature for spaces which aren't necessarily manifolds, and the point is that geodesic triangles in these spaces are at least as thin as Euclidean ones. Right. So remember what I was trying to do with these complexes. An isomorphism between the fundamental group of my Salvetti complex and my group is called a marking, and Out(A_Gamma) acts on the set of marked Salvettis by changing the marking.
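To make the cell structure concrete, here is a small sketch (my own illustration, not code from the talk) that lists the tori of the Salvetti complex by enumerating the cliques of the defining graph; the example graph is the a-b, b-c one just discussed.

```python
# Sketch: cells of the Salvetti complex S_Gamma.  A k-clique of the defining graph
# Gamma spans a k-torus in S_Gamma, i.e. a k-cube with opposite faces identified.
from itertools import combinations

def cliques(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    found = []
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                found.append(subset)
    return found

# the example above: a--b and b--c commute, a and c do not (the group F_2 x Z)
vertices, edges = ["a", "b", "c"], [("a", "b"), ("b", "c")]
for c in cliques(vertices, edges):
    print(f"{len(c)}-torus spanned by {c}")
# Output: three circles and two 2-tori, i.e. the two tori glued along the b circle.
```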
And this is just a remark: Salvettis have a natural base point, but other Gamma-complexes won't have a natural base point, and so if I don't want to specify a base point I only get an action of the outer automorphism group; inner automorphisms basically change the base point. So what do I want to do? I want to make a space of marked Gamma-complexes, I want to prove the space has properties that are useful for group theory, and I want to model the construction on what happens in the classical cases, so let me go through those briefly. Okay, let's start with free abelian groups. I want to think of that torus, remember, as a metric space, and how do I think of it as a metric space? I think of it as Euclidean space modulo a lattice. That gives my torus a flat metric: locally it looks like Euclidean space. So I take a fundamental domain for the standard lattice, R^n mod Z^n, and if I identify opposite sides I get the standard torus. If I take an element of the outer automorphism group of Z squared, which is otherwise known as GL(2,Z), then what happens to this torus? Absolutely nothing happens to the torus. If I take that vector and that vector, they go, under this element that I've chosen, to that vector and that vector, but they generate the same lattice. So nothing happened to the lattice; what changed when I acted by my element of Out(Z squared) was the basis for the lattice. And remember, these red and blue vectors are actually loops in the torus, and they give a basis for the fundamental group. So I have the same flat torus, but I have different isomorphisms of the fundamental group of the torus with Z squared: the same torus, but with different markings. What about the free group? The Salvetti complex here is a rose; here's a two-dimensional example, the rose with two petals, and here's an automorphism of the free group on a and b. So, for instance, if a is this generator and b is this generator, then this automorphism should send the red loop around a and b, and the blue loop should go to itself. The blue loop and the red loop are a basis for the fundamental group, and acting by this automorphism changes the basis: I have the same metric graph, but I've got different loops, so I get a different isomorphism with F2. And if I take a Salvetti complex and start acting by the group, I get a bunch of dots in my space, a discrete set, a discrete orbit in this space. How do I get from one point to another point? In either of these cases, let's start with the tori: how do I get from this guy to that guy? It seems obvious what to do: I start shearing the torus. I just shear gradually, and at the beginning and at the end I have the same metric, while in the middle these tori are also flat tori, but they're different metric spaces; they don't have the same shortest loops, for instance. How do you get from that rose to that rose? Well, this is maybe a little harder to see: what I want to do is insert a new edge and then collapse one of the old edges. So I'm going to collapse that old edge; this is the old edge, and I put it over here.
And then I'm going to collapse it, and I'll get that. So it's a metric graph, and I want to keep the normalization that the volume is one. So what have I done? I've shown you how to connect two points in this space, two particular points in fact, but it turns out that if I look at the space of marked flat tori with volume one, I get a contractible space. On the other hand, I just showed you how to connect two roses by a graph with two vertices, and to get an actually contractible space you need to allow graphs with more vertices. So now I can tell you what a Gamma-complex is in general: it's always going to be a cube complex, such that there is some way of collapsing it; there's a collapsing operation which is standard in the theory of cube complexes, called a hyperplane collapse, and that should give me back my Salvetti complex. You should be thinking about the free group case: if my group is a free group, then a Gamma-complex is just going to be a graph with the right fundamental group, and, the way I've defined it, it won't have any separating edges, so it'll actually be one-particle irreducible. The collapsing operation in this case, what am I doing? I'm just collapsing a maximal tree to get back to a rose. But let me show you an example that's not just a graph. Here's what the Gamma-complexes look like for this particular graph: there's the Salvetti complex, and there's this one, which I'm calling S-Gamma-P. If I collapse this central cylinder down to its waist curve, and if I collapse this circle, then I get the Salvetti complex back. Yeah. So that's what a Gamma-complex is: it's a cube complex, and there's a standard collapsing procedure that will collapse it back to a Salvetti complex. There's a combinatorial description of what a Gamma-complex is, in terms of partitions, which, listening to some of these talks, I thought people might like. What do you do? You form a graph with vertices as follows. I start with my graph Gamma over here and I kind of double it: there's my original graph, I take new vertices for the inverses of these generators, and I connect two vertices if they commute but aren't inverses. So a commutes with b inverse, a inverse commutes with b, etc. That's the doubled graph. And this P that shows up in this picture corresponds to this edge, this extra edge that I've added in my Gamma-complex, and it gives me a partition into three sets that are determined by this new edge P. Let me draw this on the next page. Right, so how do I figure out what this partition is? I had my graph here, we bring it over, and I look at this edge P. I notice that the front of a and the front of c are attached to one end of P, so I'll put them in one piece of the partition, the front of a and the front of c; the back of a and the back of c are at the other end of the partition, and b and b inverse are at both ends of that edge. So I get a partition of the vertices of this graph into three sets: one is called the link, and that consists of the half-edges that are at both ends; one consists of the half-edges that are just at one end; and one consists of the half-edges at the other end. And I claim that, given this partition, as long as it satisfies some basic rules, I can reconstruct this space. So here's my edge P, and I know that b and b inverse are supposed to go at both ends of P.
I know that a is supposed to go from one end to the other, and c is supposed to go from one end to the other, and the rules for this partition are going to allow me to fill in the rest of the picture, the rest of the tubes in this Gamma-complex. I don't have time, and you probably don't have the patience, to see exactly what the rules are, but they're easy to state. So that tells you how to construct Gamma-complexes with two vertices. To construct Gamma-complexes with more vertices you need a notion of compatibility of partitions, and then, given any set of compatible partitions, I can construct a Gamma-complex. Collapsing hyperplanes corresponds to removing partitions from this description, so if I collapse them all I get back my Salvetti complex. Right, I have a couple more minutes. Anyway, here's what this looks like if my group is a free group. What's a partition? The link part of my partitions is always going to be empty, and I'm just going to have a partition into two pieces; this is what the double of Gamma looks like. Two partitions are compatible if they can be described by circles that don't intersect, and there's a picture of three compatible partitions. Whenever you have such a set of circles that don't intersect, there's a dual tree, with one vertex for every component of the decomposition into sets, and the dual tree to that set of partitions is a maximal tree in a graph. To get the rest of the graph, you add edges a, b and c: a goes from there to there, c goes from there to there. So that's a familiar picture, I think, to combinatorialists: associating a maximal tree to a set of partitions, and a graph to that. So, right, I've described what a Gamma-complex is; to get a space, I want to put metrics on these Gamma-complexes. The cubes that I was describing won't actually be cubes; they'll be isometric to Euclidean parallelotopes. But in order to keep some control on this space, I want these metrics to be flat, so locally CAT(0). So I can finally tell you what my space is: a point in my space is going to be a locally flat metric space, isometric to some flat Gamma-complex, and marked by an isomorphism between the fundamental group of the complex and my right-angled Artin group. And the theorem that I was trying to get to is that this is a contractible space and the action is proper, so its quotient is a good model for this group of outer automorphisms: invariants of this quotient are invariants of the group. This is a very recent theorem of Corey Bregman, Ruth Charney and myself. It depends on an earlier theorem, which just talks about the combinatorial picture of Gamma-complexes: if you make Gamma-complexes into a partially ordered set via hyperplane collapse, then the geometric realization of that partially ordered set, as long as you restrict the kind of isomorphisms you allow, is contractible. This recent theorem builds on that one. It turns out that we published that paper in 2017 and wrote a paragraph at the end saying, well, we'll publish another short paper, adding all of the metric information and using arbitrary markings.
And it turns out, as it says down here, that adding the arbitrary markings and the metric information was much harder than we anticipated. But we've now done it, and we're very happy that it works. There were various issues to be dealt with: there's the original combinatorics of the Gamma-complexes, there's straightening twisted Gamma-complexes, and then there's determining all possible decompositions of a metric space, all identifications with a Gamma-complex. Those all turned out to be major undertakings. So that's pretty much all I wanted to say: I've presented you now with a new toy, Dirk, and you should play with it. Thank you. Thank you very much, Karen. This is a lovely final slide. And I have something to play with, I promise I do. Okay, good. Great. We have to get you back to Berlin. Yes, you have to get me back to Berlin. And I'm glad you haven't lost patience with me; I'm still learning. Okay, good. I have one question right away. As I understood it, for each combinatorial graph you define this particular group with the corresponding commutation relations, and then to that you associate this geometric space. That's right. Now, if I have a graph and I look at a subgraph, or an induced subgraph, does the associated space sit in some nice way inside the Gamma-complex associated to the bigger graph? It does. First of all, the subgroup associated to the subgraph is called a special subgroup, and you can embed its space into the larger space. There are actually several embeddings of this space, having to do with the fact that we're only looking at outer automorphisms: conjugates of this subgroup give different copies of this subspace sitting inside the bigger space. Okay. And then the follow-up question would be: if I have a subgraph and its quotient graph, is there a product of these two spaces sitting in the biggest one, is there some operadic structure that mimics the graphical relations? Yeah, that's not so clear. In the case of Outer space, that structure shows up at infinity: there's a way of bordifying the space by adding these quotient spaces and products of spaces and quotient spaces at the boundary. Well, you don't compactify Outer space, you compactify the quotient, the moduli space of graphs, by doing these sorts of deletion and contraction operations. Whether there is such a compactification also for the Gamma-complexes more generally, I don't know; this space is very new, and that's one of the questions I would love to answer. Are there other questions for Karen? You can type them in the chat or just speak up if you have a question. So I was wondering if the graph theory tells you something interesting. For example, to your graph Gamma you can associate the independence complex or the clique complex; you get some topological spaces built out of the independent sets in the graph, or so. Does this in some way enter the picture, or tell you something about the group A_Gamma? The clique complexes, yeah, they have to do with the following: if you look at the universal cover of one of these Gamma-complexes, the cliques correspond to flat Euclidean subspaces of the universal cover. And the clique complex, well, this is not really saying anything, but it describes how these flat subspaces intersect.
That's just translating the fact that they're cliques in the graph into the geometry of the spaces. So, yeah, we did use this clique complex in the proof that this outer space is contractible. It helps you kind of divorce yourself from the marking; it's something that's inherent in the graph, it doesn't depend on the isomorphism of the fundamental group with the graph. So, yeah, I'm not sure I have a real answer to your question. I'm sure they're relevant; any graph-theoretic constructions are probably reflected somewhere in the geometry of the space. Yeah. Okay, thank you. So, if there are no further questions right now, then let's thank Karen again for her beautiful talk.
Spaces of finite graphs play a key role in perturbative quantum field theory, but also in many other areas of science and mathematics. Among these is geometric group theory, where they are used to model groups of automorphisms of free groups. Graphs can be thought of as 1-dimensional flat metric spaces. In higher dimensions, spaces of flat n-dimensional tori model automorphism groups of free abelian groups. There are very interesting groups which interpolate between free groups and free abelian groups, called right-angled Artin groups. I will describe a space of "Gamma-complexes", which are a hybrid of tori and graphs, and which model automorphism groups of right-angled Artin groups, by recent joint work with Bregman and Charney.
10.5446/51270 (DOI)
First of all, happy birthday to Dirk. So I'm going to start with a monoid or a group which is pro-algebraic; it does not really matter exactly what that means, but it just has a good ring or algebra of polynomial functions. This algebra reflects the structure of the group, and you have more: you have a coproduct, which sends one polynomial function to pairs of polynomial functions; that is the coproduct you can see here, and it just reflects the composition of the group. So this is what is called a bialgebra if you have a monoid, and if it is a group, it is a Hopf algebra. And it is quite well known that you can recover the group or the monoid from the bialgebra just by taking its characters; the characters are just algebra morphisms from your algebra to the field, which for me is the complex field. There is a product on characters, the convolution, which is in some sense dual to the coproduct. Now, I'm not satisfied with only one group or one monoid; I need two groups, because in fact I want to do some semidirect product or something like this. For this, I take two groups or two monoids with good bialgebras, and I suppose that the second one, G prime, acts on the first one by monoid endomorphisms, which is exactly what I need if I want to do a semidirect product. So what does this mean? I have first a monoid G, so it has a Hopf algebra, or bialgebra, which I call A. I've got a second monoid G prime, which also has a bialgebra, which is B. G prime acts on G, so B coacts on A: there is a coaction, which is an algebra map from A to A tensor B. This just reflects the action of G prime on G, and if I want to translate the fact that the action of G prime is by monoid endomorphisms, this means that A is a bialgebra in the category of B-comodules. So what does this mean exactly, this fourth point? It means these axioms. The first one just means that this is a coaction; this is the axiom of a right coaction. The second one means that the product of A is a morphism in the category of comodules over B, which also means that the coaction is an algebra morphism; this is the same thing. The next one means that the counit of A is a comodule morphism from A to the base field. And the last one, the most twisted one, means that the coproduct of A, the big Delta, is a comodule morphism from A to A tensor A, which is also a B-comodule. This means that doing first the coaction and then the coproduct on the left is equal to doing first the coproduct, then the coaction on both sides of the coproduct, and then regrouping the two factors which belong to B; that is this map m1324, which takes four factors and regroups the second and the fourth at the end. So, for example, let's just take a very simple example. You can consider the group C with the addition, which is an abelian group, and on it the group C star with the multiplication naturally acts by group automorphisms. So this is exactly the situation from before. The first group G is C with addition; its Hopf algebra is the algebra of polynomial functions on C, which is the polynomial ring in one indeterminate, with the coproduct which is additive: the coproduct of x is x tensor one plus one tensor x. This is a Hopf algebra. The second group G prime is C star with the multiplication, and the algebra of polynomial functions on C star is the Laurent polynomial algebra C[x, x^-1].
I have to add an inverse to x because there is no zero in C star. It comes with another coproduct, which is multiplicative and sends x to x tensor x. So for the first coproduct, big Delta, x is a primitive element; for the second one it is a group-like element. And B coacts on A, because C star acts on C, with this coaction rho, which also sends x to x tensor x; in fact the coaction and the second coproduct are more or less the same. Now, I don't like this x^-1 very much; I would prefer to have C[x]. So I just forget it, and I obtain, not a Hopf algebra, but a bialgebra, which is B: the same algebra as A with another coproduct. It is no longer a Hopf algebra because x no longer has an inverse; it's just a bialgebra. It coacts in the same way on A, with the same coaction, and here the coaction and the second coproduct are the same. So this is the framework I will use now. What I have is an algebra A with a single product and two coproducts, big Delta and small delta, such that (A, m, Delta) is a bialgebra, (A, m, delta) is a bialgebra, and the second one coacts on the first one with the coaction rho, which is also the second coproduct. So what I have is an object with one product and two coproducts; for both coproducts it's a bialgebra, and moreover there is a compatibility between the two coproducts, which is something like this: just replace rho by delta and you obtain the compatibility between big Delta and small delta. If you want to be complete, these should be called bialgebras in the category of comodules over another bialgebra, which is quite long, so I just call them double bialgebras, or cointeracting bialgebras, and things like this. The first example we have is just the polynomial ring C[x], with these two coproducts: big Delta, which is additive, and small delta, which is multiplicative. We know more examples. For example, the well-known Connes-Kreimer Hopf algebra of trees, which is based on rooted forests; I won't spend long reminding you about it. It's based on rooted trees; for me, the roots of the trees are at the bottom. So here you have the trees with two vertices, three vertices, four vertices. The product is the disjoint union, so it has a basis of forests, and the first coproduct is the Connes-Kreimer one, given by admissible cuts: you take your tree or your forest, you cut some branches, you put the branches on the right and the trunk on the left. For example, for this tree, you can cut nothing and you get the tree tensor one, or you can cut everything and you get one tensor the tree, or you can cut a leaf in two possible ways, which gives two terms, the trunk tensor the leaf, or you can cut the two leaves, which go on the right, and only the root remains. And the same for this one: you can cut nothing or everything, or the trunk after the root, or the trunk just before the leaf, and you obtain these four terms. There is a primitive part, the first two terms. The counit is very simple: the counit of a forest is one if the forest has no vertex and zero otherwise. And you can observe that it is graded, obviously, by the number of vertices: if you cut a forest, you don't lose any vertex, some go on the left and the others on the right, so this is graded by the number of vertices.
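Here is a small computational sketch (my own encoding, not from the talk) of the coproduct by admissible cuts just described, following the convention above of trunk on the left and pruned branches on the right; a tree is stored as a parent dictionary, and a cut is admissible when no two cut edges lie on the same root-to-leaf path.

```python
# Sketch: Connes-Kreimer coproduct of a rooted tree via admissible cuts.
# A tree is a dict {vertex: parent}, with parent None for the root.  A cut is a set
# of vertices whose incoming edge is cut (cutting "above the root" gives 1 (x) t);
# it is admissible when no cut vertex is an ancestor of another cut vertex.
from itertools import combinations

def ancestors(parent, v):
    out = set()
    while parent[v] is not None:
        v = parent[v]
        out.add(v)
    return out

def ck_coproduct(parent):
    verts = list(parent)
    terms = []
    for k in range(len(verts) + 1):
        for cut in map(set, combinations(verts, k)):
            if any(ancestors(parent, v) & cut for v in cut):
                continue                      # two cut edges on one path: not admissible
            pruned = {v for v in verts if v in cut or ancestors(parent, v) & cut}
            trunk = [v for v in verts if v not in pruned]
            terms.append((trunk, sorted(pruned)))
    return terms

# the tree with three vertices in a line, root 1 at the bottom
ladder = {1: None, 2: 1, 3: 2}
for trunk, branches in ck_coproduct(ladder):
    print(trunk, "(x)", branches)
# Four terms, as in the example above: tree (x) 1, 1 (x) tree, and the two single cuts.
```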
So this is the most famous coproduct on it, but there is a second one, first described by Damien Calaque, Kurusch Ebrahimi-Fard and Dominique Manchon in 2008, I think, which is given by the process of extraction and contraction. What does this mean? For example, for this tree, you can separate your tree into disjoint subtrees. You can split it into three subtrees which each have only one vertex: you contract these subtrees, and nothing happens, and on the other side you put these subtrees, so here you don't do anything. You can contract only the edge on the left, so this subtree, and this gives this tree, in two possible ways. Or you can contract the whole tree, so only one vertex remains, and put the whole tree on the other side. So this is another coproduct, which is also coassociative. It's not cocommutative, as you can see, and it's not a Hopf algebra, because you have a group-like element, the tree with only one vertex, which has no inverse, so you only obtain a bialgebra for this coproduct. And they prove that this is a double bialgebra, which means that this coproduct really coacts in a good way on the first, Connes-Kreimer coproduct. There are similar constructions on finite posets; you can see trees as posets just by taking the partial order of being higher in the tree, so if you have a tree, you have a poset. Such a construction also exists on finite posets, or more generally on finite topologies. So this is the first, well not the first, but the second example of a double bialgebra. I'm going to give another one, based on graphs. For this, the basis of my Hopf algebra of graphs is the set of graphs; these are just simple graphs. Here you have all graphs, connected or not, with one, two, three or four vertices. There is a simple product on it, which is the disjoint union; the unit is the empty graph, which is here. For example, this graph is the product of this one by itself, something like this. There is also a very simple coproduct, which is just given by separating a graph into two parts: take your set of vertices, put some of the vertices on the left with the edges between them, and the other vertices on the right, also with the edges between them. You obtain a nice coproduct, which was first defined, I think, in a paper of Schmitt on incidence Hopf algebras; this is an example of an incidence Hopf algebra based on a family of graphs with given sets of vertices and edges. So this is a coproduct; it is coassociative, which is really not difficult to see, and it's cocommutative. This is very different from the Connes-Kreimer coproduct: this one is cocommutative. There is a second one, which can also be found in the paper of Schmitt, with another incidence bialgebra, and which was also described in a paper of Dominique in 2011, with various examples on graphs, oriented graphs, acyclic oriented graphs, and so on. It's the same idea as for trees: for trees, the first coproduct is just cutting the tree into two parts, and it's the same for graphs; the second coproduct was given by extraction and contraction, and it's the same for graphs. Just take a graph. You can take an equivalence relation on the set of vertices, which means that you just take a partition of your vertices. On the right, you contract your equivalence classes.
So this means that you contract some subgraphs of your graph to vertices, and on the left you keep only the edges between equivalent vertices; you forget the edges between vertices which are not equivalent. Let me give an example. For this one, you can contract everything, so that only a vertex remains, and the whole graph appears on the other side. You can contract only one edge, for example this one: the contraction gives a graph with two vertices and one edge between them, and the extraction is given by this edge together with the remaining vertex, something like this; there are three possible ways to do it. Or you can put every vertex in its own class: contracting single vertices does nothing, so the graph stays itself, and the extraction is just the vertices. So this is a second coproduct, and this is also a bialgebra. It is not a Hopf algebra, because there is a group-like element, the graph with one vertex, and it has no inverse, so you don't have any antipode. It's not a big problem, but it's not a Hopf algebra. The counit is very simple. In fact, because of these terms (for each graph you obtain a vertex tensor something, plus the graph tensor vertices, plus more terms), the counit is just given by sending a graph to one if it is totally disconnected, with no edges, and to zero otherwise, and you can prove that this is really a double bialgebra: the second coproduct, this extraction-contraction coproduct, really coacts on the first one. And the last example is the Hopf algebra of quasi-symmetric functions. As an algebra it is based on the set of compositions; a composition is just a finite sequence of positive integers. It has this product, which was used just before by Dominique: this is the quasi-shuffle product on compositions. The first coproduct is given by deconcatenation: you just cut your word into two parts between two letters. And there is a second one, which is given by extraction and contraction. For example, for this one, you cut your word into several parts: one part, two parts, two parts, three parts, two parts, two parts, one part. You contract the parts; contracting just means that you sum all the letters of each part, and because the letters are integers, you can sum them. This gives the terms on one side, and on the other side you quasi-shuffle the parts. So this is another coproduct; you can find it in the papers of Novelli, Thibon and others on this subject. I'm not totally sure they proved that this is really a double bialgebra (I'm not sure they proved the cointeraction), but it's true, and one can prove it with a trick based on manipulation of alphabets. So it's not really obvious, but you can do it without too much combinatorics, just with algebraic tricks. There is also a counit, I forgot it, which is given by this: you take a word, a composition, and if it has zero or one letter the counit is one, and zero otherwise. It turns out that this is a character of QSym which appears in another paper, by Aguiar, Bergeron and Sottile. In this paper they define the category of combinatorial Hopf algebras, which are pairs of a graded connected Hopf algebra and a character, and they prove that in this category QSym with this character epsilon prime (they don't mention that it is the counit of a certain coproduct, by the way) is a terminal object.
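As a small illustration of the product just mentioned (a sketch of my own, not from the talk), here is the quasi-shuffle of two compositions: each term starts with the first letter of the first word, the first letter of the second word, or the sum of the two.

```python
# Sketch: quasi-shuffle product of compositions (the product of QSym in the
# monomial basis).  Compositions are tuples of positive integers; the result is a
# Counter mapping each composition to its coefficient.
from collections import Counter

def quasi_shuffle(u, v):
    u, v = tuple(u), tuple(v)
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    out = Counter()
    for w, c in quasi_shuffle(u[1:], v).items():
        out[(u[0],) + w] += c                 # first letter taken from u
    for w, c in quasi_shuffle(u, v[1:]).items():
        out[(v[0],) + w] += c                 # first letter taken from v
    for w, c in quasi_shuffle(u[1:], v[1:]).items():
        out[(u[0] + v[0],) + w] += c          # the two first letters are summed
    return out

print(quasi_shuffle((1,), (2,)))
# Counter({(1, 2): 1, (2, 1): 1, (3,): 1})
print(quasi_shuffle((1, 1), (2,)))
# (1, 1, 2), (1, 2, 1), (2, 1, 1), (1, 3), (3, 1), each with coefficient 1
```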
So this means that if you take a combinatorial Hopf algebra, that is, a graded connected Hopf algebra with a character, you automatically obtain a morphism to QSym compatible with this counit. Okay, so that's nice. These are nice objects, but the question is what you can do with this, what you can deduce from this construction, and what it gives for these examples of graphs and trees and so on. First of all, take a double bialgebra A, with one product and two coproducts, and another bialgebra B, and look at bialgebra morphisms from A to B. It turns out that the monoid of characters of A for the second coproduct acts on the set of bialgebra morphisms, with the help of the coaction given by the second coproduct. This means that if you have one bialgebra morphism from A to B, you in fact have a lot more: you can deform any bialgebra morphism with the help of characters of A. If you have one, you get all the others just by using the action. So let's try to do this for forests. Forests form a double bialgebra, so there should be a unique morphism from forests to polynomials compatible with the product and both coproducts, and you can compute it uniquely. For example, let's start with the first tree, the tree with only one vertex. It's primitive for trees, so its image should be primitive for polynomials. The set of primitive elements of C[x] is one-dimensional, generated by x, so phi 1 of this tree should be a multiple of x. Moreover, phi 1 is compatible with the second coproduct and with its counit epsilon prime. Epsilon prime of this tree is one, so epsilon prime of its image should be one; epsilon prime of lambda x is lambda, so lambda should be one. So you entirely determine phi 1 of this tree: it should be x. For the second one, you do the same. Let's first compute the coproduct of this tree; this is it, and there is only one non-trivial admissible cut. Apply phi 1 to this: phi 1 is compatible with big Delta, so you should find something like this. This means that phi 1 of this tree should be x squared over 2 plus a primitive element, so plus lambda x. This morphism phi 1 is compatible with the counit epsilon prime, so epsilon prime of this polynomial should be epsilon prime of this tree, which is equal to zero, and you obtain that lambda is equal to minus one half. So you obtain phi 1 of this tree; this is exactly this. What this shows is that this morphism is unique. What is not clear is that it is really compatible with the second coproduct; I only used that it is compatible with the counit. But you obtain for free that it is in fact compatible with the second coproduct, just for free. You can continue like this. For this tree you obtain something like this, which is quite a famous polynomial, and for this tree another one, something like this. Maybe you recognize it: this polynomial counts the sum of squares; evaluated at n it is 1 squared plus 2 squared plus, etc., plus n squared. So these are quite special polynomials. You can do more, in fact. This is a nice way to compute the invariant phi 1, but it's quite long, and in fact you can do better: you can prove a formula like this. If you take an element a of your double bialgebra, you can compute phi 1 of a in this way: first, you compute all its reduced coproducts.
The reduced coproduct is just obtained by forgetting the primitive parts; for trees, this means that you forget the trivial cut and the total cut. You compute it and iterate, and iterate, and iterate, something like this, and you know that at a certain point it will stop: the iterated reduced coproducts are zero after a certain point. So you compute all of them, and you obtain tensors of trees and things like this. You apply the counit, and then you multiply the terms by the Hilbert polynomials. This means that your invariant really counts something: if you evaluate x at an integer, the Hilbert polynomial is a binomial coefficient, so phi 1 of a really counts something. For forests, it counts the following; this is quite a well-known construction. If you take a forest with k vertices, just choose an indexation of the forest (it does not really matter), then you can associate to it a polytope of dimension k. This polytope is defined by some inequalities: if the vertex i is below the vertex j in your tree, then you associate to it the inequality x_i smaller than x_j. So this defines a polytope, and you dilate it by an integer, you take n minus 1 times the polytope, just a homothety, and you count the number of integral points in it. It's quite a famous result that this is given by a polynomial in n, so this defines a polynomial, which is called the Ehrhart polynomial. The strict Ehrhart polynomial is the same, but it counts the number of integral points in the interior of your polytope. It is well known that this defines two polynomials, which are related, as I will say later. Just to mention a problem: usually in the literature, the Ehrhart polynomial at n counts the number of integral points of the polytope dilated by n. Here I have a problem: if I do this, it does not work, so I have to do a translation by one. For example, for this forest you have three vertices, which I index by 1, 2, 3 from bottom to top; 1 is smaller than 2, 2 is smaller than 3, so your polytope is defined by x smaller than y smaller than z, all between zero and one. The polytope associated to this forest is just a simplex. If you want to count the number of integral points in the dilated polytope, what you count is the number of integer points (x, y, z) such that x is smaller than or equal to y, smaller than or equal to z, all between 0 and n minus 1. It's not very difficult to count them, and this is n(n+1)(n+2) over 6. So this is the Ehrhart polynomial. For the strict Ehrhart polynomial you count the points inside, which means that you replace smaller-or-equal by strictly smaller, and n minus 1 by n plus 1. This counts something like this, and it's not difficult to prove that it is this polynomial, which by the way is in fact phi 1 of the forest evaluated at n. You can do the same thing for this tree. Here you can try to draw the polytope: it is a pyramid with a square base. You can count the number of points, which is this; so this is the Ehrhart polynomial and the strict Ehrhart polynomial, which is again phi 1 of the forest evaluated at n. And in fact this is exactly the statement: the unique morphism compatible with the product and the two coproducts is the strict Ehrhart polynomial. Okay. So if you look at this, you can observe that the strict Ehrhart and the Ehrhart polynomial are very similar.
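Here is a brute-force check of the worked example above (a sketch of my own; the closed formulas are the ones given in the talk) for the three-vertex chain: the Ehrhart count, the strict count, and the reciprocity between them.

```python
# Sketch: integer-point counts for the order polytope of the chain 1 < 2 < 3.
# Ehrhart count: 0 <= x <= y <= z <= n-1;  strict count: 0 < x < y < z < n+1.
from itertools import product

def ehrhart_chain3(n):
    return sum(1 for x, y, z in product(range(n), repeat=3) if x <= y <= z)

def strict_ehrhart_chain3(n):
    return sum(1 for x, y, z in product(range(1, n + 1), repeat=3) if x < y < z)

for n in range(1, 7):
    E, Es = ehrhart_chain3(n), strict_ehrhart_chain3(n)
    assert E == n * (n + 1) * (n + 2) // 6        # Ehrhart polynomial from the talk
    assert Es == n * (n - 1) * (n - 2) // 6       # strict Ehrhart polynomial, phi_1
    strict_at_minus_n = (-n) * (-n - 1) * (-n - 2) // 6
    assert E == -strict_at_minus_n                # reciprocity: E(n) = (-1)^3 Es(-n)
print("counts for the 3-chain match the closed formulas and reciprocity")
```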
More or less they have the same coefficients, and things like this. In fact, you can prove that there is another morphism from the Connes-Kreimer Hopf algebra to polynomials which is compatible with the product and the two coproducts. This is not the Ehrhart polynomial, but more or less: you just replace x by minus x, and you have to correct the sign by multiplying by a power of minus one. It's not very difficult to prove combinatorially that this is also compatible with the product and both coproducts. But there is only one morphism compatible with the product and the two coproducts, so this morphism is the same as phi 1, and what you obtain is an algebraic proof of the reciprocity principle for Ehrhart polynomials: the strict Ehrhart polynomial and the Ehrhart polynomial are really closely related, one is obtained from the other just by replacing x by minus x and correcting the sign. So here, as an application, you have a proof of the reciprocity principle for Ehrhart polynomials more or less without any real combinatorial work; the usual proof uses Möbius inversion in posets, so it is really combinatorial. Can I ask a quick question here? Sorry. You exemplified this in the case of these polytopes you get from trees via the poset order. Can you also use a similar kind of argument to get this Ehrhart duality result for arbitrary polytopes? I did not manage to do it. In fact, I would need a Hopf algebra structure on polytopes, a good coalgebra structure on polytopes. I have a product, just the usual product of polytopes, but I don't find the coproducts. Those coming from forests are very special; you see, they are defined by some inequalities which are nice. For an arbitrary polytope, I don't know how to cut it. So, yeah. Thank you. So let's do it for graphs now. For graphs, I can apply the formula for phi 1 which I gave before. What does this mean? I have to take all the iterated coproducts of graphs, which means that I cut the graph into any number of parts I want; the iterated coproduct just means that I really cut the graph into a lot of parts. Then, on each part, I apply the counit of the second coproduct, and this counit is one if the part has no edge and zero otherwise. So only decompositions of your graph into parts with no edges contribute to phi 1. A decomposition of the graph is the same thing as a coloring of the graph: a coloring just associates to each vertex of the graph a color, which usually is a number, so a partition of the vertices is just the same as a coloring. And the colorings contributing to my phi 1 are just the colorings which are called valid: if two vertices have the same color, they should not be neighbors, there should be no edge between them. So in phi 1 of a graph I only take into account valid colorings. This is a polynomial which counts exactly that, and it is called the chromatic polynomial. So in fact what I find for graphs is that the unique morphism compatible with the product and both coproducts is the chromatic polynomial, which is perhaps not a big surprise, because if you work with graphs you know that the chromatic polynomial is an essential tool for this kind of invariant. So this is perhaps an explanation of why it is so important: it is the unique polynomial invariant on graphs which is compatible with all these structures of contraction-extraction and extraction of subgraphs.
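As a check of this description (a sketch of my own; the two small graphs are made-up examples), counting valid colorings with k colors indeed reproduces the chromatic polynomial.

```python
# Sketch: phi_1 of a graph counts valid (proper) colorings, i.e. it is the
# chromatic polynomial.  Brute force over all colorings with k colors.
from itertools import product

def proper_colorings(vertices, edges, k):
    index = {v: i for i, v in enumerate(vertices)}
    return sum(
        1
        for colors in product(range(k), repeat=len(vertices))
        if all(colors[index[u]] != colors[index[v]] for u, v in edges)
    )

path = (["a", "b", "c"], [("a", "b"), ("b", "c")])
triangle = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])

for k in range(6):
    assert proper_colorings(*path, k) == k * (k - 1) ** 2           # path on 3 vertices
    assert proper_colorings(*triangle, k) == k * (k - 1) * (k - 2)  # triangle
print("brute-force coloring counts agree with the chromatic polynomials")
```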
OK, so something else — another application. I'm going back to a theoretical result; in fact, I'm looking for the antipode. In all my examples, for the big Delta this is a Hopf algebra, so it has an antipode, and for the second coproduct it's just a bialgebra, so no antipode. In fact, I can prove that if I want to compute the antipode of A, I just have to compute the inverse of a special character, which is the counit of the second coproduct. The counit of the second coproduct is the unit for the convolution associated to the second coproduct, but for the first coproduct it's just a character with no special property. Maybe it's invertible; if it is, then you know that A is a Hopf algebra, and you have a nice formula for your antipode: just apply the second coproduct, and then the character on the first component. So for double bialgebras, if you want to compute the antipode, you just have to compute a special character.

There is something more. Maybe you observed that all my examples of double bialgebras are commutative. In fact, what you obtain here is that S is a composition of algebra morphisms, which means that in a double bialgebra the antipode of A is an algebra morphism. But usually it's an anti-algebra morphism. So double bialgebras are special bialgebras such that the antipode is both an algebra and an anti-algebra morphism, which means that, essentially, if you have a double bialgebra, it should be commutative. This is the reason why all my examples are commutative: you cannot obtain a double bialgebra which is not commutative, because — this is one of the reasons — the antipode should be an algebra morphism.

You can do more. You have to compute the inverse of a character, which is not so obvious, in fact — you can do it inductively, but it's not so obvious. But if you know how to compute phi 1, it's very easy to find the inverse of this character: just take an element a of your algebra, compute phi 1 of a, which is a polynomial, and evaluate it at minus 1. And this is really the character alpha at a. So for example, for rooted forests, if you want to compute the antipode, you need the Ehrhart polynomials of the forests evaluated at minus 1. And it's very easy: it's just a power of minus 1, so alpha of a forest is just a power of minus 1. And you obtain this formula for the antipode, which was proved by Connes and Kreimer — not in this way at all; theirs is an inductive proof. I can obtain this, something like this, with no induction.

It's more interesting for graphs. For graphs, for a long time, the antipode was not really known — you can compute it inductively, of course, but it was not so clear. Last year, in fact, a formula was proved by Benedetti, Bergeron, and Machacek — I'm not totally sure of the pronunciation — with a combinatorial method which was quite complicated: there is a Möbius inversion, and curiously, the number of acyclic orientations appears. In fact, this is obtained with my method like this: you have to compute phi 1 of the graph evaluated at minus 1. Phi 1 of the graph is the chromatic polynomial, and it's quite a famous result in graph theory that the chromatic polynomial evaluated at minus 1 counts the number of acyclic orientations. And to prove this is not so difficult — it's just a combinatorial proof by induction on the number of vertices.
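The fact just invoked — the chromatic polynomial at minus 1 counts acyclic orientations, up to the sign (-1) to the number of vertices — is easy to check by brute force on a small example. The sketch below is mine: the graph, the interpolation trick, and all names are illustrative assumptions, not part of the talk.

```python
from itertools import product

def proper_colorings(n, edges, k):
    return sum(1 for c in product(range(k), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

def chromatic_poly_at(n, edges, x):
    """Evaluate the chromatic polynomial at x by Lagrange interpolation through k = 1..n+1."""
    pts = [(k, proper_colorings(n, edges, k)) for k in range(1, n + 2)]
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        term = float(yi)
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return round(total)   # exact for these small integer values

def acyclic_orientations(n, edges):
    count = 0
    for choice in product([0, 1], repeat=len(edges)):
        arcs = [(u, v) if d == 0 else (v, u) for (u, v), d in zip(edges, choice)]
        remaining = set(range(n))
        indeg = {v: 0 for v in range(n)}
        for _, v in arcs:
            indeg[v] += 1
        while True:  # repeatedly strip sources; the orientation is acyclic iff everything gets stripped
            sources = [v for v in remaining if indeg[v] == 0]
            if not sources:
                break
            for v in sources:
                remaining.discard(v)
                for a, b in arcs:
                    if a == v and b in remaining:
                        indeg[b] -= 1
        count += (len(remaining) == 0)
    return count

# Example: a 4-cycle with one chord (two triangles glued along an edge).
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
assert chromatic_poly_at(n, edges, -1) == (-1) ** n * acyclic_orientations(n, edges)
print("P_G(-1) = (-1)^n * #acyclic orientations holds for this example")
```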
So what you obtain is the formula of Benedetti, Bergeron, and Machacek, with no more combinatorics, more or less: you just apply the chromatic polynomial evaluated at minus 1. OK. You can do better with the chromatic character. In fact, there is a very simple Hopf algebra morphism from graphs to polynomials, which just sends a graph G to a monomial, x to the power of the number of vertices of G. It is really easy to show that this is a Hopf algebra morphism — of course, it's only compatible with the first coproduct, not with the second one. The unique morphism compatible with the second one is phi 1, the chromatic polynomial. And I mentioned before that any Hopf algebra morphism from graphs to polynomials should be obtained from the chromatic polynomial by the action of a character. So we can write that phi 0, this very simple polynomial invariant, should be obtained from the chromatic polynomial by the action of a character, which I denote by lambda, and which is very easy to compute: you just send any graph to 1. So lambda is a very simple character. But what is more interesting is that it is invertible for the second convolution. This means that you can obtain the chromatic polynomial from this very simple morphism just by the action of a certain character, which is not so easy to find. So you obtain a formula for the chromatic polynomial: in fact, the chromatic polynomial is the sum over all possible contractions of your graph of x to the power of the number of classes of the contraction, times a scalar which can be inductively computed. So lambda, this is the chromatic character, and you can compute its values just by induction. And you can observe on this example that it never vanishes — the chromatic character is never 0 — and its sign only depends on the number of vertices: with one vertex it's positive, two vertices negative, three vertices positive, four vertices negative, and you can prove this just by induction, just by something like this. I don't have any more time, so I'll cut this a little bit. Just by this, and with this formula, you can prove that the coefficients of the chromatic polynomial are alternating in sign, which is a result proved by Rota in the 70s, I think, with complicated combinatorial methods. Here you obtain it just with a small combinatorial tool, which is the contraction–extraction of edges, and then this formula relating the chromatic polynomial to the chromatic character.

Okay, I think I still have five minutes. For the moment I talked about morphisms with values in polynomials; now I'm going to talk about morphisms with values in the algebra of quasi-symmetric functions. I mentioned before that, by Aguiar, Bergeron, and Sottile, I know that there are a lot of morphisms to it, because it's a terminal object: if I want a morphism to QSym, I just have to choose a character, and then I will obtain a homogeneous morphism compatible with the product and the first coproduct. And I have a formula for this, which is similar to the formula for the polynomial invariant. If I want to construct a morphism from A to QSym, I take all the iterated coproducts, I apply some projections on it — I project onto the homogeneous parts — then I apply the counit on all the parts, and I take into account the degrees of the parts with a composition, something like this.
So what I can prove is: if I'm looking for a morphism from A to QSym compatible with both bialgebra structures, so compatible with the product and the two coproducts, this is the only possibility. If I'm looking for something compatible with the product and the coproducts, this is the only candidate which could work. And unhappily, it does not work every time — I need another condition, a condition on the gradation. In fact, I need that the second coproduct more or less respects the gradation on the first component. If this technical condition is not satisfied, then phi 1 is not compatible with the second coproduct, which means that I won't have any morphism compatible with both structures. And this is what happens for forests: my condition means that in the second coproduct I should only obtain things with three vertices on the left, and this is not the case — there are some red trees which should have three vertices if I want to be compatible with the second coproduct, and unhappily they have only two vertices. So in this case I won't have any morphism compatible with both structures from trees to QSym.

So I have to cheat a little bit, and I just replace forests by decorated forests: on any vertex of my forest I add a decoration, which is an integer. If I do some contraction — for example, here I'm contracting the edge between a and b — well, I don't forget a and b, I just replace the decoration of the resulting vertex by a plus b. So now the coproduct is homogeneous: here the weight is a plus b plus c, and on the left, and also on the right, I only obtain trees of weight a plus b plus c, so I can just put them in black. And now, with this trick, the technical condition on the second coproduct which I mentioned is satisfied. So I obtain for free a Hopf algebra morphism from decorated forests to QSym, which is homogeneous for this gradation by the weight and compatible with all the structures. And this is a generalization of the Ehrhart polynomial, which I call the Ehrhart quasisymmetric function — it is something like this. And the same for trees — for graphs, sorry: what I obtain is a generalization of the chromatic polynomial, which is called the chromatic quasisymmetric function. But in fact it's not only quasisymmetric, it's symmetric, for a reason of cocommutativity. And this is an object which was already known to combinatorialists and graph theorists — well, they knew it, but they didn't know that it was compatible with this second structure on quasi-symmetric functions.

And a last word. I said before that double bialgebras do not go very well with noncommutativity. In fact, you can build noncommutative versions: you can replace forests by indexed trees or planar trees, something like this, and you can also define two coproducts. They are no longer compatible as before, but they still exist, so why not? And you can generalize your chromatic series or Ehrhart series: all the morphisms I mentioned before still exist in a noncommutative way, but you cannot use the formalism of double bialgebras anymore — these are no longer double bialgebras. So if you want to explain it, you have to work in another category, which is in some sense bigger, which is the category of species. In fact, all my objects are images, under suitable functors, of something which exists in the category of species.
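For readers who have not met it, the symmetric function alluded to here is, in Stanley's normalization (my addition for orientation; the conventions on the speaker's slides may differ),

```latex
X_G \;=\; \sum_{\substack{\kappa\colon V(G)\to\{1,2,3,\dots\}\\ \kappa\ \text{valid}}}\ \prod_{v\in V(G)} x_{\kappa(v)},
\qquad
X_G\big|_{x_1=\dots=x_k=1,\; x_{k+1}=x_{k+2}=\dots=0}\;=\;P_G(k),
```

so that the sum runs over valid (proper) colorations, and specializing the first k variables to 1 and the rest to 0 recovers the chromatic polynomial evaluated at k.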
In the category of species, there are two functors which give, on the one hand, the commutative objects, which are double bialgebras, and on the other hand, noncommutative objects, which are not double bialgebras themselves, but which are images of double bialgebras in the category of species. So I stop here, and thank you very much for your attention.

Thank you very much, Loïc. Are there questions?

Okay, yes. Did you actually upgrade all your formalisms to the species setting?

Yes. In fact, in the species setting you can do the same. So QSym and K[x] are replaced by the same object, which is the species of compositions, which is also a double bialgebra, something like this. And it turns out that you have two functors, the Fock functors — the full Fock functor and the other one. One will send you to commutative objects, and the other one to noncommutative objects. The double bialgebras in the setting of species are commutative in the species sense — it's a sort of commutativity in that setting — and with the second functor the images are no longer commutative in the usual sense. So you can do all of this in the setting of species, with more or less the same proofs, something like this. There are some technicalities in some sense, but these are the same ideas.

If I may, I'd like to make a comment or ask a question. Thank you, Loïc, for your nice talk. You were wondering about a coproduct on polytopes. There is one on cones, and polytopes can be seen as intersections of cones, and that's a way a coproduct on polytopes of the kind you were talking about could be derived. So I think it should be possible, maybe, going that way, following the path of Barvinok and Brion; I wonder whether it's possible.

So I can say that I found some coproducts on polytopes, but more or less they are useless. They are stupid coproducts, and you don't obtain the Ehrhart polynomials as invariants — you obtain stupid things, just by sending, for example, a polytope to its number of vertices or something like this. I just found stupid coproducts, not interesting ones.

Okay. The one I'm thinking of is very geometric, and it serves similar purposes: it's for counting integer points in cones, so it's made for that. And it's implicit in people's work in toric geometry. So it could be helpful, maybe. It's a geometric one.

Okay, so it should be better than the ones I found. I will look at it.

There's a question from Yannic Vargas in the Q&A — I have allowed him to unmute — but he asks: is there a polytope associated to double posets the same way you define the polytope associated to a forest?

Yes. In fact, everything I did for forests can be done for posets or topologies, and you also obtain some Ehrhart polynomials with the same properties. I just restricted myself to forests because they were easier to describe than posets or topologies.

David, do you want to ask your question?

Yes. I wanted to make a comment, because it's directly related to the title of this conference, on algebraic structures and quantum field theory. Now, Loïc, you know very well Dirk's work on the Hopf algebra of renormalization, but I hope you're also aware of his recent work on the coaction associated with the monodromy of the functions we get from the Feynman diagrams, and that's associated with cutting lines. There is an interesting question as to how that relates to the coaction on multiple polylogarithms that we obtain, and functions beyond that.
So there is, facing us at present in quantum field theory, a compatibility question — it might not be directly related to your talk — and that is: what about calculations in which we both cut lines to discover the analytic structure, but also have subdivergences that we have to renormalize?

Yeah, yeah, that's precisely the right question, David, but I think there will be an answer pretty soon, and it has a lot to do with Loïc's cointeracting bialgebras.

Excellent. Thank you.

Okay, thank you. I think there's also a simpler connection: when we were looking at the general tree Feynman rules, having the costructure on trees, the cointeracting structure on trees, picks out a particular one, a particular choice for the lower-order terms. So that's a more trivial observation than what David and Dirk are getting at, but it's another connection to the quantum field theory situation. I see Mischi has his hand up.

Yes, thanks, and thanks, Loïc, for the nice talk. I also have a question regarding the coproducts on polytopes. It's about whether there's a relation, or whether you're aware of this newer work by Aguiar together with Ardila on the Hopf monoid structure of generalized permutahedra, which are polytopes, and on which you can define this coproduct. Is this too specific for your purposes? At least they ask some questions about Ehrhart polynomials of these polytopes.

Yes, but the permutahedra are very specific polytopes.

All right.

But in this case it's easier to define some coproducts on this sort of objects, which really have a strong combinatorial structure. If you take arbitrary polytopes, that's not the case — that's just the problem.

The problem is really the generalization then. So this only works for these specific polytopes.

Yes. I think that for a special family of polytopes with a strong structure, you can define a coproduct which should give you the Ehrhart polynomial or things like this. For arbitrary polytopes — in fact, this coproduct you can see easily on the poset, you just cut, or things like this; on the polytope, geometrically, it's not so clear to me. Is it a selection of faces or something like this? It's really something more complicated. So I don't know exactly what it is geometrically.

Okay. I see. Thank you.

The problem for me is that I can understand polytopes in three dimensions, but no more, and that's only a few examples — three or four — which I can manage. So it's not enough for me to understand what happens for bigger polytopes. So I don't know exactly what the coproduct is for polytopes.

All right. In the interest of time, why don't we thank Loïc again now? Thank you. Thank you.
Pairs of cointeracting bialgebras have recently appeared in the literature on combinatorial Hopf algebras, with examples based on formal series, on trees (Calaque, Ebrahimi-Fard, Manchon), graphs (Manchon), posets... We will give several results obtained on pairs of cointeracting bialgebras — actions on the group of characters, antipode, morphisms to quasi-symmetric functions... — and we will give applications to Ehrhart polynomials and chromatic polynomials.
10.5446/51271 (DOI)
I want to start off by saying that, although I haven't personally interacted with him very much, my work has been quite heavily motivated and influenced by his work. My PhD advisor is Karen, who was advised by Dirk, so he's sort of my academic grandfather, you could say. And one way in which that influence shows up is in this talk. So, at a high level, I'm going to talk about some tree-like equations — these are going to be generalizations of certain Dyson-Schwinger equations — and then I'm going to describe how some chord diagrams come into play when solving these.

All right. I'm sure most people here know the Connes-Kreimer Hopf algebra. It's a combinatorial Hopf algebra introduced by Dirk in the context of renormalization: the free commutative algebra, freely generated over a field F, by the set of rooted trees. The product is concatenation of forests, and the coproduct looks like this — you're basically cutting the tree along an antichain. The relevant property here is the universal property of Connes-Kreimer, and this is one of the reasons why Connes-Kreimer is important in certain contexts. It has this universal property, and it is the only Hopf algebra, up to isomorphism, with this property. So what is this property, for those who may not be familiar with it? If we have a coalgebra A, we first have to define something called a Hochschild 1-cocycle — this comes from Hochschild cohomology. It's a linear map from the coalgebra to itself satisfying this equation, which basically says: if you take the composition of the coproduct and the linear map L, then you get two terms, the identity tensor L composed with the coproduct, plus this extra term of L tensor 1 (here, this is the identity map). So that's what a 1-cocycle is. And in Connes-Kreimer, the add-a-root operator B plus — the operator that takes a bunch of trees, adds a root, and sticks each tree as a child of that root — is a 1-cocycle; you can check that this equation is satisfied.

And then this is the universal property, proved by Connes and Kreimer. If we have a commutative algebra A over a field F and some linear map L from A to itself, then the universal property says that there exists a unique algebra homomorphism from Connes-Kreimer to this commutative algebra A such that composing the homomorphism with B plus is the same as composing L with the homomorphism — basically, this homomorphism is turning B plus into L. And if you assume a bit more about A, you get stronger properties of the homomorphism: if A is a bialgebra — so both an algebra and a coalgebra in a compatible way — and L is a 1-cocycle (that's where the 1-cocycle comes in), then this homomorphism is a bialgebra homomorphism as well; and if A is also a Hopf algebra, then it becomes a Hopf algebra homomorphism. These are both just saying that it's compatible with that structure. So that's a really nice property.

And then, slightly differently, where are the tree-like equations going to come from, or how are we going to motivate them, at least? We're going to motivate them by certain subalgebras — some Hopf subalgebras of Connes-Kreimer that Foissy looked at.
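To see the cocycle identity in action with the conventions just described (coproduct by admissible cuts, B-plus adding a root), here is a check on the two smallest cases; the notation, including writing ℓ₂ for the two-vertex ladder, is mine.

```latex
\Delta\!\big(B_+(\mathbf{1})\big)=\Delta(\bullet)
  =\bullet\otimes 1+1\otimes\bullet
  =(\mathrm{id}\otimes B_+)\,\Delta(\mathbf{1})+B_+(\mathbf{1})\otimes 1,
\\[2pt]
\Delta\!\big(B_+(\bullet)\big)=\Delta(\ell_2)
  =\ell_2\otimes 1+1\otimes\ell_2+\bullet\otimes\bullet
  =(\mathrm{id}\otimes B_+)\big(\bullet\otimes 1+1\otimes\bullet\big)+B_+(\bullet)\otimes 1,
```

using Δ(𝟙) = 𝟙 ⊗ 𝟙, B₊(𝟙) = • and B₊(•) = ℓ₂.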
And they're generated by a family of recursive equations of the following form: T of x equals x times B plus applied to phi of T of x, where phi is some formal power series with a nonzero constant term, which we set equal to 1. This has a formal power series solution with coefficients in Connes-Kreimer. If we write p sub n for the nth coefficient of the solution to this equation, then Foissy characterized when the subalgebra of Connes-Kreimer generated by those coefficients, by the p n's, is a Hopf subalgebra. The theorem is this: it's a Hopf subalgebra if and only if the formal power series phi has this simple form — basically specified by two constants, a and b, it's this simple linear polynomial raised to this fractional power.

So we're going to apply the universal property to get the tree-like equations that we want to look at, which are motivated by these tree equations. In principle you could think about applying this to any algebra, coalgebra, or bialgebra structure; we're going to apply it to the polynomial algebra and a linear map from the polynomial algebra to itself. That gives, by the universal property, some algebra homomorphism from Connes-Kreimer to the polynomial algebra — the polynomial algebra is just the algebra of polynomials. So we get this algebra homomorphism, and then we apply it to the tree equation: we apply it to both sides, and that gives this two-variable equation, where B plus has become L, this linear map. One way you can think about this is that the algebra homomorphism corresponds to the Feynman rules — Feynman graphs get associated with Feynman integrals via their tree of subdivergences — and then ultimately you can sometimes think of this as a kind of Dyson-Schwinger equation, or a generalization of Dyson-Schwinger equations. Here we're just working with an arbitrary linear map; that's not necessarily going to give you anything all that interesting, but the universal property points the way to what would be interesting to look at, which is 1-cocycles arising from coalgebra structures on the polynomial algebra F[y]. And just to comment, you could also not look at the polynomial algebra — you could look at other algebra and coalgebra structures and consider 1-cocycles for those, which may be of interest — but we're going to focus on the polynomial bialgebra here, or coalgebra at the very least.

There are actually two classic graded coalgebras on polynomials: the binomial coalgebra and the divided power coalgebra. They're isomorphic, but which one we look at does matter here, because we're keeping the algebra the same but varying the coalgebra, so we'll get different results. For the binomial coalgebra, the coproduct is the following: you get this binomial coefficient, and you're breaking y to the n into y to the k tensor y to the n minus k. Now what we get is this lemma defining, or describing, the 1-cocycles for the binomial coalgebra — oh, this should be F[y].
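As a concrete illustration of how the recursion generates its coefficients — a standard low-order computation I am adding here, with exactly the normalization of the equation above — two of Foissy's admissible choices give:

```latex
\varphi(h)=1+h:\qquad p_1=\bullet,\quad p_{n+1}=B_+(p_n),\quad\text{so } p_n \text{ is the ladder on } n \text{ vertices};
\\[2pt]
\varphi(h)=\frac{1}{1-h}:\qquad p_1=\bullet,\quad p_2=B_+(\bullet),\quad
p_3=B_+\!\big(p_2+p_1^{\,2}\big)=\text{(3-vertex ladder)}+\text{(root with two children)},\ \dots
```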
So what this says is that for each such 1-cocycle there is a formal power series f of z defining it, and the action of L on y to the n is given by substituting a differential operator into f, this formal power series, applying that to t to the n, and then integrating all of that from zero to y. So that gives the 1-cocycles for the binomial coalgebra. And then the divided power coalgebra — what does that look like? It looks basically the same; we're just dropping the binomial coefficient from the coproduct. The corresponding 1-cocycles also look very similar: they're also defined by some formal power series f, and in this case you replace the differential operator by a slightly different operator that behaves similarly. If you look at how it acts on y to the n: instead of dropping the exponent by one and then multiplying by n, you just drop the exponent by one. So all this operator does is drop the exponent by one, and constants go to zero. We substitute that into f, just like we did up here, and then instead of integrating, we just multiply by y. So that's what the 1-cocycles look like for the divided power coalgebra.

So we have these two coalgebras, and we can look at the tree-like equations for each of them. Now we have the equation we were looking at before, and L can be a 1-cocycle for the binomial coalgebra or a 1-cocycle for the divided power coalgebra, and we're interested in solving both of these. We can think of these as Dyson-Schwinger equations; in particular, the former corresponds to a Dyson-Schwinger equation for a class of Feynman graphs generated by recursively inserting, at one place, into a single primitive graph, if we keep the Feynman integral unspecified — then we get exactly this. Here "unspecified" just means you consider an arbitrary Laurent series expansion of whatever the Feynman integral is; then you get basically this first equation, from the binomial coalgebra. The divided power coalgebra doesn't really correspond to a Dyson-Schwinger equation in the same way, at least as far as I know, but it's motivated by the same background.

So what we were interested in, along with Karen, and what others have looked at as well in the past, is solving these as weighted generating functions indexed by some nice combinatorial objects. This has already been done for the first equation, with the binomial 1-cocycle, but not for the second one — as far as I know, no one has looked at the second one before. And the way it was done for the first one, and the way we did it for the second one also, is that the weighted generating functions are going to be indexed by chord diagrams. So let me briefly explain some of the background here. A chord diagram is a perfect matching of 1 up to 2n — that just means you're pairing these elements up — and we might represent it by this sort of diagram, where the chords are just these little lines. We wanted to find some parameters in order to specify the solutions to these tree-like equations, and those parameters are going to be defined in terms of the chord diagram and its directed intersection graph.
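Here is a small executable sketch of the two cocycles exactly as just described, with the formal power series f truncated to a polynomial; the names (L_bin, L_div, shift) and the sample coefficients are mine, so treat this as an illustration of the description above rather than the paper's definitions.

```python
import sympy as sp

y, t = sp.symbols('y t')
f_coeffs = [1, 2, 3]          # f(z) = 1 + 2z + 3z^2, an arbitrary example truncation

def L_bin(p):
    """Binomial-coalgebra cocycle: integrate f(d/dt) applied to p(t) from 0 to y."""
    pt = p.subs(y, t)
    acted = sum(c * sp.diff(pt, t, k) for k, c in enumerate(f_coeffs))
    return sp.integrate(acted, (t, 0, y))

def shift(p):
    """The 'drop the exponent by one' operator: y^n -> y^(n-1), constants -> 0."""
    q = sp.expand(sp.S(p))
    coeffs = sp.Poly(q, y).all_coeffs()[::-1]          # ascending order
    return sum(c * y**(n - 1) for n, c in enumerate(coeffs) if n >= 1)

def L_div(p):
    """Divided-power cocycle: apply f(shift) to p, then multiply by y."""
    acted, q = sp.S(0), sp.S(p)
    for c in f_coeffs:
        acted += c * q
        q = shift(q)
    return sp.expand(y * acted)

print(L_bin(y**2))   # -> y**3/3 + 2*y**2 + 6*y   (since f(d/dt) t^2 = t^2 + 4t + 6)
print(L_div(y**2))   # -> y**3 + 2*y**2 + 3*y
```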
The directed intersection graph has the chords of the diagram as vertices, and two chords are adjacent if and only if they cross — literally, you look at the diagram, and if the two lines cross, the chords cross. So that's the intersection graph associated with each chord diagram. And it's directed in the sense that if one chord comes earlier in the diagram — its source, its first endpoint, comes earlier than the first endpoint of the other chord — then we put a directed edge from that chord to the other one. So edges are directed from earlier chords to later chords.

We're going to be interested in certain special chords that are, in a sense, terminal — terminal in the sense that they have no outgoing edges in the directed intersection graph, no outgoing incident edges. In particular, what that means for the diagram is that a terminal chord has no chords crossing it to the right. So for example, this chord diagram right here has three terminal chords, basically these last three chords, because they have no chords crossing them to the right; everything else does have a chord crossing it to the right, so those are not terminal. Terminal chords are going to be important for defining the solutions to these tree-like equations. One particular type of diagram that's especially relevant is the one-terminal diagrams: diagrams that have exactly one terminal chord. There is always at least one terminal chord, namely the last chord — the chord whose second endpoint, whose sink, comes at the very end.

One of the things we need is a total order on the chords of a chord diagram. The most standard way is just to order by the first endpoint of each chord, the source of each chord. There are a few other ways to define a total order on the chords, and one that will be important here is called the intersection order. How this order works: you label the root chord — that's the very first chord — one, and then you remove it. When you remove it, you get a bunch of nested connected components, nested connected chord diagrams — connected in the sense that the intersection graph is connected. Then you label the first connected component recursively, then the second one, and so on, and the labels determine the order. In general, this order is not going to correspond to the standard order by first endpoints; they actually diverge quite a bit in general.
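Since all of these notions are easy to compute, here is a short sketch (encoding and example diagram are my own choices) that builds the directed intersection graph and picks out the terminal chords in the standard source order; the recursive intersection order is omitted for brevity.

```python
# A chord diagram on n chords is a perfect matching of {1,...,2n}, stored as a
# list of pairs (source, sink) with source < sink.

def crosses(c1, c2):
    """Two chords cross iff their endpoints interleave."""
    (a, b), (c, d) = sorted([c1, c2])          # ensure a < c
    return a < c < b < d

def directed_intersection_graph(diagram):
    """Arcs go from the chord with the earlier source to the later one."""
    chords = sorted(diagram)                    # sort by source = standard order
    return [(i, j) for i in range(len(chords)) for j in range(len(chords))
            if i < j and crosses(chords[i], chords[j])]

def terminal_chords(diagram):
    """Chords with no outgoing arc, i.e. nothing crossing them to the right."""
    chords = sorted(diagram)
    have_out = {i for i, _ in directed_intersection_graph(diagram)}
    return [chords[i] for i in range(len(chords)) if i not in have_out]

# Example: three mutually crossing chords followed by one isolated chord.
example = [(1, 4), (2, 5), (3, 6), (7, 8)]
print(terminal_chords(example))   # -> [(3, 6), (7, 8)]
```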
So, okay, then we can define the solution — well, mostly define it, at least. We look at the divided power tree-like equation that we had before. It is uniquely solved by the following formal power series in x and y, indexed by top-cycle-free chord diagrams. These are chord diagrams that don't have any top cycles as subdiagrams — this chord diagram is forbidden to appear as a subdiagram. You can think of them as sort of halfway to diagrams that are just trees: if we forbid both top and bottom cycles, then we just have trees, so this is halfway there. Those are what index the formal power series solution. The x variable counts the size of the chord diagram, and the y variable counts the position of the first terminal chord: there is going to be a first terminal chord in the intersection order, its index in the intersection order is this t1 of C, and y counts the index of the first terminal chord minus one. And then, somewhat unusually, we're applying the divided power 1-cocycle to this — so really, for each chord diagram, you're getting a y to the i for all i up to the index of the first terminal chord minus one; that's how it works.

And then we have a few different weights. One of the weights, phi sub C, is just some weight determined by the coefficients of the formal power series phi defining the tree-like equation; those I'm not actually going to define, but they're relatively simple and just determined by C. More importantly, there is this f of C: this is a weight determined by the 1-cocycle. These f i's are the coefficients of the formal power series defining the divided power 1-cocycle, this L div. And this weight is defined as follows: we take a product of these coefficients indexed by the differences of consecutive terminal chords — we look at the indices of the terminal chords in the intersection order, take the differences of consecutive indices, and those differences index the terms in this product of the f i coefficients — and then we also get an f zero raised to the size of C minus k. So basically, you can think of this as just some weight determined by the 1-cocycle and the positions of the terminal chords in the intersection order. So that's the solution; it's actually nice.

Okay, so now what we're interested in is how we can analyze this solution and what sort of combinatorial questions it motivates. One of the fairly obvious ones is counting these top-cycle-free chord diagrams, which haven't been counted before. In particular, we would like to determine the number of top-cycle-free diagrams of size n with the first terminal chord having index k — those are precisely the diagrams that index this solution. So for the rest of the talk, I'm basically going to describe what that count is. The way we get that count is by finding a bijection; I'm not really going to describe how to prove it.
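Just to turn the verbal description of the cocycle weight into something concrete, here is a direct transcription as I understand it; the indexing conventions (1-based positions in the intersection order, padding with f_0) are my assumptions and may differ from the paper.

```python
# Given the indices t_1 < t_2 < ... < t_k of the terminal chords in the
# intersection order and the cocycle coefficients f_0, f_1, ..., multiply the
# f's indexed by consecutive gaps and pad with f_0 to the power (size) - k.

def terminal_weight(terminal_indices, f, size):
    w = f[0] ** (size - len(terminal_indices))
    for a, b in zip(terminal_indices, terminal_indices[1:]):
        w *= f[b - a]
    return w

# e.g. a diagram of size 5 with terminal chords at positions 2, 4, 5:
print(terminal_weight([2, 4, 5], f=[1, 2, 3, 4], size=5))   # f_2 * f_1 * f_0^2 = 3*2*1 = 6
```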
The way you do this is you find a bijection to some other combinatorial objects that have already been counted — that's the simplest, most straightforward way, at least, and the most combinatorially illuminating way. And those other objects are going to be triangulations. Triangulations in this case are plane graphs in which every bounded face is a triangle; the unbounded face is not necessarily a triangle. A triangle is just three vertices, all of which are mutually adjacent, and we're forcing every bounded face to be a triangle, while the unbounded face might have more than three vertices. We're actually specifically going to look at rooted triangulations: we root at a boundary edge, that is, an edge on the unbounded face, the exterior face. We'll call vertices that are not on the unbounded face interior vertices, and the vertices on the unbounded face exterior vertices. So this is what one of these triangulations looks like: all the bounded faces are triangles, we're rooting at this edge up here, and the unbounded face is not a triangle, because it has more than three vertices — it could be a triangle, of course, but in this case it is not.

So then what do we get? Well, there's an old result from the 60s by William Brown, who counted the number of rooted triangulations with a given number of interior vertices — in this case n — and m plus three exterior vertices (there always have to be at least three exterior vertices). The number of these is given by this fraction of factorials, so that's a relatively nice explicit expression. And it turns out these are exactly the objects that are in bijection with the top-cycle-free chord diagrams. So we have the following theorem — oh, this is one thing I forgot to mention, actually: I should have said these are connected top-cycle-free chord diagrams, so the intersection graph of these diagrams has to be connected. There exists a bijection between connected top-cycle-free diagrams with n chords and first terminal chord of index k in the intersection order — t1 equals k — and rooted triangulations with n minus k interior vertices and k plus one exterior vertices. In some sense, this is almost a more natural object to work with, because instead of dealing with the first terminal chord in the intersection order, we're just dealing with interior and exterior vertices of a triangulation. And that was it. Thank you.

Thank you, Lucas. All right, I see a question in the Q&A. Marcus asks if there's any hope of calculating the values for the top-cycle-free diagrams of size n where you also have information on the other terminal chords, or whatever the analog of the other terminal chords is.

Yeah, you would certainly want that to know more. I think it's probably possible, either by looking further at the related parameters in the triangulations and getting another bijection, or just by looking directly at the diagrams and explicitly counting them. But I haven't looked at that.
Do you know what the number of other terminal chords is going to correspond to in the triangulations?

That is a good question. I'm not entirely sure — there'd be some sort of recursive thing. Yeah, I'm not entirely sure; I haven't thought too much about that.

Are there other questions?

I had a quick question, just in general, about the divided power Hopf algebra above versus the binomial one. If I think about it naively, there's this exponential connection — in one, x is primitive, and in the other it's group-like. So can you somehow transport this relation between the Hopf algebras to the different results for the two cases, or do you have to do independent calculations for both?

Generally, you do independent calculations, but there may be something there. It would be really nice if there was some way to transfer that connection between the coalgebras, the Hopf algebras, to these chord diagram solutions. I don't know of such a way, but it would be nice if there was.

Thank you.

Well, as people are thinking of remaining questions, maybe just as a comment: the side that you didn't emphasize is that you have a more conceptual way of proving the original chord diagram result, whose original proof of mine was ugly, quite frankly. So there's value on both sides, even if the connection between them is not completely clear.

Any other questions?

Yes, I have a quick one still. Can you hear me? So with this universal property, can you then also go to a Hopf algebra on chord diagrams this way, or not? Is there a Hopf algebra on chord diagrams? That's another way to phrase the question.

Yeah, that would be nice. I thought a bit about this, but I don't know of such a Hopf algebra. It would be nice if there was something like that, and if it was relevant here.

I guess the follow-up question would be: if a Hopf algebra of chord diagrams could be set up, then what would be the combinatorial objects indexing the formal solution to the Dyson-Schwinger equations in that Hopf algebra?

That's a good question.

All right. If there are no further questions, let's thank Lucas again.
Tree-like Equations from the Connes-Kreimer Hopf Algebra and the Combinatorics of Chord Diagrams

We describe how certain analytic Dyson-Schwinger equations and related tree-like equations arise from the universal property of the Connes-Kreimer Hopf algebra applied to Hopf subalgebras obtained from combinatorial Dyson-Schwinger equations in the work of Foissy. We then show how these equations can be solved as weighted generating functions of certain classes of chord diagrams and obtain an explicit formula counting some of these combinatorial objects.
10.5446/51273 (DOI)
Well, thanks for the invitation and the introduction, and thanks, Karen and Eric, for making this happen, and to the team at the IHES for managing, against all odds, to let us meet and celebrate at least digitally. So what I'm going to talk about is a theorem of Kreimer — or, you might say, a future theorem of Kreimer. But before that, let me go back to this point of being one of the first PhD students of Dirk in Berlin. We were a group of six people, known as the Kreimer gang; here's a picture of us hanging out in Berlin. Something I want to mention is that half of us didn't have a real background in physics, and certainly not in mathematical physics and quantum field theory. I had never seen a Feynman diagram before in my life, Markus hadn't done any physics whatsoever, and Lutz had even started out as a butcher, then studied some kind of food science, did his diploma in some polymer physics, and then eventually switched over to quantum field theory. I think that says a lot about Dirk and his approach and his laid-back style, giving us the chance, forming this group, and worrying not only about the science in this group but also about a good atmosphere and a good group vibe, which we certainly had. We also had these nice field days together, and he would invite us at least once — I think in the first few years even twice — to his house, where his lovely wife Susanne cooked a fantastic dinner for us and Dirk served the finest wine. These were always big, big feasts — really Lucullan meals. So yeah, it's been very much a joy ride being in this group and then later continuing to work with Dirk.

Part of my work has been with Dirk on his vision. I said maybe "future theorems", but for now it's his vision, and this vision is that of Cutkosky rules in outer space — which is a slogan arguably on the same level as the cosmic Galois group acting on outer space, which is also quite nice, I think. For reference, I think the story started, at least in written form, with this article here by Spencer and Dirk, which contains basically all these ideas of Cutkosky rules, outer space, and the cubical chain complex. Then there's also this paper by Dirk alone, which actually has the title "Cutkosky rules in outer space". And most of the stuff I'm mentioning today is based on this article here that we did together this year — and, yeah, it's at least my point of view, my take on these things.

Regarding this term "vision": something else that I feel is very special about Dirk is that he's a real visionary, and as a PhD student this was sometimes a bit frustrating or challenging. You would ask him a question, and you would always get this sort of mystical answer. So there was always a guide to where to find the truth, but he lets you find it for yourself, and over the years I have really come to cherish this oracle-like way of Dirk's. As we heard throughout the talks yesterday and on Monday, he has inspired many people by this. So thanks for being our oracle in regards to Feynman diagrams and physics.

Okay, so let's get serious. Here's a baby example. We would like to study the analytic structure of a function defined by an integral, where the integrand depends on some parameter — here just a complex variable t. And for the sake of argument, let's take gamma to be a circle around one.
You see it here in the figure, and the integrand has two poles, at plus and minus the square root of t. For example, if we plug in t equals one, the integral is fine, because these singular points don't hit our integration contour, and we can do the integral, for example using the residue theorem. Then we can ask for which values of t this is still well defined. The point is that if I move t around, these singularities move in the complex plane, but I can also deform gamma to stay away from them — unless t goes to zero, because then the two singularities approach each other and there is no way for gamma to be deformed away; it has to be hit by the two singularities. What happens then is that I get a singularity in my function f of t. And here you can easily deduce the kind of behavior around this singular point, because if you let t encircle zero, the two square roots exchange their positions, each going half a circle, and in order to analytically continue, I have to deform gamma along the way. So what happens is that I travel, so to speak, from one over square root of t to minus one over square root of t.

Okay, so that's an example of a function defined by an integral. Here's a more sophisticated one, which we all know: the Feynman integral. We have a graph G with edges, loops, and legs. Here's the momentum space representation: we have this product of propagators, one for each edge, with these quadrics D i, q i squared minus some mass squared plus some i epsilon term. The q i's are always linear combinations of the loop momenta and the external momenta, which are associated to the legs of my graph, and M to the D is D-dimensional Minkowski space. The point being, this looks in some ways really similar to our baby example, but it's more complicated: there are more singularities and many more dimensions to deal with, and there's also the problem of the non-compact integration domain sitting in a non-compact complex manifold, at least in the first place. Here's an example I think everyone knows. The point is just that we can view this as a function of the kinematical data, the p's and the m's. In the following we will fix the m's and view it as a function of p; of course there's also momentum conservation between the p's — that's why no p 3 appears here — but for this talk I don't want to worry about this.

So the problem is that we want to understand the analytic structure of this function I G as a function of the momenta p. In the mathematical case, vastly generalizing the baby example I gave, there is a nice mathematical account — there's a book written by Frédéric Pham, and of course many others have worked on this. The problem is that it just barely misses the case of Feynman integrals: there are some technical problems which make the case of Feynman integrals way, way more complicated. So one could ask, as a very first step, where are the singularities of this map? In the 60s — actually in 1960 — Landau gave us some necessary conditions for when such singularities occur, and where. And quite remarkably, there has been no proof in the literature for 60 years now, although the conditions are commonly used and well agreed upon, I'd say. And Max Mühlbauer, a PhD student of Dirk — at least that's what he promised me on Monday — told me that he'll upload his paper to the arXiv this week. So Max, I hope you did.
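Assuming the integrand of the baby example is 1/(z² − t) — my reading of the description, not stated explicitly above — the residue theorem makes the monodromy explicit:

```latex
f(t)\;=\;\oint_{\gamma}\frac{dz}{z^{2}-t}
\;=\;2\pi i\,\operatorname*{Res}_{z=+\sqrt{t}}\frac{1}{z^{2}-t}
\;=\;\frac{\pi i}{\sqrt{t}}
\qquad(\text{for } t \text{ near } 1,\ \text{only } +\sqrt{t}\ \text{lies inside } \gamma),
```

and letting t run once around 0 exchanges the two square roots, so the continuation comes back as −πi/√t — the "travel from 1/√t to −1/√t" (up to the constant πi) described above, with a square-root branch point at the pinch t = 0.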
Yeah, so the precise conditions are maybe not so important for this talk, but Landau's equations state the following. For now, let's attach a formal variable to each internal edge of the graph. The first condition simply states that some of these propagators produce poles — some D i vanish. The second one is a formulation of the pinching condition: these singular hypersurfaces have to meet in such a way that I can't deform my integration contour, and this translates into the requirement that, for each loop j, with j running from 1 to L, the sum over the edges i in that loop of x i times q i has to vanish. Then you can solve this set of equations, and you get some subset, or variety, in the space of external momenta. A solution where all the D i vanish is called a leading singularity, and the others are referred to as reduced singularities. Whenever we set one of these x i to zero, we can think of the graph where the edge with label i is collapsed to a point; these are the reduced diagrams of G, and their singularities are the reduced singularities of G, or rather of the function I G.

The idea is that if we knew where these singularities are, and if we knew what kind of behavior I G has around these singularities, then in principle we could reconstruct the whole function from this data — the magic word here is Hilbert transform. It's certainly debatable how doable this is for very complicated expressions, but it's also important from a practical point of view, for numerical calculations and the things real physicists do in their real-life applications.

Okay, so Landau tells us where the singularities are. Cutkosky, also in the same year, formulated his theorem — or conjecture, depending on the point of view — that we can compute the discontinuity associated to such a Landau singularity by doing the kind of Feynman integral that I have written here. If D 1 up to D k vanish, so I have these propagators producing poles, then I get the discontinuity by computing this integral, and Cutkosky tells me: you leave all the other propagators as they are, and the ones that produce poles you evaluate by taking the residue at their polar hypersurfaces, which is expressed here by these delta plus operators, so to speak. For today the precise form is not so important, but I put the definition down here: it takes a sort of residue with respect to the positive-energy part of the propagator.

So the message here is: there is a formula, it's not proven, and we would like to prove it. There is an unfinished proof, at least as far as I understand — maybe Spencer can correct me on that — in this draft that I mentioned by Dirk and Spencer; I think some details are missing and it's also not covering the most general cases, but as far as I know it's also the only really modern account of this problem for the class of Feynman integrals, so if you're interested in that, I advise you to have a look at this paper.
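For orientation, here is the textbook shape of both statements just discussed — a hedged summary on my part, since prefactors, iε conventions, and the precise domain of validity differ between references:

```latex
\text{Landau:}\quad x_i\,(q_i^{2}-m_i^{2})=0\ \ \text{for every edge } i,
\qquad
\sum_{i\in\text{loop } j} x_i\,q_i=0\ \ \text{for every loop } j=1,\dots,L;
\\[4pt]
\text{Cutkosky:}\quad
\operatorname{Disc} I_G\;\propto\;\int\prod_{j=1}^{L} d^{D}k_j\;
\prod_{i=1}^{k}\delta_{+}\!\big(q_i^{2}-m_i^{2}\big)\;
\prod_{e>k}\frac{1}{q_e^{2}-m_e^{2}+i\varepsilon},
\qquad
\delta_{+}(q^{2}-m^{2})=\theta(q^{0})\,\delta(q^{2}-m^{2}).
```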
So apparently it's hard to prove, so maybe we could look for an alternative approach. One idea that has also been around in this whole amplituhedron business is to think about regrouping the Feynman integrals, or re-expressing them — finding some other way of expressing the amplitude we are eventually interested in — and then starting from there, studying the singularities of that expression; maybe there is even some cancellation of singularities, which would make our life easier.

In that regard, here's a theorem from the paper with Dirk. It says that I can write the integral I G — and renormalization can be covered here, but I don't want to comment in this talk on how to do it, maybe a few words later — as a sum over the spanning trees of G: a sum of integrals which now depend on G and the spanning tree T. It's a sort of Cutkosky integral: for the edges in T, I do nothing, I just leave the propagators as they are, and the propagators not in the spanning tree I put on the mass shell by this delta plus operation that I introduced before. So one says a cut integral, or one cuts the edges. Here's an example in terms of diagrams: here's a nice graph with two loops, it has five spanning trees, as you may check for yourself, so I have to sum these five integrals, and the edges not in the spanning tree I depicted with this red slash, which means they are cut, or put on the mass shell, by these delta plus operations.

If you want to see it, the formula for one-loop graphs is rather straightforward: you basically do one integration using the residue theorem, and you get precisely this cut formula. One has to be a bit careful with the i epsilon prescriptions, but in the end it works out. Going from one loop to higher loops, as here, is quite nice, because you somehow want to write your integral as an iterated integral of one-loop integrals, and the proof then goes carefully over the expression of these integrals. What is doing this for you is the iterated application of the coproduct of the Hopf algebra, which disassembles your graph into pieces of one-loop graphs; you put these into this iterated-integral procedure, do what I did here for the one-loop example iteratively, and you get this formula, I G equals the sum over T of I G of T.

Maybe it's clear, but just to remark: we have a trade-off here. I have one integral on the left, and on the right I have more integrals, but they have fewer integrations, because this delta plus takes care of as many integrations as there are edges not in the spanning tree T. So I have more integrals, but maybe simpler ones, and one could then even try to repeat this whole Landau and Cutkosky business for these guys here. A priori, if you try it straightforwardly, it's not so easy, because there are now some linear parts in the poles, but maybe with some clever coordinates one can make progress even here — so there is maybe future work and future stuff to exploit here.

Okay, now I want to embed the theorem into a new picture. So let's go to outer space. Here's a nice picture of Dirk in front of outer space — this guy here is outer space in rank two.
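To see the combinatorial content of the spanning-tree sum, here is a brute-force enumeration for a hypothetical two-loop multigraph — a triangle with one doubled edge, which happens to have exactly five spanning trees; I am guessing that this (or something like it) is the graph in the figure, so treat the example as illustrative only.

```python
from itertools import combinations

# Edges are (label, endpoint, endpoint); a spanning tree uses |V| - 1 of them.

def spanning_trees(vertices, edges):
    trees = []
    for subset in combinations(edges, len(vertices) - 1):
        parent = {v: v for v in vertices}                 # union-find
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for _, u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:                                       # |V|-1 acyclic edges = spanning tree
            trees.append(subset)
    return trees

V = [1, 2, 3]
E = [("a", 1, 2), ("b", 1, 2), ("c", 2, 3), ("d", 1, 3)]   # doubled edge a, b
for T in spanning_trees(V, E):
    kept = {lbl for lbl, _, _ in T}
    cut = [lbl for lbl, _, _ in E if lbl not in kept]
    print("tree:", sorted(kept), " cut edges (put on shell):", cut)
# prints five terms; in each one the cut edges are the ones carrying delta_+.
```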
The way to do this — one way to embed Feynman integrals and these ideas into outer space — is the parametric representation. Karen has already told us this morning what outer space and the moduli space of graphs are, and then Francis made this even more precise, but I'll still just repeat it here. The basic idea is that you have these parametric Feynman rules: now the integration domain is this positive piece of projective space, and we associate to each edge of G a variable x e, or x i. This sigma G is just homeomorphic to an (n minus 1)-dimensional simplex, and the integrand, after these transformations, is, as you all know, expressed in terms of powers of graph polynomials. And just to make the point here: all the data that appeared in the momentum space representation of course reappears — the dimension D, the number of edges, the number of loops — and the kinematics, so the masses and the momenta, are absorbed in this second polynomial here, the second Symanzik polynomial.

I don't want to give the precise formulas for these graph polynomials, but they have some crucial identities, which translate into identities for this form omega G. Francis also mentioned this before. If I set one of the edge variables to zero, this describes a boundary face of the simplex sigma G, and I can think of this boundary piece as the simplex associated to the graph G with the edge e collapsed. The identity here says that if I restrict omega G to that boundary, it's actually the same as the form associated to the graph that represents the boundary — except if you have a tadpole, then this does not work, but let's not worry about this. The other identity, I guess also well known to everybody here, says that I can of course repeat this procedure, but when my edges form a subgraph which is divergent — a condition on its topology — then omega G has a pole. The nice thing about this pole is that its residue has a product form: omega of the divergent subgraph times omega of the cograph, G with gamma contracted. We've already seen this a couple of times — that's one piece of the renormalization Hopf algebra coproduct. And, maybe this will become clear later, but for those who know: a similar identity holds for the blow-up of the cell, or its compactification, which Francis also briefly described.

So why are these identities crucial? The first, as I want to show later on, actually allows one to compare Feynman integrals of different graphs, or more precisely to compare the singularities associated to these functions. And the second one is the starting point of Connes-Kreimer renormalization.

So let's think about formulating an amplitude — not only a single Feynman integral, but, say, all the Feynman integrals that contribute to an amplitude for a fixed number of loops and legs. Then we could try to embed this integration procedure into a whole space. For that, let us fix the masses once and for all to a finite set; we don't want to change the masses. We think of these I G's as a family of functions: the graph determines the shape of the function, and these m i's are certain constants that appear in it. And possibly we could even have further restrictions on this coloring map, so to speak.
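For reference, the parametric form and the polynomial identities underlying the statements above read, in one common normalization (unit edge weights; prefactors and sign conventions vary, so this is a hedged summary rather than the talk's exact formulas):

```latex
I_G\;\propto\;\Gamma\!\Big(E_G-\tfrac{D L_G}{2}\Big)
\int_{\sigma_G}\Omega_G\;
\frac{\mathcal U_G^{\,E_G-\frac{D}{2}(L_G+1)}}{\mathcal F_G^{\,E_G-\frac{D}{2}L_G}},
\qquad
\mathcal U_G\big|_{x_e=0}=\mathcal U_{G/e},\quad
\mathcal F_G\big|_{x_e=0}=\mathcal F_{G/e}\quad(e\ \text{not a self-loop}),
```

where 𝒰_G and ℱ_G are the first and second Symanzik polynomials, E_G and L_G the numbers of edges and loops, σ_G the simplex of the graph, and Ω_G the standard projective volume form. Rescaling the variables of a subgraph γ by λ gives 𝒰_G = λ^{L_γ}(𝒰_γ 𝒰_{G/γ} + O(λ)), which is the algebraic source of the restriction and residue identities for ω_G quoted above.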
Think of quantum electrodynamics, where only certain vertices, with certain combinations of colored and even directed edges, are allowed; in principle we can model all of this with a coloring and/or direction of the edges. Then what we do, as also described by Francis and Karen, is build a topological space out of these data. We take the integration domains, the interior points of one simplex for each graph G, where this capital G runs over the set of all 1PI Feynman diagrams, all vertices at least trivalent, and C determines how the edges can be colored. We take the disjoint union of all these open simplices and identify them by the relation induced by edge collapsing: if I have a big graph G and a smaller graph H, and I get from G to H by collapsing a forest in G, then I identify the corresponding face of sigma_G with the simplex associated to H. So whenever two graphs are related by a forest collapse, I identify a face of the larger simplex with the simplex of the smaller graph. And then, because we have colors around, we also identify points related by graph isomorphisms, even colored isomorphisms, and this is actually what makes the space rather interesting from a topological point of view. Here's an example. The notation here, this "inj", asks for the coloring maps to be injective; this is just a toy-model case where we want all the masses to be different. If I have a one-loop graph with three legs, I need three colors to color the legs differently, and I can build this space out of these data. There are six such graphs where all vertices are trivalent. If I collapse the blue edge here, I walk into this face; if I shrink the length of the black edge, I walk into that face, where the black edge now has zero length, and by the procedure I described, this boundary of the simplex gets identified with the simplex associated to this graph here. Then I can slowly re-insert, blow up, the black edge again, but I can also interchange the positions of two and three, and this leads me into another simplex, another cell, over here. And what you get if you walk around and check this out is actually a torus. On the other hand, if you forget all the colors, then first of all the points in the interior of each simplex are related to the corresponding points in the other simplices, so these two simplices get squished, folded onto each other; and these one-dimensional cells here also get folded in half, because if there are no different colors on these edges, I can't distinguish whether, so to speak, the upper edge or the lower edge gets smaller: there is an automorphism flipping the edges, so that's the same point. In that case, without colors, you just get a sphere as the space. And if you like these things, you can try to figure out the case with two colors; it's quite a fun exercise. At more than one loop something interesting happens, namely that I now have missing cells in my space. Here's the cell sigma associated to this graph on four edges. I can shrink the length of any edge to zero and end up in these two-simplices at the boundary.
I can also shrink two edges, and I get to these one-cells here, except for the edges numbered three and four, because that would drop the loop number, and I'm not allowed to do that in this space. So everything that's red in this picture is at infinity: it's deleted, missing from the space. If you are looking for a compact space, or want to study renormalization, you can do the following: you truncate this semi-open cell before infinity, cut off this piece here. In this case it would look like this, with the new faces depicted in orange. And again we find the Hopf algebra structure in the description of these new faces: the cell associated to gamma, which is this graph on two edges, is just a line, a one-simplex, and the same for the cograph, and that describes the square I have inserted into the space. The upshot of all of this is that, recalling our form of the Feynman integrand, the integration domain of I_G can be viewed as a cell in this moduli space of graphs. The integrand you can think of as a sort of volume, let's say a Feynman volume, of this cell; more technically, it's a compactly supported distribution density on this space, where "compactly supported" alludes to the fact that I can only integrate without going to infinity. To regularize this, I use the compactification procedure, which is a sort of Borel-Serre compactification, and if I do this I have this nice structure at the faces at infinity, the new boundary faces, so it's a natural setting to study renormalization. Whether or not I worry about renormalization, the other point I want to introduce is that I can now think of amplitudes as sort of semi-discrete volumes of these spaces. Semi-discrete means the following: if graphs with different numbers of edges participate in my amplitude, they belong to pieces of the space of different dimension, because the dimension of a cell is the number of edges minus one. So to calculate the amplitude I would need to sum the k-dimensional Feynman volumes of the k-cells in the space, plus the (k-1)-dimensional volumes of the (k-1)-cells, and so on. In some sense this is a finite-dimensional picture of what Francis just described: we cut off everything appearing above a finite dimension, determined by the number of loops and legs, and, in contrast to what he did, we only look at one form. This omega_G is a collection of forms on the space, but it is dictated to us by the Feynman rules.
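In formulas, and only schematically (the notation is mine, and symmetry factors as well as the renormalization just mentioned are suppressed), the space and the "semi-discrete Feynman volume" read:
\[
MG^{c}_{L,s}\;=\;\Bigl(\bigsqcup_{G}\mathring\sigma_G\Bigr)\Big/\sim,\qquad \dim\sigma_G=|E_G|-1,
\]
where G runs over colored 1PI graphs with L loops, s legs and all internal vertices at least trivalent, a face \(\{x_e=0:e\in F\}\) of \(\sigma_G\) is glued to \(\sigma_{G/F}\) for every forest \(F\subset G\), and points related by colored graph isomorphisms are identified; and
\[
A_{L,s}\;\text{``=''}\;\sum_{G}\int_{\sigma_G}\omega_G\;=\;\sum_{k}\ \sum_{G:\,\dim\sigma_G=k}\operatorname{vol}_{\omega}(\sigma_G).
\]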
Apart from being a nice background to study Feynman amplitudes, this space is also interesting in its own right, just as a topological space. Together with Max Mühlbauer we studied it for a bit, following the vision of Dirk, who told us to study the space, play around with Feynman integrals and see if we could find something. What we did find out is that if all edges have to be colored differently and we look at one-loop graphs with s legs, then we can calculate the integral homology in that dimension: it is given by Z to the power (s-1)! over 2. If we do the same calculation for the rational homology in the same dimension but allow arbitrary colorings, it is much more difficult, and what we got was a polynomial bound on the Betti numbers: they grow with the number of colors as a polynomial whose degree is the number of legs. The third point is that all the lower Betti numbers are actually independent of the number of colors, which I find quite remarkable; although, to be honest, this is a bit of a stretch, since we don't have a full proof of this statement. We have a geometric proof that's not quite working and an algebraic proof which we maybe don't fully understand, but since this is a physics conference, let's accept it as a theorem for now. Another interesting fact is that there are plenty of maps between these moduli spaces for various numbers of colors, legs and loops, given naturally by changing the number of colors, permuting colors, forgetting colors, chopping off legs, adding legs, gluing legs together, gluing graphs along their legs, or inserting graphs; a whole zoo of stuff to explore. In the uncolored case this has already been done in the study of the automorphism groups of free groups: the uncolored moduli space of graphs with L loops and s legs computes the rational homology of the groups Gamma_{L,s}. For s equal to zero this is the outer automorphism group of the free group on L generators; with one leg it's the full automorphism group of the free group on L generators; the sequence continues, but I think these higher groups don't have such classical interpretations. Anyway, let's go back to physics and review our loop-tree duality theorem, because there are similar theorems in the literature about rewriting Feynman integrals as sums of integrals indexed by trees, or cut integrals, and these all go under the slogan of loop-tree duality. Let me just describe, somewhat abstractly, a nice reformulation; to make this precise and work it out in detail is actually quite hard, as everyone who has played around with parametric integrals probably knows. The fact is, and Ralph yesterday quite remarkably produced this out of his Feynman categories, that there is a deformation retract sitting inside this moduli space of graphs. This is particularly nice because our space has these points at infinity, these missing cells. Here's an example of a two-loop graph where these three corner points are missing, because they correspond to the metric, the edge lengths, vanishing on two edges. To get something more finite, something we can handle better, there is a procedure of deformation-retracting onto a simplicial complex that lives inside this moduli space, and in fact these simplices group together to form cubes. So we have a nice cubical complex sitting inside this moduli space of graphs, which is of much lower dimension and has nicer structure. The point I want to quickly remark on is that you can think of the integral as a sort of fiber integration: you view the cell as a fibration over this cubical complex that sits inside of it, and then the theorem can simply be thought of as doing the fiber integrations, after which you're left with an integral only over this one-dimensional subspace here.
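Schematically, and keeping in mind that the choice of projection is not canonical (as the next remark explains), the fiber-integration picture is
\[
I_G=\int_{\sigma_G}\omega_G=\int_{K_G}\pi_{*}\,\omega_G ,
\]
where \(K_G\) denotes the part of the cubical spine meeting the cell \(\sigma_G\), \(\pi:\sigma_G\to K_G\) is some choice of projection realizing the cell as a fibration over it, and \(\pi_*\) is integration along the fibers.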
And there is obviously more than one way to describe this fibration, how to put the fibers, so to speak, and this could explain the different formulations of loop-tree duality that are around. One last remark: depending on how you describe your fibration, how you put the fibers, different fibers might run into these points at infinity, so you then have to adapt your renormalization program to how you fiber-integrate. Let me check the time; how much time do I have left? Well, I mean, 25 would be ideal for the next talk. Okay, I can manage. So, the last point. Let's consider a theory with only cubic interactions, which means all graphs participating in the amplitude are trivalent; in the geometric picture I just described, this means I am integrating over the highest-dimensional cells in my space, because those are represented by the graphs where all vertices are three-regular. You might say: then I can just forget all the other stuff in lower dimensions, I don't need to worry about it. But my claim is that the lower-dimensional cells carry some information, and this information is related to the singularities of the Feynman integrals of the high-dimensional cells. The way to extract this information is the incidence relation in this cell structure. One way to put it: take a subgraph gamma of G and map it to the Landau variety associated to the reduced graph, where I collapse everything in gamma to a point, or to its individual components if gamma is not connected. Without going into too much detail, you could just think about the poles of the integrand, but you can also go one step further, to the solutions of the Landau equations associated to the points where the x_i vanish, for i in some subset disjoint from the edges of gamma; so I forget the poles associated to gamma. I take the union of these, and then there is an obvious partial order; on the left I have to order by reverse inclusion, but that's just a technicality. So I have this poset structure of singularities, and it tells me, in some way, if and where my Feynman integrals actually have singularities in common. In terms of the space: I have a cell here and a cell there, and if some collapse of edges leads into a lower-dimensional stratum shared by both, then they have all the singularities living in this lower-dimensional stratum and below in common. Then you might ask: how do I probe this, can I somehow study this incidence relation? There's more than one way to do this: studying the Hasse diagram of this poset, or even trying to look at the incidence algebras that one can associate to posets.
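As an abstract illustration only (the graph names and "singularity" labels below are made up; in practice the sets would come from solving Landau equations), here is a minimal sketch of extracting the Hasse diagram of such a poset under reverse inclusion:

    # Toy poset of singular sets attached to (reduced) graphs, ordered by reverse
    # inclusion: a is below b when the singular set of b sits inside that of a.
    sing = {
        "G":        {"s1", "s2", "s3", "s4"},
        "G/gamma1": {"s1", "s2"},
        "G/gamma2": {"s2", "s3"},
        "G/gamma3": {"s2"},
    }

    def less(a, b):
        """Strict order: the set attached to b is a proper subset of that of a."""
        return sing[b] < sing[a]

    # Covering relations of the Hasse diagram: a < b with nothing strictly in between.
    cover = [(a, b) for a in sing for b in sing
             if less(a, b) and not any(less(a, c) and less(c, b) for c in sing)]
    print(sorted(cover))
    # [('G', 'G/gamma1'), ('G', 'G/gamma2'), ('G/gamma1', 'G/gamma3'), ('G/gamma2', 'G/gamma3')]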
But in this whole outer-space graph complex business, what we tried to do, following this idea, was to ask: is there a simple graph complex that tells us something about physics? So here's a baby graph complex, if you compare it to the one Francis showed us before. What you do is take just Z/2 coefficients: we take the free Z/2-module generated by all the graphs that form the cells in my moduli space; again, c is the number of colors, L the number of loops, s the number of legs. I grade them by some convention; let's take the number of edges minus one, to stay with the dimensional picture I had in the moduli space. And I define a differential exactly as Francis did: I collapse e, and if e is a tadpole I set the term to zero. Here c is the coloring, which I carried around implicitly all the time, and c/e just says that if e is collapsed, you forget the datum of the color on e as well. Why is this so simple? Because if you want to study graph complexes and see how they interact with Feynman integrals, the signs you have to introduce to make the differential work, say over Q, and the symmetries we saw (if a multi-edge appears, then this guy is zero, and so on) are very hard to make sense of in the Feynman-integral setting. These Z/2 coefficients get rid of all these problems. So the idea was to just play around and see what you can get out of this. And the message is that if I take the top-rank homology, the homology related to the graphs where all vertices are trivalent, then it gives me a sort of partition of the set of graphs in the amplitude, and it is a nice partition. I could wrap many words around this, but let's take a simple example: again one loop, basically the only example that fits on the slide, and just two masses. Then you can compute and see that this guy is closed, meaning the differential applied to this combination vanishes, and you get another class by simply exchanging m1 and m2. And, according to the rules for forming the amplitude, these are all the graphs from which you can build a one-loop three-leg amplitude. If you solve the Landau equations, you get this set of reduced singularities; I think for one-loop graphs you could also take the full set rather than only the reduced one, but let's stick with this. So the point I'm making is that the full amplitude can now be written as one function of the momenta p1 up to p3 plus a second one. These two functions are given by summing the Feynman integrals over these homology classes, and they have this obvious symmetry: permuting m1 and m2 translates the classes into each other, as well as their singularities. In this case that's a really simple statement, but what we can show is that in the toy-model setting where all edges are colored differently, this property holds for all one-loop graphs and every number of legs. How do we show this? Because we know the topology of the moduli space, and here there is a direct interpretation of this graph complex as a chain complex associated to that topological space. For arbitrary colorings it's not so clear; there are obstacles. In some cases it computes the relative homology of the moduli spaces I introduced, but there are difficulties. And just to finish: for higher loops we don't know anything about the homology, so there is still a lot to understand. I have some things I calculated by hand, but they only work for zero and one legs, so exactly the cases that are not so interesting for physics.
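A minimal sketch of the differential just defined, on labeled multigraphs only, i.e. ignoring the identification of graphs up to colored isomorphism that the actual complex uses, and with made-up edge colors. It checks mechanically that contracting non-tadpole edges squares to zero over Z/2:

    from collections import Counter

    def contract(graph, i):
        """Contract edge i of a labeled multigraph; edges are (u, v, color) triples.
        Returns None if edge i is a tadpole (that term is set to zero)."""
        u, v, _ = graph[i]
        if u == v:
            return None
        a, b = min(u, v), max(u, v)   # merge the larger vertex label into the smaller one
        new_edges = []
        for j, (x, y, c) in enumerate(graph):
            if j == i:
                continue
            x = a if x == b else x
            y = a if y == b else y
            new_edges.append((min(x, y), max(x, y), c))
        return tuple(sorted(new_edges))

    def d(chain):
        """Z/2 differential: d(G) = sum over non-tadpole edges e of G/e."""
        out = Counter()
        for graph, coeff in chain.items():
            if coeff % 2:
                for i in range(len(graph)):
                    g = contract(graph, i)
                    if g is not None:
                        out[g] += 1
        return Counter({g: 1 for g, c in out.items() if c % 2})

    # Example: a labeled triangle with one doubled edge (two loops, four colored edges).
    G = tuple(sorted([(0, 1, "a"), (0, 1, "b"), (1, 2, "c"), (0, 2, "d")]))
    dG = d(Counter({G: 1}))
    print(len(dG), d(dG))   # prints "4 Counter()": four terms in d(G), and d(d(G)) = 0 mod 2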
And so we might have to put this on a computer, or we switch to the cubical world, because, as you've seen, that is a much lower-dimensional space with a simplicial complex structure; that might make it easier to make sense of this comparison between the graph complex and the topology of the space, and in turn to deduce something about shared singularities, the incidence relation between Feynman integrals. And of course Dirk has always preached to us to consider the cubical chain complex, not the simplicial one, so this is in complete agreement with his prophecy. Thank you for listening, and once again, happy birthday, dear Dirk. Thank you, Marko. Amazing graphic skills, I must say. Yeah, thank you. And sorry, Dirk, for the funny pictures, but I think they are pieces of art. Can I make one quick comment about my paper with Dirk? This is totally my fault. The methods there, say the Thom isotopy theorem and also the work of Pham, just don't tell you the answer in situations where the zeros of the propagators intersect in complicated ways. So it's not clear where the boundaries are, how far the methods in that paper go, but they certainly don't cover all the complicated possibilities. So I'm sorry: early on I communicated to Dirk more optimism than was warranted. The methods in that paper are there, and they are interesting. I suppose you should talk to Max Mühlbauer then, because he's also been working on understanding this geometric situation and the Landau equations. Yeah, well, the trouble is, when you get old and you've worked on many things in your career, it's hard to get everything back into your head at the same time. Anyway, I did want to set the record straight; I think there are interesting things in the paper, but it doesn't go all the way. I think the problem was with Pham, in a way: in the papers he wrote he never did the physics calculation at the end, with the i-epsilons, and that's where things get a bit more complicated. Well, that is one thing that the paper does do. Pham doesn't consider the issue of signs that is important for the physics; it really matters that it's a Minkowski metric and not a positive-definite metric. That we do do. But the issue of how far you can push Cutkosky is open, as far as I can tell. Thank you. Are there further questions for Marko at this stage? I was just wondering, on a very general level: you have presented a formula for the Feynman integral in terms of cuts, and then there is this whole idea of Landau singularities and of expressing a monodromy or a discontinuity in terms of something with cuts, and I'm somehow getting confused in my head. We have representations with these delta-pluses also for the actual integral. So is this just reflecting relations in the homology of the integration cycles, that we have many different ways to get the same integrals and that we can play around? No. I mean, the appearance of the delta-pluses in Cutkosky's formula is maybe more mysterious. But this, in the loop-tree duality, is simply the residue theorem applied to the energy integration. That's actually a remark in Itzykson and Zuber already, and it's even older than that in a way, because you could do all the energy integrals, one for every loop, just by contour integration.
And as Itzykson and Zuber observe, that's the same as doing them with the delta-plus, because the contour integration picks up the pole and the delta-plus concentrates on the pole. And actually the positive-energy part works out by itself, just because the integrand, when you do it for the first time, is even under the exchange of k0 with minus k0. So it's a complete accident that this works, but it's a very useful accident. And because we are only cutting one edge per loop, we are never cutting the graph into two parts: it stays connected by the remaining edges, because the complement of the cut edges is precisely the spanning tree. So it's not yet Cutkosky. So somehow it seems that the people in the 60s all knew these things and we're just rediscovering them. Yeah, what makes me a bit hesitant is that there are only so many homology cycles you can write down with delta-pluses: on each edge you can put a delta-plus or not. And if I think of a very complicated graph, I have no reason to believe that the homology should be so small that I can really grab every cycle that I might need to compute a monodromy. But let's discuss this individually. Thanks again, Marko, for this nice talk. Ladies and gentlemen!
I will report on joint work with Dirk on his vision of exploring quantum fields in outer space. Our expeditions have so far uncovered an exciting wonderland of algebraic, geometric, and topological relations in a magical galaxy, populated by Feynman graphs. Its inhabitants seem to play a mysterious game of hide-and-seek, hopfing around singularities of various kinds, all ruled by the mighty King Cutkosky.
10.5446/51274 (DOI)
Thanks a lot for the invitation. It's too bad we can't be there in person, but that's life this year, I guess; I hope it's going to be better next year. As you maybe know, I'm really not an algebraist at all, I'm more of a probabilist slash analyst, but in the course of my research, the type of analysis and probability that I do has led naturally to the type of algebraic structures that you see in perturbative quantum field theory, although the context is slightly different. So in the first half of my talk I want to show you in what sort of context these structures show up in probability theory, and for this I want to focus on one example. One way to link quantum field theory and probability theory is the procedure of stochastic quantization. The basic idea, which was originally introduced by Parisi and Wu back in the 80s, is the following: you want to build a Euclidean quantum field theory, which would formally be described by a measure of this type, where this d-phi would be the Lebesgue measure on the space of fields, which doesn't really exist, and this S would be some action functional. If everything were finite dimensional (of course your field configurations don't belong to a finite-dimensional space, but suspend this disbelief for a second and pretend they do), then you can write down a stochastic evolution equation which is essentially a gradient flow: if you just divide by dt on both sides here, this is a standard gradient flow. So you introduce a time, which has nothing to do with the time of your quantum field theory; it's a purely algorithmic kind of time, and you take a gradient flow, so this term tries to minimize the action, but then you add an additional noise term. The dynamic tries to minimize S but keeps being kicked around by this noise. Here this W you should think of as a Brownian motion, which means that the dW by dt that formally shows up on the right-hand side should be thought of as white noise; you can think of white noise as kind of independent random variables at every instant of time, so it's as random as possible, in a way. Of course, if you have a gradient flow, you need to give yourself a metric on the tangent space of your configuration space, because the differential takes values in the cotangent space and you want to turn it into something in the tangent space to get an evolution, so you need to fix a metric. The important thing is that the metric you use to define your gradient should be the same as the one that determines the covariance of that Brownian motion: if you think about it, the covariance, in terms of its tensorial behavior, behaves like the inverse of a metric, so you take the inverse of your metric as the covariance of this Brownian motion. And then, in finite dimensions, it's a very simple, elementary theorem that you learn in introductory courses on stochastic analysis: if you take this evolution and start with an initial condition distributed according to this measure, assuming everything is finite dimensional, S grows at infinity in such a way that you can normalize this, and everything is smooth enough, then the solution to this equation leaves this measure invariant. If you start with an initial condition that has that distribution, then at all subsequent times the solution has the same distribution; and furthermore, if you start with a basically arbitrary initial condition and look at the solution after a long time, the law of the solution actually converges to this measure.
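For the record, in the flat finite-dimensional case and with one common normalization of the noise (the factor of 2 below is a convention, not taken from the talk), the invariance is a one-line Fokker-Planck computation:
\[
dX_t=-\nabla S(X_t)\,dt+\sqrt{2}\,dW_t
\quad\Longrightarrow\quad
\partial_t\rho=\nabla\!\cdot\!\big(\rho\,\nabla S+\nabla\rho\big),
\]
and for \(\rho\propto e^{-S}\) one has \(\nabla\rho=-\rho\,\nabla S\), so the probability flux vanishes and \(\rho\) is stationary.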
And so one idea is to say: one way of building this measure is to go backwards, to build the dynamic first, then show that the dynamic has an invariant measure, and that invariant measure is the measure you are after. The reason you would want to do this is the hope that the divergences that show up in quantum field theory, and all the problems you encounter when trying to pass to the limit from some discrete approximation of this measure, might actually be somewhat easier for the dynamic. The reason is that when you make sense of a dynamic, you do so for a very short time first and then try to extend it to longer times, so you automatically have a small parameter, your small time step, without needing a small parameter in the measure itself; you don't need to do a perturbation in beta here. Your small parameter is not beta, it is the time step of the dynamic, so it should be easier, because a small parameter comes in for free even if there is no small parameter in the measure you're trying to build. Now, Parisi and Wu had that idea in the 80s, but it took quite some time for it to bear fruit, essentially because the theory of stochastic PDEs that you would need in order to build these dynamics wasn't sufficiently developed at the time and took a long time to catch up. The specific example I want to focus on for today's lecture is the 1D sigma model, where your fields are just loops with values in a Riemannian manifold. This is an interesting example because, since the target space is not linear but a Riemannian manifold, there is no Gaussian reference measure, if you want. In this case your action functional is just the usual Dirichlet energy: your field configurations are loops in a manifold, curves from the circle into some Riemannian manifold, and the energy of a curve is the usual Dirichlet energy. The curve comes with a parametrization, because I really view it as a map from the circle into the manifold; you run along the curve, take the tangent vector at every point, stick it into the metric at that point, and integrate along the curve. The minimizers of this are closed geodesics. You can actually just stick this into a computer: you can formally write down the corresponding Langevin dynamic, discretize it in some brutal way, put it on a computer and see what you get.
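A minimal sketch of such a brute-force discretization, with the two-sphere as target: N points on a closed curve, the discrete Laplacian as drift, white-noise kicks, and a projection back onto the sphere after every step. This ignores the curvature and Ito-correction terms entirely and is only meant to reproduce the qualitative wriggling seen in the movie, not the renormalized dynamics discussed later; the step sizes are arbitrary choices of mine.

    import numpy as np

    rng = np.random.default_rng(1)
    N, dt, n_steps = 200, 1e-4, 10_000
    dx = 2 * np.pi / N                       # grid spacing along the loop parameter
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
    u = np.stack([np.cos(theta), np.sin(theta), np.zeros(N)], axis=1)   # start on the equator

    for _ in range(n_steps):
        lap = np.roll(u, 1, axis=0) - 2 * u + np.roll(u, -1, axis=0)    # discrete u_xx
        noise = np.sqrt(dt / dx) * rng.standard_normal(u.shape)         # crude space-time white noise
        u = u + (dt / dx**2) * lap + noise
        u /= np.linalg.norm(u, axis=1, keepdims=True)                   # project back onto the sphere

    print(u[:3])   # a few points of the final, rather rough, loop configuration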
So I can show you a little movie of what this looks like. The target manifold here is just the two-sphere, and you see this curve wriggling around on it. This is the type of dynamic we are interested in constructing. Let me go back to the talk. Now, in this particular example one actually knows how to build the measure, in the sense that there is at least a natural candidate for it: the Brownian loop measure. You take the diffusion that has the Laplace-Beltrami operator as generator on your Riemannian manifold and condition it on returning to its starting point after a fixed time; that gives you a measure on loops. And it has been known for some time that, at least in a formal way, this measure can be written precisely as a Gibbs measure like this, with the Dirichlet energy showing up, except that there is an additional term involving the scalar curvature of the manifold integrated along the loop. Here is an interesting thing: if you look at the physics literature from the late 70s and early 80s, where people derive these kinds of results, you see that, depending on which papers you look at, you get different values for this constant c; a whole bunch of different values show up in the literature. They show up essentially because there is an ambiguity in how you interpret this Lebesgue measure here, which again doesn't really exist. Now you can write down the gradient dynamic for this Gibbs measure, and if you do, you get the following equation, some sort of nonlinear heat equation. Now u is a time evolution with values in that loop space, so it's a function of two variables: there is time, and there is still space, the x, which is the parameter of your loop; x takes values in the circle and t in the positive reals. You get the covariant derivative of d_x u in the direction of d_x u, which is a type of heat equation; then you have this gradient of the scalar curvature showing up, which comes from that term; and then you have a noise, and in front of the noise, instead of a constant, the natural thing to have is the square root of the metric. The reason is that the natural gradient, the one for which you get a nice expression like this, is the intrinsic gradient in the tangent space of your manifold, so the natural metric is really the metric of your manifold at every point. You can write this in local coordinates and you get some kind of horrible-looking PDE; the details don't really matter. You have the Christoffel symbols showing up here, and one way to take the square root of the metric is to choose a bunch of vector fields, which I call sigma_i, that generate the metric in the sense that the sum of sigma_i tensor sigma_i gives you the inverse metric tensor. And so, well, you get this stochastic PDE.
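Written out schematically in local coordinates (the notation is a reconstruction, not copied from the slides, and the constant in front of the curvature term is exactly the ambiguity discussed above), the equation has the form
\[
\partial_t u^\alpha=\partial_x^2 u^\alpha+\Gamma^\alpha_{\beta\gamma}(u)\,\partial_x u^\beta\,\partial_x u^\gamma+c\,\nabla^{\alpha}R(u)+\sum_i\sigma_i^\alpha(u)\,\xi^i ,
\qquad \sum_i\sigma_i\otimes\sigma_i=g^{-1},
\]
with the \(\xi^i\) independent space-time white noises and R the scalar curvature.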
Now, one thing you see is that even though my field consists of perfectly continuous functions (these loops are continuous as functions of their parametrization; if you remember the little movie, they are continuous but actually not very smooth at all), what you can show is that the typical regularity of x maps to u(x) is Hölder alpha only for alpha less than one half. So it's basically Hölder continuous of order almost one half; they're pretty irregular. That means this stochastic PDE does not a priori have an intrinsic meaning, because you have these nonlinear terms involving the derivative of the solution, and the solution is not differentiable: even though u is a continuous function, the space derivative of u is a distribution. So you have a product of distributions, also multiplied by a quite irregular function, and you run into the same type of problems that typically show up in quantum field theory, where there is no canonical way of multiplying distributions. In this context, for this type of stochastic PDE (not just this particular equation, but a large class of equations of this type satisfying a power-counting condition; essentially, the condition says the equation should have only finitely many elementary divergences, so it's a kind of sub-criticality condition), there is now a general result, a combination of a number of works of myself with various collaborators, where we give a kind of black box. It shows that you can regularize these equations in many different ways. Here you don't have nice analytical expressions, so things like dimensional regularization don't really make sense; the natural regularization would be, for example, to replace the white noise by some smoothed-out version of white noise: white noise formally has a covariance that's a delta function, so you replace the delta function by an approximate delta function. Then you have a small parameter epsilon and you try to send epsilon to zero, the usual thing. In this case the theory tells you that you have a finite collection of symbols. These symbols are essentially the analog of Feynman diagrams; you can think of them as half Feynman diagrams, where the actual Feynman diagrams would be obtained by taking two or several of these trees and gluing the leaves together in various ways, so they are like partial Feynman diagrams. On these there is a number of interesting algebraic structures, very similar to what we heard in the previous lecture: the space of these symbols does not itself form a Hopf algebra, but it has a co-module structure for two Hopf algebras in this context. One of the two Hopf algebras encodes the renormalization, which you can view here as a form of re-centering in probability space, and the other Hopf algebra encodes a re-centering in real space, where you perform local Taylor expansions in real space, in some sense. But for each of these symbols you also have a valuation that goes with it: each symbol you can actually interpret, for this equation, as a kind of vector field.
The way this valuation works is the following. You are given the Christoffel symbols, so a connection, and this collection of vector fields sigma_i. When you see these symbols, they are basically trees with different kinds of nodes: there are the fat green nodes, which come paired up (here you have just two of them, paired; here you have four, forming two pairs, which I drew in two colors to show the pairing), and there are the small red nodes. Every pair of green nodes represents a sum over i of these sigmas, and you should think of each node, since it's a tree, as having outgoing edges representing the free indices: here you have two free indices, alpha and beta, corresponding to the two outgoing edges. The red nodes always come with two red edges, and you should think of them as representing the Christoffel symbol, which has three free indices, one upper and two lower: the two free incoming edges represent the lower indices, and a free outgoing edge at the bottom represents the upper index. So outgoing edges at the bottom represent upper indices, incoming edges represent lower indices; you can create new incoming edges by taking derivatives (taking a derivative of an expression creates an extra free lower index, corresponding to an incoming line above), and you can join lines by contracting indices. For example, if I apply this procedure to the simplest of these trees: I have these two green guys, so that represents sigma_i alpha times sigma_i beta, but one of them has an extra incoming line, meaning it carries a derivative, and that incoming line is contracted with the outgoing line of the other guy, meaning the index of the derivative equals the index of the second factor; and there is one free index left, the free index of the whole expression. So this correspondence allows you to turn each of these little pictures into a function, a multilinear expression in the sigmas, the Christoffel symbols and their derivatives, which automatically satisfies the Einstein summation convention and has exactly one free upper index left at the end. And then there is the general result. As I was mentioning, this builds on a whole series of works with Yvain Bruned, who is going to speak just after me, Ajay Chandra, who is also at Imperial, Ilya Chevyrev, who is now in Edinburgh, and Lorenzo Zambotti in Paris.
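To write out the example just described: under this valuation the simplest symbol, with one pair of green nodes and one derivative edge, becomes, in local coordinates,
\[
\tau\;\longmapsto\;\sum_i\sigma_i^{\beta}\,\partial_{\beta}\sigma_i^{\alpha},
\]
the derivative index of one factor contracted with the free index of the other, and one free upper index alpha left over. The result is an object with exactly one free upper index, although, as comes up later in the talk, such an expression need not actually transform as a vector field.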
The black box result says that you can find renormalization constants, and here I view the renormalization as an element of the free vector space generated by these symbols, so there is just one constant per symbol, such that the following holds. You take some regularization of the equation and do the usual thing, you add a counterterm; here the counterterm essentially changes the value of this vector field h, that is, you add to h some linear combination of the expressions corresponding to all 54 trees of this type that you can draw. Then there is a way of choosing these constants, again some type of Birkhoff-decomposition procedure which also allows you to compute them, so that if you take the regularized solution of the modified equation and send epsilon to zero, you get a limit, and the limit is independent of the approximation procedure. The important thing is that these are purely analytical statements, not merely formal constructions; I won't go into detail about the topology in which the limit takes place, but these really are analytical objects converging to a limit. And the limiting object is very stable under approximations, in the sense that you can approximate it in pretty much any way you want, as long as it's stationary and has some moment bounds, and you always get the same limit; the limit is also very stable as a function of the data here, so you can let these coefficients depend on epsilon as well and you still get the same limit. Now the problem you face is that you get, a priori, a 54-dimensional space of possible limits, which is not terribly canonical. So you would like to exploit symmetries in order to cut down the space of nice, admissible limits. You want to say: I have this class of equations; if this class satisfies, at a formal level, some identity, then I want the objects I build to also satisfy this identity, for all choices of gamma, sigma and h. There are two such symmetries, and there is a sort of meta-theorem one can prove. I call it a meta-theorem because for these symmetries there doesn't seem to be one good formulation that covers all possible cases you can imagine, but for all the cases we have encountered you can prove a theorem of this type, just with a slightly different proof each time. Essentially it says: if you have a symmetry, and you can approximate your equation in a way that preserves that symmetry, then there is a way of renormalizing so that the renormalized limit still satisfies the symmetry. The important point is that, in general, if you cannot find an approximation preserving the symmetry, then it may simply not be true that any of the renormalized limits satisfies all of your symmetries.
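Before turning to the example, and in notation of my own choosing (writing \(\Upsilon[\tau]\) for the vector field obtained by evaluating a symbol \(\tau\) and \(c_\varepsilon(\tau)\) for the constants), the renormalized, regularized equation described above takes the schematic form
\[
\partial_t u_\varepsilon^\alpha=\partial_x^2 u_\varepsilon^\alpha+\Gamma^\alpha_{\beta\gamma}(u_\varepsilon)\,\partial_x u_\varepsilon^\beta\,\partial_x u_\varepsilon^\gamma
+h^\alpha(u_\varepsilon)+\sum_{\tau}c_\varepsilon(\tau)\,\Upsilon[\tau]^\alpha(u_\varepsilon)
+\sum_i\sigma_i^\alpha(u_\varepsilon)\,\xi_\varepsilon^i ,
\]
the sum running over the finitely many (here 54) symbols, with the claim that the \(c_\varepsilon(\tau)\) can be chosen so that \(u_\varepsilon\) converges as \(\varepsilon\to 0\) to a limit independent of the regularization.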
And I'm going to show you an example. In this case there are two natural symmetries. The first one is changes of coordinates in the target manifold: if you perform a change of coordinates in the target manifold, then, the way I wrote things down in a coordinate system, you get a completely different equation, and what you would want is that the solution of that different equation is the same as the solution of the previous one, simply pushed forward under the diffeomorphism that gives the coordinate change. For usual stochastic differential equations there is a solution theory, called Stratonovich, which has precisely that property. In our case one can prove the corresponding theorem, and it tells you that you can impose restrictions on the renormalization procedure: instead of 54 degrees of freedom, you can cut it down to 15 if you want to impose equivariance under coordinate changes. The second symmetry is the following. Remember that, in order to take my square root of the metric, I chose a bunch of vector fields such that the sum of sigma_i tensor sigma_i is the inverse metric tensor. Of course there are lots of possible choices of these sigma_i: being given g does not at all determine them. At the formal level one can convince oneself that the law of the solution should not depend on the choice of square root of the metric; what one uses for this is something called the Itô isometry, and again, for usual stochastic differential equations there is a solution theory with the corresponding property, called Itô solutions. If in our case we prove the corresponding theorem, we can reduce the 54-dimensional space of solution theories to something 19-dimensional. Now you could say: I have these two symmetries; if you have two symmetries you can always, well not always, but in this case you can mash them together into one big symmetry, a kind of skew product of the two symmetry groups acting on the whole thing. But we don't know of a good approximation that preserves that big symmetry: we have an approximation preserving the first symmetry and one preserving the second, but they are not of the same type, and we don't know of any approximation preserving both at the same time. So the natural question is: can you have both? In the finite-dimensional case, for stochastic differential equations rather than stochastic PDEs, there is a completely analogous question, and the answer there is simply no: there is no solution theory for stochastic differential equations that has both of these symmetries simultaneously. In our case it turns out that you can actually have both at the same time. That's the theorem we obtained with Yvain Bruned, Franck Gabriel and, again, Lorenzo Zambotti: you have this 15-dimensional affine space of theories satisfying equivariance under changes of coordinates and this 19-dimensional space of theories satisfying the Itô isometry, two affine subspaces of a space of dimension 54.
Generically, since 15 plus 19 is 34, which is much less than 54 (the expected dimension of the intersection of two affine subspaces in general position would be 15 plus 19 minus 54, which is negative), you would not expect these two affine subspaces to intersect at all. But what we can show is that they do intersect, and the intersection is actually of dimension two. So you even have a natural two-parameter family of notions of solution that have all the symmetries your class of equations satisfies. There is a further, more analytical property, which I don't really want to go into, that you might want to impose as well, and we can show that you can impose it simultaneously with these two symmetries; that reduces things by one more degree of freedom. So at the end of the day you end up with a one-parameter family of very natural solution theories that, in some sense, behave as nicely as you could possibly expect. Furthermore, in the case we are actually interested in, this gamma and these vector fields sigma_i are not unrelated: gamma consists of the Christoffel symbols of the Levi-Civita connection coming from the Riemannian metric, and the sigma_i are a square root of that metric. The two are related, and it turns out they are related in such a way that, in this particular case, the one-parameter family of solution theories all coincide. So at the end of the day you have a completely canonical notion of solution. Still, there was this last degree of freedom which I eliminated and didn't spend much time on; it was of a more analytic nature rather than a geometric one, so there was no actual symmetry involved. You can still ask what the effect of changing this last parameter is, and it turns out to just add a term to the right-hand side of the equation which is proportional to the gradient of the scalar curvature. That's kind of cute, because in a way it gives a different perspective on the fact people had figured out back in the 70s: if you formally try to write down this Brownian loop measure, you want to write it like this, but you don't quite know what the constant c should be. Here are some of the different possible values of c that appear in the literature, and the way this is interpreted here is that this constant c is exactly the remaining degree of freedom in my solution theory for this stochastic PDE which is not fixed by purely symmetry considerations. Now, the main step in the proof is to show that these two spaces intersect. We have the big space S, which is essentially just the vector space generated by all these symbols.
Then we have a subspace corresponding to those linear combinations of symbols that can be written purely in terms of g rather than in terms of this square root, these sigmas; that's the subspace S_Itô. And then there is this geometric subspace, corresponding to those linear combinations of symbols that actually give you a vector field. All of these symbols give you expressions satisfying the Einstein convention with one free upper index, but such a thing is not necessarily a vector field, because the Christoffel symbols are not a tensor of type (2,1): they determine a connection, but they are not a tensor, so contracting them with other tensors in a way that respects the Einstein convention does not guarantee that you get a tensor at the end. So there is a subspace of those elements that do give you a vector field. What we can show is that if you take one of the solution theories satisfying the Itô isometry and one of the solution theories behaving correctly under changes of coordinates, then they differ by a counterterm belonging to the following space: the space of linear combinations such that, if I take two different square roots of my metric and look at the difference of the evaluations corresponding to these two square roots, what I obtain is a vector field, for every choice of sigma and gamma. Now this space obviously contains both the geometric elements and the Itô elements: the Itô terms are precisely those for which this difference vanishes, because they are the terms for which two different square roots of the metric give the same thing, depending only on the metric and not on the choice of square root; and the geometric terms are those for which each of the two evaluations separately is a vector field, so in particular their difference is a vector field as well. So each of these subspaces certainly lies in that space, and hence so does their sum; the non-trivial fact is that the sum is actually equal to that space. And that's not obvious. In the case of stochastic differential equations you could try to run exactly the same proof; everything works up to this point, and then you realize that your space consists of only one single symbol, whose evaluation is the expression I already wrote down earlier. That expression is not a vector field, and it also really depends on the choice of sigma, so it is not just a function of the metric: it belongs neither to the one space nor to the other. So those two spaces are both zero; but if I take two different choices of sigma giving the same metric, it turns out that the difference is actually equal to this difference of covariant derivatives, because the term involving the connection drops out, and therefore the difference is a vector field. So the two subspaces are zero while the big space is non-zero but one-dimensional; the proof fails, and indeed the conclusion is known not to be true in the case of stochastic differential equations.
Now, in the case of PDEs: if I look at the trees that have two leaves, there are only two of them. This one, if I hit it with my evaluation, gives essentially the contraction of the Christoffel symbol with the sigmas: the red node represents the Christoffel symbol, the two green nodes the two instances of sigma, and the fact that they are connected represents the contraction; that contraction is nothing but the inverse metric tensor, and therefore this element belongs to the Itô space. Similarly, for the other term you can show that applying the evaluation map gives the covariant derivative of sigma_i in the direction of sigma_i, and that is a vector field. It turns out there is no other vector field you can build in this way, so in this case both subspaces are one-dimensional and their sum is actually everything; and you can show that both of these elements have the property defining that bigger space. So in this case that part of the argument works: once you know that the difference is of the form of a sum of an Itô counterterm and a geometric counterterm, you know the two affine spaces have to meet, because you can move in one space by terms of the first type and in the other by terms of the second type, and if the difference is a sum of such terms, you can move both until they meet. The problem now is to prove this in general. For the trees with two leaves it's easy to check by hand, but for those with four leaves there are 52 of them, and you have to figure out what these subspaces are; it's not so easy to see what subspaces of a 52-dimensional space look like without something more systematic. Here you could just about turn it into a simple linear-algebra problem, but for the trees with four leaves that's not really doable any more, so you want a more systematic way of looking at it. It turns out that the natural way of abstracting the algebraic structure that shows up, of which these trees with their various decorations are an example, is what we call a T-algebra. We didn't find it anywhere in the literature, but maybe we simply don't know this literature; if some of you have seen this somewhere already, we would be very happy to have a pointer, but we haven't been able to find it. It is essentially an abstraction of the notion of functions with multiple free upper and lower indices. How do we define it? As a vector space with a grading, in fact a double grading, so it has two degrees, and these degrees you should think of as the numbers of free indices: u is the number of free upper indices and l is the number of free lower indices, so a vector field would be something with one free upper index, an element of V_{1,0}. And then you have three additional pieces of structure.
The first is that on each of these V_{u,l} you want an action of the symmetric group, actually of two copies of the symmetric group, one acting on the u upper indices and one on the l lower indices; think of it as corresponding to permutations of indices. Then you have a product: if you think of these as functions with a number of free indices, you can multiply two such functions, essentially a tensor product, so the numbers of upper indices add up and the numbers of lower indices add up; the product preserves the degrees in this sense. In terms of permuting indices, multiplying a with b versus b with a is of course not quite commutative, but it is almost commutative in the same sense as the usual tensor product: you impose that b times a is the same as a times b followed by the permutation that takes a block of size u1 and a block of size u2 and swaps the two blocks, and the same for the lower indices with blocks of sizes l1 and l2. That's the natural property you would expect this product to have, and it should also commute with the action of the symmetric group, in the sense that first permuting indices and then multiplying is the same as first multiplying and then permuting the indices in the natural way. The final operation you want is a partial trace, corresponding to contracting an upper index with a lower index; you view it as an operation from V_{u+1,l+1} into V_{u,l}, so that both degrees go down by one. You would want to contract two arbitrary indices, but we can always reduce to the case of contracting the two last ones, because we have the operation that permutes indices; so think of the trace as contracting the last upper index with the last lower index, and if you interpret it like this, this property is of course very natural. Then one thinks a little about how this should interact with the symmetric group. One important property says: if you look at the last two upper indices and the last two lower indices and apply the trace operation twice, first contracting this pair and then that pair, it should be the same as first exchanging the last two upper indices and the last two lower indices and then contracting both, which just corresponds to contracting them in the reverse order; you want that to make no difference. One typical example is to take a fixed vector space V and let V_{u,l} be u copies of V tensored with l copies of its dual; then you have a natural product, natural permutations, and a natural trace operation, which takes the last copy of V-star and contracts it with the last copy of V.
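A small concrete sketch of exactly this model, with numpy arrays standing in for elements of V_{u,l} (V = R^n, the dual identified with V via the standard basis; the helper names "product" and "trace" are mine):

    import numpy as np

    # An element of V_{u,l} is an array with u "upper" axes followed by l "lower"
    # axes, all of dimension n.

    def product(a, ua, la, b, ub, lb):
        """Tensor product, with axes reordered so that all upper indices come first."""
        t = np.tensordot(a, b, axes=0)            # axis order: a-upper, a-lower, b-upper, b-lower
        perm = (list(range(ua))                                   # upper indices of a
                + list(range(ua + la, ua + la + ub))              # upper indices of b
                + list(range(ua, ua + la))                        # lower indices of a
                + list(range(ua + la + ub, ua + la + ub + lb)))   # lower indices of b
        return np.transpose(t, perm), ua + ub, la + lb

    def trace(a, u, l):
        """Partial trace: contract the last upper index with the last lower index."""
        return np.trace(a, axis1=u - 1, axis2=u + l - 1), u - 1, l - 1

    # Example: G plays the role of a Christoffel-like object in V_{1,2} and X of a
    # vector field in V_{1,0}; product followed by one trace gives G^a_{bc} X^c in V_{1,1}.
    n = 3
    rng = np.random.default_rng(0)
    G = rng.standard_normal((n, n, n))
    X = rng.standard_normal((n,))
    GX, u, l = product(G, 1, 2, X, 1, 0)
    Y, u, l = trace(GX, u, l)
    print(Y.shape, np.allclose(Y, np.einsum("abc,c->ab", G, X)))   # (3, 3) True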
fixed vector space v and then take l copies of the dual and u copies of v itself and so then you have a natural you know product and permutations and you have a natural tracing operation as well right which sort of takes the last copy of v star and contracts it with the last copy of v and that has exactly all the properties that we just formalized um so I think I'm running out of time so I'm uh so let me just sort of give you one sort of little result that we have in this direction which is very useful because so the point here is that we're using we're using this algebraic structure sort of as a language but at the end of the day we want to use it to prove an analytical result so we have to go back and forth between the algebra and the analysis and so in particular we have to prove that at the analytic level you don't somehow end up with kind of spurious identities that you know you don't see at the algebraic level but that may you know just appear because somehow there's some degeneracy or something right and so you want some kind of non-degeneracy result that tells you that generically you don't actually have any sort of cancellations also at the analytical level that you don't already have at the algebraic level okay um and so here we have some kind of non-degeneracy result of this time which essentially says that for you know a large class of these kind of t-algebras that are all the ones that ever show up in the proofs that we care about um you can always if you look at just the sort of finite dimensional subspace of them then you can take the so if I go back to the sort of string in the manifold you can take the if you take the dimension of the manifold large enough and you choose these Christoffel symbols and these vectors in a generic way they can you can always guarantee that if the dimension is sufficiently large then there are no kind of spurious cancellations that appear okay so that's a sort of type of non-degeneracy results you can prove here um but I think I'm out of time so this is maybe a good place to stop and thank you very much for your attention thank you very much Martin for your nice talk thank you um before I ask questions please if anybody else wants to ask go first raise your hand or just start speaking and one thought that came to my mind when you talked about these t-algebras I'm not an expert on these algebra things either but this thing with many inputs and many outputs right I mean that's that's a proper read and then yeah so this is an example okay so it's an example of one of these universal algebras that you can associate to an operand right so if you have an operand then there's always a universal algebra that goes with it um so here this would be like one specific example yeah yeah so so there is indeed so there is there are whole books on universal algebra um but then they tend to be sort of too general for our purpose yeah because they sort of say oh take an arbitrary operand and then there's this algebra that goes with it and you have sort of general properties but here we don't care about the arbitrary operand there's a sort of one very specific operand yeah yeah I'm sorry now I have to ask I don't think you have anything upright you have a sort of weird structure where you have half of the structure for prop you're not you're only taking traces right or you're doing these things together so you have you have an uh you have an uh SN action and SM action so that's the first thing that would underlie a prop but and then you have what's called the 
horizontal gluing which is we take the SN uh you add them together and then you have sort of these wheels which give you these traces but that's it right you don't have anything else you don't have something that you can put the inputs into the outputs or do you have that you do uh yeah but you can you actually right the because what I described here was not the operand right what I described this is of the algebra uh so this algebra you can view this algebra as coming from an operand and then the operand would be the one where your objects are things of that type you have like boxes uh so you have a finite number of inputs uh a finite number of outputs and you have boxes and each box also has a sort of number of inputs and number of outputs uh and then they're connected well in the way that you should think you know they're sort of connected in this way um so here uh now I'm sort of running out of uh so I can do this and then this and well there's nothing to connect it to these outputs so there's stuff like that but now you can these type of objects you can plug them into each other right because I can get I can take an object of I can take a gun I can take a guy with two inputs and one output and sort of stuff in the middle uh and now I can take that guy here and I can sort of plug it into this box here and I connect these two inputs to these two slots and that guy to that slot right and that actually gives me an operand and sort of the these t algebra sort of comes from there yeah so technically speaking what you have is a wheeled prop sorry it's called a wheeled prop operate uh technically only has one output you have multiple that makes it a prop oh I see okay then uh since you have things going back that's what makes it called be called wheeled okay the property I have a directed graph and then if you go back up you get this and so and your t algebra is a free t algebra or I mean sorry is a free algebra over this thing or just a specific one that's what you're saying no right so it doesn't have to be free right so you can so the free ones you could describe them as basically just being you know sort of linear combinations of stuff of that type but the ones that then show up in our context are not free and and that those are the ones you care about that's what you were saying you don't need the proper language because some of the stuff you should sort of get for free yeah yeah yeah no the ones that yeah I'm trying right thanks all right I suggest we we prepare for you on the next thanks Martin thanks very much again thank you Martin
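To make the concrete example at the end of the talk a bit more tangible, here is a minimal numerical sketch of the T-algebra V_{u,l} = V^{⊗u} ⊗ (V*)^{⊗l} for V = R^d, with the product given by the tensor product (upper indices first, then lower) and the partial trace contracting the last upper index with the last lower index. All names and conventions in the snippet are our own choices for illustration, not notation from the talk.

```python
import numpy as np

d = 3  # dimension of the underlying vector space V (arbitrary choice)

def product(a, u1, l1, b, u2, l2):
    """Tensor product of a in V_{u1,l1} with b in V_{u2,l2}, reordered so
    all upper indices come first, then all lower indices."""
    t = np.tensordot(a, b, axes=0)  # axes: (a upper, a lower, b upper, b lower)
    perm = (list(range(u1))                                   # a upper
            + list(range(u1 + l1, u1 + l1 + u2))              # b upper
            + list(range(u1, u1 + l1))                        # a lower
            + list(range(u1 + l1 + u2, u1 + l1 + u2 + l2)))   # b lower
    return np.transpose(t, perm)

def partial_trace(a, u, l):
    """Contract the last upper index with the last lower index:
    V_{u,l} -> V_{u-1,l-1}."""
    return np.trace(a, axis1=u - 1, axis2=u + l - 1)

# Example: a vector field X in V_{1,0} and a covector w in V_{0,1};
# tracing their product recovers the pairing w(X).
X = np.random.randn(d)
w = np.random.randn(d)
pairing = partial_trace(product(X, 1, 0, w, 0, 1), 1, 1)
assert np.isclose(pairing, w @ X)
```

The same `partial_trace` call works for tensors of any bidegree, mirroring the "contract the last upper with the last lower index" convention described in the talk.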
The stochastic quantization of the 1d non-linear sigma model (i.e. the natural Langevin dynamic on loop space) naturally leads to the study of an algebraic structure we call a T-algebra. We will discuss how they arise, a few of their properties, as well as a concrete example of their application.
10.5446/51275 (DOI)
Okay, well, thank you very much. I'm honored and thrilled to be here. So the first thing to say, I guess, is happy birthday, Dirk — very belated at this point; if I'm not mistaken, it was sometime in July. Dirk was a very important mathematical influence on me about 15 years ago, when I arrived in Boston, where Dirk was spending half the year at the time. I started talking to him and learning about his work on the Hopf algebra of renormalization, and thinking about how these structures could be lifted to a categorical level. In fact, today's talk, which is based on recent work, is very much, at least in my mind, related to that set of ideas. Over the years I've had the pleasure of visiting him in Paris and in Berlin, and I have great memories of those visits. So thanks for all the inspiration, Dirk.

All right, let me get started. The main idea of this talk is that one can interpret things like the Hopf algebra of rooted trees or the Hopf algebra of Feynman graphs, as well as a number of other combinatorial Hopf algebras, as Hall algebras. And what's a Hall algebra? A Hall algebra is an algebra whose structure coefficients count extensions, or short exact sequences, in a category. I will talk about these types of categories, and the ultimate goal is to describe a construction which attaches a Hopf algebra to a projective toric variety; this will arise by applying the Hall algebra construction to a category of coherent sheaves in a somewhat strange setting. My plan is to first talk a little about Hall algebras in the traditional setting — they are a tool important in representation theory, for people who study quiver representations and quantum groups and things like this — but the setting in which we will ultimately apply them is somewhat different: categories that are combinatorially defined, which are non-additive, but where one can nevertheless do the same thing. Then I'll discuss some elements of the algebraic geometry of monoid schemes and the kind of combinatorics we see in that setting.

Okay, so what's the traditional setting of Hall algebras, the way they have been used in representation theory since at least the 1980s? You start with an abelian category with some strong finiteness conditions — this is called being finitary — meaning that the set of morphisms and the set of extensions between any two objects are finite as sets. This is not so easy to achieve, because if the category is linear over some field, that field had better be finite, or else a finite-dimensional vector space is not a finite set. The two main sources of examples of such categories are quiver representations — a quiver is a directed graph, and a representation attaches vector spaces to the vertices and linear maps along the edges; the category of quiver representations over a finite field has this property — and, as a geometric source of examples, the category of coherent sheaves when X is a projective variety over a finite field. So those are the two main examples. What is a Hall algebra? As a vector space, it's just functions on isomorphism classes in the category.

We take the most naive version of the Hall algebra — I should say there are more sophisticated versions — namely functions on isomorphism classes with finite support, so non-zero on only finitely many isomorphism classes. You can equip these with a type of convolution product: the convolution evaluated on the isomorphism class of an object M is obtained by summing over all subobjects N of M and evaluating f on the quotient and g on the subobject. If you squint and replace the summation sign with an integral, and think of M/N as x − y and N as y, you can see why this is called a convolution: it's reminiscent of convolution of functions in harmonic analysis. A basis for these finitely supported functions is given by delta functions on individual isomorphism classes, and if you convolve two delta functions, you get a sum whose structure coefficients are the following numbers: you count the subobjects of K which are isomorphic to N and such that K modulo the subobject is isomorphic to M — so, up to the automorphism groups, you are counting short exact sequences N → K → M. That's what I meant when I said that the structure coefficients of Hall algebras count short exact sequences, and of course it implies that the structure coefficients are non-negative, since they count things.

Now, to connect this with quantum groups — where, if you've seen them, there are a bunch of powers of q floating around — we have to take a slight twist of the multiplication. You introduce something called the multiplicative Euler form; for the formula to make sense you need a category where Ext is non-zero in only finitely many degrees, so something of finite homological dimension, and you can think of it as basically q to the power of the ordinary Euler form. We take the old multiplication and multiply through by this quantity, which is ultimately a power of q depending on M and N. The theorem of Ringel and Green is that for a finitary abelian category, these algebras — with or without the Euler form — are associative algebras, so you get an associative ring. If you want to study coalgebra structures, it becomes a little more subtle: if A happens to be hereditary, so that the global dimension of A is at most one, then you can equip this with a coproduct and an antipode and get a Hopf algebra. In some cases this will be something like a topological Hopf algebra, because the coproduct might land in a completion, but roughly speaking you get a Hopf algebra — only in this nice case where the global dimension is at most one. And the claim is that the Hall algebras you get in this way are interesting quantum-group-type objects.
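For reference, the product just described can be written out as follows — a sketch using the talk's convention of evaluating f on the quotient and g on the subobject; the precise normalization of the twist varies between references:

\[
(f * g)\big([M]\big) \;=\; \sum_{N \subseteq M} f\big([M/N]\big)\, g\big([N]\big),
\qquad
\big(\delta_{[A]} * \delta_{[B]}\big)\big([K]\big) \;=\; \#\{\, N \subseteq K \;:\; N \cong B,\ K/N \cong A \,\},
\]

so the structure constants count short exact sequences \(B \hookrightarrow K \twoheadrightarrow A\). The twisted (Ringel) product rescales this by the multiplicative Euler form, schematically

\[
\delta_{[A]} \cdot \delta_{[B]} \;=\; \sqrt{q}^{\,\langle A,B\rangle}\; \delta_{[A]} * \delta_{[B]},
\qquad
\langle A,B\rangle \;=\; \sum_i (-1)^i \dim \operatorname{Ext}^i(A,B).
\]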
As I said, the two main examples of finitary abelian categories are quiver representations and coherent sheaves on a projective variety. The first theorem tells us what happens when we look at quiver representations. If I take a quiver, I can view the underlying undirected graph as the Dynkin diagram of some Kac–Moody algebra, and the theorem states that the Hall algebra we computed by counting short exact sequences contains the positive half of the quantum group corresponding to this Kac–Moody algebra. Roughly speaking, just like ordinary semisimple or Kac–Moody algebras, these quantum groups have triangular decompositions, and the plus means that we're looking at the upper-triangular part. This map is an isomorphism in finite type: when the Dynkin diagram is of type ADE, it is actually an isomorphism, and there is then a procedure, using the Drinfeld double, by which you can recover the entire quantum group. What I want to emphasize here is that if we hadn't learned about quantum groups from Drinfeld and Jimbo, we could in principle have discovered them using this Hall algebra construction. Another thing to point out is that the order of the field, q — a prime power — or rather its square root, appears as the deformation or quantization parameter. So somehow prime powers have something to do with quantization, and I'm not sure anyone really understands why that is.

So that is a theorem telling us what happens with quivers. As a sampling of the other direction, I'll mention the theorem of Kapranov and of Baumann–Kassel, which tells you what happens if you take the category of coherent sheaves on just P1 — the simplest projective variety you could have — over a finite field. There again you see a quantum-group-type object, but now it's a quantum affine algebra: roughly speaking, this is what happens if you quantize the loop algebra of sl2 rather than sl2 itself. So in both of these cases we get some interesting quantum group, and there has been lots of work on this by many authors — Burban, Schiffmann, Eric Vasserot, Kapranov, and many other people. As you increase the complexity of the variety in the coherent-sheaf case, this thing gets complicated fast: already for elliptic curves it's not easy to see what happens, and interesting algebras like the double affine Hecke algebras show up. More generally, the study of Hall algebras of curves over finite fields is related to the theory of automorphic forms over function fields: the action of the Hall algebra on itself corresponds to the action of geometric Hecke operators. So already in higher genus, aside from abstract results, it's not easy to see what happens concretely, and when X is a variety of dimension greater than one, basically very little was known or understood about what these Hall algebras look like.
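A tiny worked example of where the powers of q come from (our illustration, not from the talk): take the quiver with one vertex and no arrows, so that representations over F_q are just finite-dimensional vector spaces. Counting subobjects N ⊆ K with N ≅ B and K/N ≅ A gives

\[
\delta_{[\mathbb{F}_q]} * \delta_{[\mathbb{F}_q]} \;=\; (q+1)\,\delta_{[\mathbb{F}_q^2]},
\]

since a two-dimensional space has exactly q+1 lines, each with one-dimensional quotient. As q → 1 this becomes \(\binom{2}{1} = 2\), which is the simplest instance of the "q goes to one" philosophy discussed later in the talk.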
Also, these categories of coherent sheaves no longer have global dimension one or less, so as for the coalgebra structure — you can write down a ring, but whether it is a Hopf algebra or not becomes a much harder question. This talk is ultimately aiming at this last case of higher-dimensional varieties, but in a certain combinatorial limit, a type of classical limit.

Going back to the formula for the multiplication in the Hall algebra: there is nothing about this formula which explicitly requires the category to be abelian or additive. Even though historically people were looking at quiver representations or coherent sheaves, the formula makes sense in a more general context, and that's the context we will be interested in. When I started thinking about this, there wasn't really a good framework for these non-additive examples, but since then there has been very nice work of Dyckerhoff and Kapranov: they define a class of categories called proto-exact categories. These are generalizations of Quillen exact categories which are allowed to be non-additive, and which are tailored to Hall algebras; there is a very nice formalism developed by them, related also to higher-categorical aspects of the story. Something to note is that any proto-exact category has an associated algebraic K-theory: you can define higher K-groups of anything that is proto-exact. What they show is that if you have a proto-exact category which is again finitary — so that the morphisms between any pair of objects and the extensions of any two objects are finite as sets — then you can write down an associative algebra using the formula we saw before. Everything works as expected.

Now, there are lots of examples of proto-exact categories that are combinatorial and non-additive, so let me just mention a few. The simplest non-additive example is perhaps the category of pointed sets. Related examples: take a monoid and the category of modules over the monoid, where by a module I mean a pointed set with an action of the monoid — the things people in semigroup theory sometimes call acts. Other examples: you can take a quiver and, instead of putting vector spaces on each vertex, put pointed sets, and again things can be made to work more or less as before; or other structures in combinatorics, such as pointed matroids. And the examples that come up in renormalization — rooted trees and forests, as well as Feynman graphs — are also examples of proto-exact categories, which means, as I already mentioned, that they have an associated algebraic K-theory. I'll get to this in a second, but these K-groups are known to be very interesting. The example that will be of main interest in this talk, though, is the last one: the category of coherent sheaves on something called a monoid scheme.
These monoid schemes are basically versions of algebraic geometry in a non-additive context; they correspond to combinatorial limits of ordinary schemes, if you will, and they are a good source of non-additive categories where Hall algebras still make sense. My goal is basically the following. We know that Hall algebras give interesting quantum groups, so in particular, if you have a higher-dimensional variety — a surface and beyond — you would expect, by looking at the Hall algebra of coherent sheaves over F_q, to get some interesting, maybe quantum-group-like object. But this seems very hard. So let's try to do something simpler and compute a classical limit: I want to take the limit as the deformation parameter q goes to one and see what happens to this Hall algebra. It should become somehow more commutative in this limit, and the hope is to use the information about this classical limit to understand something about the original structure we were really after, which is the quantum object.

This ties in with the philosophy of doing things over the field with one element, which comes up when you look at limits of calculations over finite fields as q goes to one. This is an old story, and I would claim it's more a set of interesting analogies and ideas than a complete theory at this stage, but the ideas are kind of neat, I think. Let me give a couple of examples. Consider the enumerative problem of counting subspaces of an n-dimensional vector space over F_q — in other words, counting the number of points of the Grassmannian over F_q. It's an elementary exercise to see that this is given by a rational function in q, called the q-binomial coefficient "n choose k", and this rational function has a well-defined limit as q goes to one, namely the ordinary binomial coefficient. This leads to the idea that the limit as q goes to one of the category of vector spaces over F_q is something like the category of sets — or better, pointed sets, because you want something corresponding to zero in your quote-unquote vector space. So a pointed set is a vector space over F_1.

Let me mention another classical observation, due to Tits: if you take a simple algebraic group over F_q, count its points over F_q, and take the limit, suitably normalized, as q goes to one, what you find is the order of the Weyl group of G. This again led to the notion that, if we had a good theory of algebraic groups over F_1, the F_1-points of an algebraic group should be its Weyl group. And to some degree this has actually been made precise in the work of Lorscheid and of Connes–Consani and others.
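A quick computational check of the Grassmannian count just mentioned (a sketch; the function names are our own):

```python
from math import comb

def q_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n, computed as
    prod_{i=0}^{k-1} (q^(n-i) - 1) / (q^(i+1) - 1)."""
    num, den = 1, 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(i + 1) - 1
    return num // den

# Over F_2 and F_3:
print(q_binomial(4, 2, 2), q_binomial(4, 2, 3))   # 35 130

# "q -> 1": evaluating the underlying rational function near q = 1
# recovers the ordinary binomial coefficient.
q = 1.0001
approx = 1.0
for i in range(2):
    approx *= (q**(4 - i) - 1) / (q**(i + 1) - 1)
print(round(approx, 2), comb(4, 2))               # prints 6.0 and 6
```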
I'm not going to get into this very much, but the basics of the dictionary are the following: the category of vector spaces over F_1 should be something like pointed sets, and an algebra over F_1 should be a monoid. All the structures over F_1 are non-additive — you lose addition: as you go from vector spaces to sets you lose addition, and as you go from algebras to monoids you again lose addition. The notion of a module becomes a pointed set with an action of the monoid, and so it's not surprising that the notion of a scheme or algebraic variety over F_1 should also be something built out of monoids. Let me say a few words about what monoid schemes are — I have one additional slide here. At least in my mind, these various proto-exact or proto-abelian categories that are non-additive can, in some cases at least, be thought of as q-goes-to-one limits of additive things: they are the analogues over F_1 of abelian or exact categories — things like matroids and graphs and so on, in this combinatorial setting.

I mentioned already that if you're working over F_q, the existence of a coalgebra structure becomes subtle: you need some conditions to define a coproduct. But if you're working with these combinatorially defined categories, you can do something very simple-minded. Our Hall algebra is functions on isomorphism classes, and most of these combinatorial gadgets have a coproduct in the category which more or less amounts to disjoint union, or wedge sum, which is the pointed version of disjoint union: if we have two Feynman graphs, we can take their disjoint union; if we have two rooted trees, we can take their disjoint union and get a forest; and so on — with combinatorial objects, the coproducts are more or less disjoint unions. So you can define a coalgebra structure which sends a function to the function evaluated on the disjoint union or wedge sum, and this turns out to be compatible with the product in these combinatorial categories, so you get something which is manifestly a cocommutative Hall algebra, a cocommutative bialgebra. It's also easy to see that there is a natural grading by a positive cone inside the Grothendieck group, and the thing is connected, so you always get a graded connected cocommutative Hopf algebra. At this point we can apply the Milnor–Moore theorem, and it tells us that what we have is an enveloping algebra. The corresponding Lie algebra — this Hall Lie algebra — corresponds to indecomposable objects: things like graphs that are connected, or trees that are honestly rooted trees and not forests, and so on. I should say here that, in connecting the Hall algebra story with the usual story of the Hopf algebra of graphs or trees, what we're getting here is the dual: we get an algebra which is non-commutative but cocommutative, as opposed to how these Hopf algebras are usually viewed.
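One way to make the coproduct and the Milnor–Moore step concrete (a sketch, using the disjoint-union convention just described and assuming unique decomposition into indecomposables):

\[
\Delta(f)\big([M],[N]\big) \;=\; f\big([M \sqcup N]\big),
\qquad
\Delta\big(\delta_{[I]}\big) \;=\; \delta_{[I]} \otimes 1 \;+\; 1 \otimes \delta_{[I]}
\quad\text{for } I \text{ indecomposable},
\]

so the delta functions on indecomposable objects are primitive, and the Hall Lie algebra appearing in the Milnor–Moore description is spanned by them.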
So let's see how this classical limit idea works out: we want to use the Hall algebra story together with the F_1 philosophy to compute some classical limits. Let me just give a couple of examples. If you look at the category of modules over the free monoid on one generator — the monoid which is just powers of t — then the Hall algebra of this category is basically a dual of Dirk's Hopf algebra of rooted trees. This is because to equip a set with an action of this monoid is basically to draw a directed graph which tells you how t acts, and you can see that the types of graphs that can arise are either rooted trees or cycles with rooted trees attached.

If you take a quiver and look at representations of that quiver in pointed sets — which you should view as the q-goes-to-one limit of the category of ordinary quiver representations — what do you get? The naive guess would be that the classical limit of what we had before, the positive half of the quantum group, should just be the enveloping algebra of the positive part of the Kac–Moody algebra. What you actually get, in general, is this modulo a certain ideal, which I'm not going to describe here, and this reflects a kind of non-flatness of the q-goes-to-one limit: something non-trivial happens as q goes to one, and you get something which is maybe smaller than expected. However, if the quiver is of type A, everything works nicely and you get exactly the enveloping algebra of upper-triangular matrices. As we know, in mathematics everything works nicely in type A and then not as nicely in other cases. These Hall algebras applied to other combinatorial categories recover other known objects: for Feynman graphs we get the dual of Dirk's Hopf algebra of graphs, for matroids you get the dual of Schmitt's matroid-minor Hopf algebra, and so on.

As I mentioned, these categories have an associated algebraic K-theory, and just to indicate that this K-theory is interesting: in the simplest case, if we take vector spaces over F_1 — that is, pointed sets — the K-groups correspond to the stable homotopy groups of spheres. And that is somehow the simplest case: in several other cases, for instance matroids, you can show that the K-theory is at least as big as the stable homotopy groups of spheres, in the sense that the K-theory of F_1 vector spaces quite often sits inside these things. So these K-groups are interesting and somehow hard to compute.

Okay, so finally let me talk about doing algebraic geometry in this non-additive setting, which will ultimately lead us to toric varieties — or rather to their monoid versions. We know that an ordinary scheme in algebraic geometry is obtained by gluing spectra of rings, and you can do the same thing with spectra of monoids.
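To illustrate the first example above, here is a small sketch (our own code and naming) of a finite pointed set with an action of the free monoid on one generator t, stored as the map "apply t once". Iterating t from any element either reaches the basepoint — giving the rooted-forest part the talk relates to Dirk's Hopf algebra of rooted trees — or falls into a cycle, giving the "cycles with rooted trees attached".

```python
def classify(f, basepoint="*"):
    """f: dict sending each element of a finite pointed set to its image
    under the generator t (with f[basepoint] == basepoint).
    For each non-base element, report whether iterating t eventually
    hits the basepoint ('tree') or falls into a cycle ('wheel')."""
    out = {}
    for x in f:
        if x == basepoint:
            continue
        seen, y = set(), x
        while y != basepoint and y not in seen:
            seen.add(y)
            y = f[y]
        out[x] = "tree" if y == basepoint else "wheel"
    return out

# A <t>-module: a rooted chain a -> b -> * together with a 2-cycle c <-> d.
example = {"*": "*", "a": "b", "b": "*", "c": "d", "d": "c"}
print(classify(example))  # {'a': 'tree', 'b': 'tree', 'c': 'wheel', 'd': 'wheel'}
```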
If we have a commutative monoid, we can define prime ideals, we can define a Zariski topology, and we can do exactly the same thing we do for rings. What you obtain when you glue these things together is a monoidal space — a topological space with a sheaf of monoids on it — and that is what a monoid scheme is: it's the exact same story, based on commutative monoids rather than commutative rings.

Just to give you a flavor for this: what are some of the simplest such schemes? The thing to notice is that these schemes have very few points. The affine line, which normally is Spec of a polynomial ring, becomes in the monoid version Spec of the free monoid on one generator, and this thing has only two points: exactly one prime ideal, the one generated by t, and the generic point. The monoid version of affine n-space is the free commutative monoid on n generators, and its prime ideals correspond to subsets of the variables — exactly the coordinate subspaces. Part of the reason there are so few points is that monoid schemes are closely tied to toric geometry, and their points in the monoid sense correspond to torus-equivariant subschemes of the corresponding toric variety; there are of course far fewer of these equivariant things than there are points in general. Here I also give an example where you take two copies of the affine line and glue them to get P1, so P1 now has just three points: zero, infinity, and a generic point.

Okay, so how are monoid schemes related to toric varieties? Toric varieties are determined by fans, and a fan gives you a monoid scheme. A fan, in the toric sense, is a collection of cones satisfying some properties; let me just give an example. Here is a fan — basically a way of dividing R^2 into three chambers in a nice way — and the three colored regions are the three cones. For each cone, if you look at the lattice points that live within it, you get a finitely generated semigroup, or monoid, and the picture tells you how these monoids are glued together. This data can be assembled into gluing data for a scheme — a monoid scheme. If you linearize this object — meaning instead of taking the monoid you take the monoid ring — you get a toric variety in the ordinary sense. So the three cones in the picture give us three monoids: sigma_0 is the first quadrant, whose semigroup of lattice points is generated by the standard basis vectors e1 and e2, corresponding to the variables x1, x2; the other cones give different semigroups. Okay, so now let me discuss what coherent sheaves on this object look like. The story is again parallel with the ordinary story.
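As a reference point, the P1 example just described can be written out explicitly — a sketch in one common monoid-scheme notation; here ⟨t⟩ is the free commutative monoid on t and ⟨t, t^{-1}⟩ its group completion:

\[
\mathbb{A}^1_{\mathbb{F}_1} \;=\; \operatorname{Spec}\langle t\rangle \;=\; \{\,\eta,\ (t)\,\},
\qquad
\mathbb{P}^1_{\mathbb{F}_1} \;=\; \operatorname{Spec}\langle t\rangle \;\cup_{\operatorname{Spec}\langle t,\,t^{-1}\rangle}\; \operatorname{Spec}\langle t^{-1}\rangle \;=\; \{\,0,\ \eta,\ \infty\,\},
\]

where η denotes the generic point (the empty prime ideal) and 0, ∞ are the two torus-fixed points — the three points mentioned in the talk.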
If you have a module over a ring, that module gives you a coherent sheaf — or a quasi-coherent sheaf — on the scheme corresponding to that ring, and the same thing happens here: if you start with a module for the monoid, then by standard constructions you get a quasi-coherent sheaf on the monoid scheme I described. Of course these categories are not additive: in ordinary algebraic geometry coherent sheaves form an abelian category; on a monoid scheme they can't, because it's not additive, but they do form proto-exact categories in the Dyckerhoff–Kapranov sense. So we have a chance to talk about Hall algebras of these coherent sheaves.

Now you run into a certain problem: to get Hall algebras we need finitary conditions — two objects should have only finitely many extensions, only finitely many short exact sequences between them — and that actually fails in the monoid context, even when the fan underlying the monoid scheme corresponds to something projective. That doesn't happen in the ordinary setting: when we do ordinary algebraic geometry, results of Serre tell us that Ext in that case is finite. But things can go wrong in this non-additive world. What we can do is pass to a smaller category of sheaves. We define a class of sheaves called T-sheaves — I'll show you a bunch of pictures pretty soon — which locally admit a grading by the semigroup: we've seen that our monoid scheme is covered by subsets corresponding to cones, each cone corresponds to a semigroup S_sigma, and locally we want the sheaves to admit a grading by this S_sigma. There is a second condition — something that comes up a lot in the non-additive world — which is a type of cancellativity, and this actually fixes the finiteness issue: once you impose these conditions, everything works.

How does this connect to combinatorics? The basic theorem is that T-sheaves on affine space correspond to n-dimensional skew shapes in the ordinary combinatorial sense. Let me give an example in two dimensions, though this works in any dimension. Here is a skew Young diagram, a skew partition; how can I think of it as a module for a monoid? In two dimensions, for the affine plane, I have the free commutative monoid on generators x1 and x2. How does this monoid act on this set of boxes? x1 moves one box to the right until we fall off the diagram, in which case things go to zero, and x2 just moves up. Any diagram like this can be thought of as a module over this monoid, and therefore as a coherent sheaf. Let me remark that this particular picture is, in the monoid setting, a torsion sheaf supported on the third formal neighborhood of the origin, because three is the smallest power of the maximal ideal that kills all elements: if you take any positive path of length at least three, you fall off this diagram, and that's the smallest number that works.

Here are some other pictures of sheaves. The diagrams can be infinite: here I'm thinking of a diagram that continues off to infinity in the y direction and in the x direction, and this corresponds to a coherent sheaf supported on the union of the x and y axes. This is a diagram which just has something missing in the lower left-hand corner but is infinite beyond what I've drawn, and it's a picture of a torsion-free sheaf. Here is a picture in three dimensions: a sheaf on A^3 supported on the union of the three coordinate axes. In general, if I have a monoid scheme, I glue these skew shapes together to get something global. So these T-sheaves are objects glued together from skew partitions, and the theorem is that this category of T-sheaves is nice: it is finitary and proto-exact, so you can define a Hall algebra, and by all the abstract nonsense I mentioned earlier, that Hall algebra is an enveloping algebra. In other words, for each toric variety we get a Lie algebra, and you can ask what these Lie algebras look like.

Let me give an example of how you would compute a Lie bracket. Here I've chosen two torsion sheaves — two finite diagrams, in two dimensions, supported at the origin — and I want to look at all extensions between these two diagrams. The product in the Hall algebra amounts to all ways of stacking one diagram on top of the other, and then we antisymmetrize this operation: s times t is the sum of these three terms, all ways of sticking t to the right and up of s, and then we antisymmetrize by sticking the diagram corresponding to s up and to the right of t. As a byproduct you get, for instance, that skew shapes in n dimensions carry a Lie algebra structure under this bracket, and in fact the structure constants of this Lie algebra are always plus or minus one or zero.

I'm basically running out of time, but in certain examples we are able to compute these Lie algebras attached to toric varieties, and you see things that look like certain pieces of loop algebras. For P1 you get a certain piece of loops into gl2, which can be thought of as a classical limit of Kapranov's theorem that I mentioned earlier, where he was working over F_q. And if you take sheaves supported on the second formal neighborhood of the origin in A^2 — so point sheaves again — you can look at the Hall algebra of that category, and you get a certain subalgebra of loops into gl2. There are other examples: for P2 we are able to exhibit a basis, and in some cases, when we truncate the Lie algebra, we can identify it with things that are known. But in general the structure of these objects, which again are believed to be classical limits of some type of quantum group, is still pretty mysterious. All right, I'm going to stop here, so thank you very much.

All right, let's thank Matt. Are there questions? — Yes, I would like to ask about these black dots on the shapes you used to get a Lie bracket. — It was at the end? — Yes, after that. Do you have some properties for this black dot operation — for example, is it pre-Lie, or whatever? — I don't think so in general. For a general proto-exact category I don't expect this to be pre-Lie — this is more of an intuition I have than a theorem — because pre-Lie is about insertion at a single spot, and with these skew shapes, when you stick them together, you're not really inserting in a single place like on a tree, and that ruins the pre-Lie property. This actually came up: I thought about trying to do an insertion–elimination type construction for these skew shapes, and things fall apart precisely because when you stick them together the interaction is non-local in some sense. — Okay, thank you.

— I had a quick question. You mentioned the K-theory of these categories, and in the very specific example of rooted trees, which have been studied in great detail, what is the answer — what is the K-theory for those? — I'm embarrassed to say I don't remember exactly. It's something that again contains the stable homotopy groups of the sphere spectrum; I think you can write it more or less as a smash product of that with something else, but I'll get it wrong if I say it now. I can look it up and tell you. — I mean, the trees form a cofree kind of Hopf algebra, so I would have expected something trivial or very generic for these kinds of K-theory questions. — Well, one wouldn't expect pointed sets to give you the stable homotopy groups of the sphere spectrum either, right? — Yes, I agree. Okay, I will not ask what happens for graphs. Thank you. — Well, I don't know, but I think that would be an interesting question.

— I think I have a question: cluster algebras are very popular in physics at the moment, and there are people who say that cluster algebras are Hall algebras of quiver representations, or something like that. Do you have a comment? — I've also heard this — I've heard people trying to relate them. I think they arise in slightly different ways: you start mutating a quiver. But I don't understand the precise relationship of the whole story. — Thanks. — Any other questions? I always find it very amusing that this F_1 philosophy is almost the opposite of the way you do q-counting, where you would often start with a binomial identity and generalize it to the q-form; here you're doing the opposite, taking something that makes sense at the q level and going the other way. Anyway, if there are no other questions, then let's thank Matt again.
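Going back to the skew shapes described in the talk, here is a small sketch (our own code, coordinates, and orientation conventions) of a skew diagram viewed as a module over the free commutative monoid on x1, x2, with x1 moving one box to the right, x2 moving one box up, and anything that leaves the diagram going to the basepoint. The helper also computes the smallest power of the maximal ideal (x1, x2) that kills the module, the number appearing in the "third formal neighborhood" remark.

```python
BASE = "*"  # basepoint, playing the role of zero

def act(boxes, box, direction):
    """Apply x1 (direction=(1,0)) or x2 (direction=(0,1)) to a box."""
    if box == BASE:
        return BASE
    target = (box[0] + direction[0], box[1] + direction[1])
    return target if target in boxes else BASE

def is_module(boxes):
    """Check that the two actions commute, i.e. that the boxes really form
    a module over the free *commutative* monoid on x1, x2."""
    return all(
        act(boxes, act(boxes, b, (1, 0)), (0, 1)) ==
        act(boxes, act(boxes, b, (0, 1)), (1, 0))
        for b in boxes)

def nilpotency_degree(boxes):
    """Smallest n such that every monomial of total degree n in x1, x2
    sends every box to the basepoint (cf. the 'formal neighborhood' remark)."""
    n = 0
    frontier = set(boxes)
    while frontier:
        n += 1
        frontier = {act(boxes, b, d) for b in frontier for d in [(1, 0), (0, 1)]}
        frontier.discard(BASE)
    return n

# A finite skew shape, boxes given as (column, row) with rows increasing upward:
#   row 1:  X X X      boxes (1,1), (2,1), (3,1)
#   row 0:  . X X      boxes (2,0), (3,0)
shape = {(2, 0), (3, 0), (1, 1), (2, 1), (3, 1)}
print(is_module(shape))          # True
print(nilpotency_degree(shape))  # 3
```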
The process of counting extensions in categories yields an associative (and sometimes Hopf) algebra called a Hall algebra. Applied to the category of Feynman graphs, this process recovers the Connes-Kreimer Hopf algebra. Other examples abound, yielding various combinatorial Hopf algebras. I will discuss joint work with J. Jun which attaches a Hopf algebra to a projective toric variety X. This Hopf algebra arises as the Hall algebra of a category of coherent sheaves on X locally modeled on n-dimensional skew partitions.
10.5446/51279 (DOI)
Hi, thanks for the introduction and thanks for the invitation to speak. And happy birthday to Dirk — as I wrote, celebrating 60 years of achievement; I'm pretty sure your parents thought you were achieving things at a very early age. I got to meet Dirk around 1998; his reputation had preceded him — a friend of mine was his PhD student — and we started talking at the IGS and kept talking through the years. These interactions have been very important for me in my work, and some of the things I'm about to say wouldn't exist, or would exist in some other form, if I hadn't talked to Dirk.

Since this is going to be a little bit technical, I'll switch things up: I'll show pictures first and explain what I'm trying to make technical sense of. What usually happens is that after I've made technical sense of it, it's pretty long, very correct, and very mathematical, but maybe the pictures and examples will provide a guide through the categorical structure I'm going to introduce. The main characters will be Feynman categories; I'll define what that is. What I want to stress — can you see my cursor? I never know if that's true or not. — I can see it. — Good, then I can do this. The point is that these things are morphisms, and if you know something about categories, morphisms start out as sets; but then, just as for vector spaces, where the morphisms between vector spaces are themselves vector spaces, having more structure on the morphisms is actually very natural. This is important if you want to start summing morphisms, as you do in these Hopf algebra constructions. The main objective is an understanding of the theoretical underpinnings; something I've been starting to do recently is that, once you have that, you can actually do calculations — I'll have some examples of that — and it gives you unexpected links between algebra, geometry, and physics. One larger direction: if you want to go towards algebraic geometry, as in the last talk, you get functors and a six-functor formalism, so if you go that way it looks a little bit like sheaves.

All right, so how do I get from one thing to the other? Very much as we hear at this conference, you want to go from combinatorics to algebra. The point is that either you can look at the morphisms, and they will give you a colored or partial algebra, and then you get Hopf algebras and bialgebras, as we've been discussing; or you can look at representations — I'll give a very basic example of this a little later. In general these are functors into a target category, and that immediately opens up the possibilities for the target category: it can be combinatorial again, or algebraic — linear, so you can take sums — or geometric: topological spaces, moduli spaces, and things like that. And then there's an interaction between these levels, where you can take these representations and put them back onto the category — that is called enriching — and then, just as I said before for vector spaces, you can have spaces of morphisms, dg morphisms, and so on. If you stay in this framework, all the universal constructions remain true. All right, so then what else can you do?
There is something I call the decoration construction, which tells you that you can decorate things and make more examples out of examples. What I will talk about is that one way to get topological spaces is to do a W-construction — a name that's well known in topology — and you will get cubical complexes; if you apply this to the combinatorial structure of graphs, you end up with moduli space, which I find one of the most amazing things in this whole story. I will talk about this later; it is joint work with Clemens Berger. Then there is — and I think this is nice — a plus construction and certain hierarchies, where you start out with something simple, just an object; you apply this and get something simplicial — we've seen that here a little bit, and it will reappear — and then you get to planar rooted trees, which are basic to one of the Hopf algebras of Connes–Kreimer. You could also do something a little more difficult and end up with crossed simplicial sets — one flavor you can get is non-commutative sets — and then you get to rooted trees, the symmetric guys. You could stay in the planar setting, as we've discussed, or you could just start at graphs and get structures for those.

All right, so here are the promised pictures. What is the basic idea that a physicist can relate to? We have a Feynman diagram — this is in phi-cubed theory — and the idea is that I want to use this as a morphism. So what should the morphism be? It should go from a source to a target. What is the target? The target is basically obtained by contracting all the inner edges, and you end up with a vertex which just has the three external legs. And what should be the source? The source should be all the vertices, all the local structure: I just break all the edges, and that's my source. Then, if this is a morphism and I have a category — and that's the important thing — in a category I can compose morphisms, and here is a decomposition of the morphism. Already you see the familiar story appearing, because this piece of information here is a subgraph — this subgraph — and when I contract it, three of these vertices, u, v, and w, merge into one vertex, which I give a new name, r. Then I can put these vertices onto a new graph and contract that. The theorem, which comes later, is that this is a natural structure: it just appears without me doing very much. And this factorization actually inserts this graph into this graph. There is a little bit more — I wrote this up as a note: you have to be careful if you want this to be an actual category, and I was a little bit glib about just marking the vertices; you have to mark all the flags, you have to say, for instance, that u, v, and w are actually mapped to this vertex r. If you look at this example long enough, it actually reveals all the features that you need for a strict definition. And this is what I already said: the composition of morphisms goes like this — it will be insertion of graphs into graphs. So I'm inserting this graph into the vertex r, and I obtain this graph. And if I contract a subgraph, I get a factorization, because I can read off the subgraph: this will give me this morphism, and if I contract the subgraph, I get this one.
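Schematically — in a notation of our own, not the talk's — the factorization just described can be written as follows, and it is exactly the subgraph/quotient-graph pattern that feeds the Connes–Kreimer coproduct once one passes to the Hopf algebra side:

\[
\phi_\Gamma \;=\; \phi_{\Gamma/\gamma}\circ\big(\phi_\gamma\otimes\mathrm{id}\big)
\quad\text{for a subgraph }\gamma\subseteq\Gamma,
\qquad\text{mirroring}\qquad
\Delta(\Gamma)\;=\;\sum_{\gamma\subseteq\Gamma}\gamma\otimes\Gamma/\gamma .
\]

Here \(\phi_\Gamma\) denotes the morphism from the aggregate of vertices of \(\Gamma\) to the single corolla obtained by contracting all internal edges.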
This is also very much like in the last talk: looking at sub-objects and quotient objects. All right, so the subtleties that come with the marking are that you have to get the isomorphisms and automorphisms right — and you all know that there are all these factors with automorphisms that you have to take into account. What you actually have is that these stars, these vertices, are really vertices with automorphism groups, so we're looking at a groupoid, and one of the subtleties is that we actually get multiplicities coming out in these Hopf-algebraic structures.

The second part of the title is cubical complexes, and again I'll start with a picture. Here is a picture of — what might it be — let's say a rooted tree, though it doesn't even have to be rooted. I have edges, and I put labels on the edges, which are parameters from zero to one; so s and t live in an interval, and this is an actual square. Then I can send s and t separately to one or zero. If I send a parameter to zero, I just contract the edge and multiply the labels: if I send t to zero, I go over here and multiply b and c. If I send t to one, there are two things I can do — and this actually comes out of talking to Dirk: prior to talking to Dirk, I would have just marked that edge with a one and called it a frozen edge; after talking to Dirk and looking at these Cutkosky rules, I can also just forget this edge, or cut it. So what's marked blue I can forget, or cut. Then you can go around and see what's going on: if I go up here, I have two frozen edges, or two cut edges, and here I just multiply everything together. If that looks familiar, that's because — I'll talk about this later — there are several other ways to look at this, for instance the bar complex.

So now let me very briefly say what a Feynman category is. I said you have these basic objects; they will form a groupoid — think of the local vertices. The category itself will be a symmetric monoidal category, so it has a tensor product; for graphs, this is just disjoint union. I have an inclusion, which gives me the basic objects. Then I need some notation: if I have V, I can look at the free symmetric monoidal category on it — its objects are just words, and its morphisms are words of isomorphisms, words of automorphisms; any isomorphism is a word of isomorphisms — and then I include this here. That's already the basic structure. I call such a triple a Feynman category if this inclusion, on the word level, is an equivalence of categories — which basically means that every object is a tensor product of basic objects; for the graphs, I'm looking at aggregates of stars — and if I also have a second equivalence of symmetric monoidal categories, which I'll explain in one second. This is the main axiom; this is what will make all the constructions work, so it's not just any monoidal category: there is something special going on here, and it is sometimes called a hereditary condition. This is the thing that makes things work. The last axiom is technical; I just put it on here because it's needed for some computations to actually work. So what's the basic consequence? Let me try this — it's the first time I've tried drawing this here.
What this is saying is that if I have any morphism, say from X to Y, then by the first axiom I can write Y as a tensor product of basic objects — I'll write star_v to remind you of the vertices of the graph. What this axiom then says is that I can complete this by looking at the fibers of the morphism: there is another isomorphism, from X to a tensor product of objects X_v, and a map which is a tensor product of phi_v's, where each phi_v is a morphism from X_v to star_v. So — going back one slide — this says exactly what's written there: I can decompose any morphism into these morphisms which go from a more complex object to a simple object, from many stars to one star.

What does this mean for physics, or for graphs? First of all, to make it rigorous, you can work in the so-called Borisov–Manin monoidal category of graphs, restrict to those graphs which are just aggregates of corollas, and then get graphs back as the underlying graphs of morphisms. These three lines just explain that there is an actual, very strict categorical setup where the pictures I'm drawing make strict sense. In physics, how do you think about this? The basic objects are the vertices of the theory, and the morphisms are the possible Feynman graphs — that's why I used phi-cubed. Of course, in a physical theory you can embed one graph into another graph and get another graph of the theory, or you can read off — pull out — these subgraphs, and that this is all you can do is clear from the expansion of the Feynman diagrams. This is, of course, the main thing that underlies the Hopf algebras of graphs. One interesting thing you might ask is: what are the morphisms to a single vertex? That just means any graph lying over this vertex, and these are usually the terms in the S-matrix. And now you know why I don't have handwritten notes.

All right, so let me clear those. Here is again the same diagram of composing two things: basically I compose two morphisms in this category of graphs; they have underlying graphs — I write a double bar to denote that they are the ghost graphs. They are actual graphs, but they do not characterize the morphism completely; what they do characterize — and this is important — is the isomorphism class. Looking just at isomorphism classes, this diagram turns into this diagram, and what I see is the factorization of one graph into subgraphs and the quotient graph. So what are the basic morphisms in this category of graphs? These basic morphisms correspond, if you wish, to connected graphs — a large class of examples is just connected graphs — and then these will be disconnected graphs. But these graph morphisms decompose further: if I contract the subgraph, I can do it one edge at a time or one loop at a time, so even these morphisms decompose into elementary morphisms, as we should call them. The elementary ones are: you have two vertices, you join them by an edge and contract the edge; you have two flags, or legs, of a single vertex, you connect them into a loop and then contract it; and the last one is important for non-connected things, to make them connected.
So, for the non-connected things: you can just merge them together. So this has at least three applications. One is something we saw in Matthew Haydnath's talk: this is getting props to work. So we're taking two things and putting them together. The other one is geometric. This is actually an incarnation of the connected sum of two things. So I have two things which are not connected and I put them together to get the connected sum. And the third thing: these are important to write down the BV equations as master equations. But let's just include them. So after saying that they're good, let's forget them for a while. All right, so we move on to the next thing: what are representations? And representations are functors, and I can take the functors on the Feynman category or on V itself — and don't worry, I'll have a quick concrete example in a sec — and look at just the restriction to the elementary objects. And those things I will call ops and mods. And there's a reason for that. There is a trivial functor. So this is just — you know, mathematically it's saying almost nothing. It's like, I always have a monoidal functor where I just send everything to the monoidal unit. So if it's categorical — sorry, if it's set theoretical — just think of mapping everything to a point. If you're looking at K vector spaces, just map everything to K. All right, and then how can I think about this? So if I have a graphical category, I can think, as physicists do, of graphs as Feynman rules — sorry, these representations of a graph category are giving you Feynman rules. So first I have to fix the target category. So fixing the target category, I'll be in a simple setting here: I just have a vector space, a vector space of fields. And I give a quadratic form, which gives me propagators, which is the inverse of the quadratic form I have here. And then what does the functor do? So for each basic object — which was, remember, just the one-vertex graph; I'll call that star S, where S are the legs of the graph — this should give me something built out of my vector space. W, say; so I'm associating the following thing to it. I pick a vector space W and I associate to each of these guys — so I'm defining this functor O — the vector space W tensor S. And then the graph morphisms: the morphisms of the category are given by graphs. And if I want to know what to do with a graph, that should give me a morphism of these vector spaces. And what I do is I just contract the tensors with the Casimir. So let me write just a very quick example. So if I start out with just two vertices like this — one, two, three, four — then I can map that over to one, two, three, four. And what I did is — call this five and six — I put together five and six and contracted. And what is the operation I get? I have to be able to do this live. So what I want is an operation from W tensor four to K, and I just take the sum over i and j of Y(phi one, phi two, phi i) times g^{ij} times Y(phi j, phi three, phi four). So I'm just contracting in this place and this place. These are the places that are indicated exactly by these edges. So it's a straightforward thing. And then actually you can check that this gives you a nice functor. So this is functorial, and in this way you can think about Feynman rules. What I'm doing here is I'm always mapping to K, and in the algebraic situation, if this is a non-degenerate form that's good enough — I can dualize — otherwise you have to be a little bit smarter.
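As a sketch of the general rule behind this four-point example (same notation as above: Y is the correlation function assigned to a corolla, the phi_i run over a basis of W, and g^{ij} is the inverse of the quadratic form, i.e. the propagator), gluing two legs into an edge contracts the corresponding tensor slots:

\[
(Y \circ_e Y')(\ldots) \;=\; \sum_{i,j} Y(\ldots,\phi_i,\ldots)\; g^{ij}\; Y'(\ldots,\phi_j,\ldots),
\]

and a morphism whose ghost graph has several edges is handled by doing this once for each edge. The example just given is one instance of this with a single edge.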
And being a little bit smarter is of course what many of these things are about. This is just a slide saying: okay, if I have these graphs, as I said, you can decorate these graphs — and operadic people will know this, but non-operadic people not so much. I just want to highlight that I did talk about props, and Martin Heierach's talk was actually something about wheeled props. So even if you don't know this whole zoo, they do appear naturally as something important. And for me, these modular operads and cyclic operads will play a large role, because the modular operads are related to moduli spaces of curves. All right, so then how do I get my general relations going? So categorically, if I have functors, what I can look at is adjoint pairs. And what actually happens if I have a functor between two Feynman categories — so maybe I should write this here: if I have an F going between one Feynman category F and another Feynman category F prime — what I can do is I can push forward and pull back modules, from F prime to F or from F to F prime. And so this goes into the computational thing. So category theory actually tells me how to compute these things, namely the push forward and the pullback. And if you care about the six functor formalism, then you can do other things where you have to take a right Kan extension as well. But let me make this a little bit more concrete. So let's see what happens in the most trivial case possible, where I just have one object in V and its identity. Then if I look at the symmetric monoidal category or the monoidal category — so let me start with the monoidal category. The free monoidal category is just words. The letter is one. If I repeat one n times, all I'm getting is the number n. So I'm getting the objects as the natural numbers. If I do the free symmetric monoidal category, then if I have the object one tensor n, this has — I can act by just permuting. This is one, one, one, one, one. And I can permute the ones back and forth. So this will have an action of Sn. And this is a typical example of a groupoid. So another way to imagine this, to make contact with what we had before, is you think about the star which is numbered from one to n. All right. And then the V modules are simply objects of C, because I just get to map the one object, one thing here to there. And then since this is free symmetric, nothing more happens for the V modules. All right, sorry. Now to look at what happens: if I want to extend that, then I can just take the free extension and say this V tensor is F. And that's what I was just saying. And what I get is just the representations of V, namely the groupoid representations: my objects with morphisms in V. A special case is, if I look at this special category, we have one object and the morphisms here are just elements of the group. And now the representations are really group representations. And if you go through the calculations, what you see is: if you have two groups, you have a functor for the categories — it just says that you send the morphisms by this thing and the only object to the only object. And then pullback is restriction, and push forward you can compute as induction. And the adjointness is known in representation theory as Frobenius reciprocity. So what this general theorem says is: this is true for all Feynman categories. So if you find some functor between Feynman categories, you have restriction and you have induction.
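Concretely, the pushforward here is a left Kan extension and the pullback is precomposition, so the adjunction and the pointwise colimit formula are the standard categorical ones (written out only as a reminder, for a functor f between Feynman categories and an op O; nothing beyond standard category theory is being claimed):

\[
f^*\mathcal{O}' = \mathcal{O}'\circ f, \qquad (f_!\mathcal{O})(Y) \;\cong\; \operatorname*{colim}_{(X,\; f(X)\to Y)} \mathcal{O}(X), \qquad f_! \dashv f^* .
\]

In the one-object group case this specializes to induction and restriction, and the adjunction is exactly Frobenius reciprocity:

\[
\operatorname{Hom}_G\bigl(\operatorname{Ind}_H^G V,\, W\bigr) \;\cong\; \operatorname{Hom}_H\bigl(V,\, \operatorname{Res}^G_H W\bigr).
\]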
This is going to be important later on. Maybe I'll skip this. If I'm looking at the clock, I'll skip this and directly make these comments: that there is this layer where you can look at very combinatorial things. And this is what the slide would be about: if you look at just surjections, what you get out — you can compute, this is again a computation — is commutative algebras as representations. If you do the non-symmetric analog, you get associative algebras. If you actually look at all finite sets, you get unital commutative algebras. You can also look at the other part instead of surjections: you can look at injections. Then you're in the realm of FI-modules, which are popular now in representation stability. And they were introduced by Church, Farb and Ellenberg. If you add symmetries to the non-symmetric part, that's where the crossed simplicial groups come in. And this is also where the non-commutative sets come in. And I have a distinct feeling that this is very much related to an earlier talk: that non-commutative sets are maps where you have orders on the fibers. All right, so getting to one of the main actors, namely the Hopf algebras of Connes and Kreimer, or the generalization of that. And this is work that's just been published this year. Unfortunately, very many pages — or fortunately and unfortunately — but to make these things strict is a little bit difficult; the general idea, though, is easy to state. So what was the main point? The main point was: in a category I can compose. So I can also decompose. So naturally I can write down a coproduct. Now you see I have to take some — so I have to enrich or take the free abelian group — the coproduct over all decompositions of a morphism. And you already saw, in the basic example I gave, that the decomposition is exactly this subgraph, cograph. So if you want to think about it that way, that's perfectly fine. And this generalizes to any category which is actually finite, where this thing is finite. Now you can ask yourself the following question, namely: if it's a monoidal category, I also have a multiplication. So I get an algebra and a coalgebra structure. And the question is: is this a bialgebra structure? And that turns out to be a little bit subtle. And the answer is basically yes, but you first have to go to isomorphism classes. So you call two morphisms isomorphic if they're related by an isomorphism. And you saw that implicitly in the last talk, because there the functions were defined on isomorphism classes. So that actually factors exactly through this quotient. Then the main theorem is that if you started with a Feynman category, indeed the bialgebra equation holds and you have a bialgebra. And this bialgebra is usually not connected. And if you want to get the usual things that you're used to, which are connected, you have to take a quotient. And then, under explicit checkable assumptions, there is a canonical quotient, which is indeed a Hopf algebra. And in the non-sigma case this is easier, if you don't have automorphisms of your objects. So say you're looking at planar stuff; then it is already a bialgebra before going to isomorphism classes. And then the quotient is also very easy. You just take the identity morphisms minus one, where one is the identity of your monoidal category. So it's the identity of the identity. And the reason is that if you decompose a morphism, you see exactly that you'll get this decomposition: for any morphism phi, you might have the identity.
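A minimal way of writing the coproduct that was just described (on the free abelian group on morphisms, assuming the decomposition sets are finite as stated):

\[
\Delta(\phi) \;=\; \sum_{\phi \,=\, \phi_1\circ\phi_0} \phi_0 \otimes \phi_1, \qquad \phi\cdot\psi \;=\; \phi\otimes\psi,
\]

with the product coming from the monoidal structure. The theorem quoted above is then that, after passing to isomorphism classes, these satisfy the bialgebra equation \(\Delta(ab)=\Delta(a)\Delta(b)\).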
So, going back to the identities: if I'm going from X to Y, I can always split by going through X, or I can split by just going through Y, the same morphism. And so I'll get these two terms. So in Hopf algebra terms, you see that this thing won't be primitive. It will be skew-primitive at the most. And that's actually an interesting structure that my student is working on currently. And now — I can't click that way. The incredible thing to us when we computed these things is: I did something quite general. So now I can feed in my surjections and look at what I get. And I actually get out Goncharov's Hopf algebra for multiple zeta values. I can put in a certain enrichment of surjections — namely leaf-labelled trees — and I get the Connes–Kreimer Hopf algebra of trees. I can do this for this category of graphs, which I had before, and I get the Connes–Kreimer graph algebra. And this was maybe the most surprising: if I look at this example here and take care of signs, I actually get a Hopf algebra that Baues invented in a completely different setting, to look at double loop spaces. So this is some simplicial structure that is hidden in the Connes–Kreimer algebra. And here are the associated pictures. I hope the resolution of your monitor is good. You know, usually I project this onto a larger screen. So this is the Hopf algebra structure you're used to. So this is a rooted tree. This dotted line is an admissible cut. Whatever is on top falls off. And so you see, if I label everything, then I have to label all these cuts. And so I did this here. So this is the planar version of doing this. And you see now, I can't just stick with labels from one to n. I have to label with arbitrary sets, but this will work out nicely. Another thing which I can do is forget the labels and then cut, and you see I have no labels anywhere. And this could mean two things. Either I'm in the planar case, where I can just label one, two, three, four, five — oh, this one I may label this way around. But in the planar structure, this has an automatic labeling. So I can look at this picture in the planar case. Or I could say, well, I forgot about this. So mathematically that just means I took the coinvariants, and then I look at the coinvariants of this thing. And then I'm in the non-planar case. That's the fully labeled, planar or non-planar. And I learned this from a talk, of course. Another thing that you could do is — well, I could say this is important: this i is the same as that i. So if I want to keep track of the labels this way, I have the automorphism group permuting these labels, and that will permute these factors. So I can look at the coinvariant setting. And now comes the all-important quotient. So what happens here: I have these labels, and we've seen in talks before, we've seen the actual Connes–Kreimer algebra without leaves, without legs. And so what I have to do is I just have to pull in all these legs. When I do that, everything is fine, except if I only have a leaf here, which I'm allowed to do here — see, I cut just through some leaves. And what happens there is this goes to one. And that is exactly taking this quotient that I had before, making these into the unit. So that's exactly what happens there. All right, so that explains this thing. And just summing up: in the abstract setting we can produce this. We get the Connes–Kreimer algebra, Baues's algebra for double loop spaces. There's a non-commutative graded version. And we have the threefold hierarchy: non-commutative, planar, commutative. And then there's an amputated version.
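For orientation, the picture with the dotted admissible cuts is the usual Connes–Kreimer coproduct on rooted trees, which in the standard formula reads (with P^c the forest that falls off above the cut and R^c the part containing the root; quoted here as the textbook formula, not as something new):

\[
\Delta(T) \;=\; T\otimes 1 \;+\; 1\otimes T \;+\; \sum_{\text{admissible cuts } c} P^c(T)\otimes R^c(T).
\]

The labelled, planar and coinvariant versions mentioned above are variations of this same formula.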
All right, and then the decorations, which I will discuss — I think I'll have some time for that, maybe not that much. If you now want to make this structure more elaborate, what do you do? You want to put in some labels on the vertices and on the edges of the graph. And there are two general constructions for Feynman categories which allow you to do that. Then some more remarks: the reason Baues's and Goncharov's work appears is that the simplices form an operad. So this is just a simplicial structure that you have. You can go from this to looking at cooperads with multiplication. And from this, you can deform. And when you look at this deformation — so this might be of independent interest — you directly see deformations, sort of deformations in the sense of Gerstenhaber, and these deformations are q-deformations. So instead of taking this quotient directly that I had, there is an intermediate quotient, where you have a q-deformation, which basically sends each of these leaf labels just to q. So this would give me a q squared. So that's still graded, and that is interesting in its own right. And the last remark I'll make, which also relates to many things we've seen here before and things that are going to come afterwards: so now remember, the coproduct just said, I do a factorization. So if I do multiple factorizations, what I get is composable maps, which are exactly what is sitting in the nerve — and that's actually also where these simplices come from. So an iterated coproduct gives me exactly this element in the nerve. Okay, so here is the slide that tells you that what you expect is true. Once you get to isomorphism classes, the morphisms are actually given by their ghost graphs. So then this factorization of the morphism is exactly looking at the subgraph and the cograph. And then at this point, let me mention one other quick thing — and this is something that is for the future: you start seeing the comodule structure appearing, because there are these special elements. So if I start factoring a morphism that goes to a special element, I get something that's just a general morphism and, again, a morphism that goes to the special element. So you see that these things here form a comodule over the Hopf algebra. And this is sort of, if you want, the core Hopf algebra — so that's the beginning of decorating and looking at core Hopf algebras. And the other thing I can do is now I can invert this. So if I have an element like this and I have a general morphism here going to Y, I can just combine these, and these will be exactly the B plus operators, which take an element here of this special form — and I can apply it if the colors are correct, so if this actually has the target Y — and just make that into the product. And then the last thing: how is this really the B plus operator? Because if you think about it, this gamma composed with phi one was a disconnected thing. So in the Kreimer tree version, this actually takes the forest and makes it into a tree. And in the more general thing, you can plug in your primitive elements up here. All right, this is just a computation showing that you do get these multiplicities. If you start labeling everything — so if you start with this graph — I'm certainly not the first one to tell you what the coproduct of this is, and other people know this much better than I do.
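The characteristic property of the B_+ operator alluded to a moment ago is, in the Connes–Kreimer setting, the 1-cocycle identity (again quoted as the standard formula):

\[
\Delta\bigl(B_+(x)\bigr) \;=\; B_+(x)\otimes 1 \;+\; (\mathrm{id}\otimes B_+)\,\Delta(x),
\]

which is precisely the statement that grafting a forest onto a new root interacts with the coproduct in the way just described.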
And the only point I'm trying to make with that computation is that this now appears naturally, in the same format — I do not have to introduce another ad hoc formalism or anything, just my Feynman category of graphs, which I had before. If I look at the factorizations, I will get two such factorizations. And that is because I can tell the difference between this edge three, three prime and the edge two, two prime. This will give me two different factorizations. And you see, now it's important that I labeled everything, because this graph, of course, as an abstract graph, just looks like any other abstract graph, regardless of whether it's two or three. All right. Now, this one is an explanation of a relation which is also nice. So again, I'm just giving details and pictures of the general story, how they apply to things you may know and love. So if you like the Hopf algebra of Goncharov, you know that the way to write this down is with half circles and segmentations. And I said this has some simplicial feature and is actually related to the Connes–Kreimer trees. And now we actually know exactly why, and how it's related — Goncharov had some guesses about how these two things are related. And the idea is you take this half circle and you put this tree in here. So this I learned a long time ago from people working in the field. And now what the Hopf algebra does is it decomposes these trees and cuts these trees. And the dual picture is sort of taking these half circles and taking the — Herbert Gangl, that's the name I was looking for. Herbert Gangl taught me this, and this I learned from Francis Brown in his lectures in Cambridge. So you see the interaction working. So I'm really happy about these things. So you see the segments here which cut this tree, and this is a duality. But this is actually something very deep which goes back to Joyal. And there is a duality between double pointed — so intervals, double base pointed maps — and maps in here. And you can find a nice version of the explanation in the papers of Batanin and Bara. And that's where I learned this from. All right, so as I said, there are several things you can do with this stuff: you can decorate, you can enrich, and you can do this W construction. So maybe let me just say something about the decoration. So the decoration — because I need that — the decoration says I can actually decorate vertices. So for the graphs, what's the decoration that one would care about? A cyclic order. Why? Because this makes the graph into a ribbon graph. So if I look at this graph that I had — one, two, three, four — it just has a star with four edges. And I can give this this cyclic order or that cyclic order. So either one, two, three, four, or one, four, two, three. And depending on this, I get different ribbon graphs. And now if I do these decorations just through the general theory — this again is just a general category theory slide which says I can mix this with my push forwards and pull backs and decorations — and now let me skip that. Let's apply that to a very nice situation. So I can look at the following subcategory, where I just look at graphs which are trees — not planar trees, just trees — inside all of the graphs, say where the basic morphisms are connected. And then I can decorate this in two ways: either not at all, and that's the trivial decoration.
I can push forward this decoration, and by general theory that gives me a way to look at this category with the decorations, and this then has a cover where I resolve these decorations. And what I get out is actually something that's known. This is the category for modular operads. So this is graphs with genus markings. And now, this is a computation — that's important. This is a computation; I'm not defining something, I'm computing something. Same thing if I take the cyclic decoration — what I just told you, that I look at sort of these ribbon graphs — but that's not quite right. So what happens then: I take trees with cyclic orders, and then I can unfold the tree into the plane. So I get planar trees. So this is what is called the non-sigma cyclic category. And then this general diagram says I can also look at — so they have a relationship up here. And down here I can push forward this decoration and go up here, and I get something new, which is a nice category, the category for non-sigma modular operads. But this has meaning for many things. And actually, maybe I'll say this: this also says something about fibers of morphisms, so that I don't have infinite chains if I control this g. So what are the nice things? We get back these moduli spaces. If you push this forward — which is now a calculation — you get types of open surfaces. And then — so thanks goes out to Karen Yeats on this one — now you can compute this. A very succinct way of computing this is to use combinatorial knowledge, which means that the spanning tree graph is connected. Doing this actually allows you to calculate the push forward, which in categorical terms is a colimit. But this now is just something you can sit down and calculate. And actually it's a calculation which is nice with chord diagrams, if you wish, or some other format of this type. And you get out exactly — this is true — what you get out is what you expected. And then this immediately applies to something also somewhat combinatorial. So this is this relation between combinatorics and algebra. If I just do the little game that I did before with my correlation functions, you can figure out that if I just look at trees or cyclic trees, I'm just looking at surfaces. And what I get is 1+1 dimensional TFTs. And then there's a nice theorem saying: if I algebraize this, this is the same thing as looking at a Frobenius algebra. And then going up here, I get an open TFT. And it is known that this is just — for me — an open/closed 1+1 dimensional TFT. So something happened here. I can compute this all combinatorially, and I can then do the algebra. And I get a nice theorem about these spaces of fields. All right, and now I want to do this in a higher version, where the spaces of fields were just algebras, but now I want to get actual spaces. And what lets me do that is the so-called W construction. And these are technical details. I need something which is true for graphs, because there is a commutation relation: if I contract two edges, I can do it in any order. There's a square, because I get two different intermediate graphs. And then I can throw on the categorical machine again and compute a colimit. But what this does, in a non-technical version, is: for each graph with n edges, I glue on one cube. And just like I had in the beginning, I have two boundary maps, namely one where the parameter goes to zero, which means I contract.
So that's going to zero; or I mark. And as I said before, marking means either freezing or deleting. And then I get a complex by gluing along these edges. And again, this is interaction with Dirk — and again, thanks to you, Dirk. So I know that this has something to do with Cutkosky rules, where I do this, and there's research ongoing thinking about what else one can get from these Cutkosky rules. So here's the picture again that I had in the beginning and already discussed. So now we know exactly what this means. This has two edges, so I get a two-cube; this has one edge, so I get a one-cube; and the boundaries are as explained. Now, what is the cubical structure? I think I started five minutes late. So if I'm not cut off directly, I'll spend the five minutes. So here's the other thing: to see that this is actually a simplicial structure, you can write down — so you see now these sequences of morphisms, and you see I'm contracting and splitting sequences of morphisms. So this relates to the simplicial structure of the nerve. And if you know what the bar complex is, you see that something is happening in the bar complex as well, because I'm just removing bars, which multiplies these a's and b's as markings. Here's another picture where I apply this thing to something more interesting: now I'm blowing this up — so I'm looking at trees and associating cubes to trees, and it's well known, if I do this for rooted trees — it probably goes back to Boardman and Vogt — that I get associahedra. And this plays a role for the cyclic Deligne conjecture, which was in my abstract. So this is a picture for the cyclic Deligne conjecture, and there we were marking with one as a frozen variable. And in general, as I said, what you do is you just take a sequence and you associate parameters t1 up to tn to these arrows, and if there's a zero, you contract them; if there's a one, you freeze them. That's too technical. And now I can apply that to this diagram I had before. So remember, here I'm just including planar — sorry, here I'm just including trees into all graphs. Up here, I make them planar. Then this is already interesting, what's up here. It's not planar graphs, that's wrong. It's not ribbon graphs, that's wrong. What it is, is what are almost ribbon graphs, and their vertex types are exactly given by the surface types, which is interesting in its own right. And so in work with Klan and Sparrow, we computed these things. So what can you do? You can make this topological and push it over, or you can go over and make it topological. So if you push it forward and make it topological, you'll get metric almost ribbon graphs, and you will get back the planar-kran-savage multiplication. What you actually get is something contractible, and this is where these cones come in that we've seen in several talks, and also last time just in the comment section. When you have this cone with the full mass, what you're doing is exactly this type of thing. So you have a cell and you're coning it off with one point, and there is a maximal cone point for everything, which makes this contractible. But if you throw out the cone point, you actually get the simplex, the base simplex, back, and this is compact. If you do this upstairs — you make it topological and then push it over — then something magical happens. You immediately get the moduli spaces of curves. And that was a slight lie.
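A rough way to summarize the cubical model of the W construction as it is being used here (a sketch, not the full categorical definition): one glues in one cube per graph, labelled by its edge set, with the two kinds of faces given by contracting and marking,

\[
W \;=\; \Bigl(\coprod_{\Gamma} [0,1]^{E(\Gamma)}\Bigr)\Big/\!\sim, \qquad (t_e=0) \,\sim\, \text{contract } e \text{ in } \Gamma, \qquad (t_e=1) \,\sim\, \text{mark (freeze or cut) } e,
\]

with the automorphism groups of the graphs acting throughout — which is where the multiplicities from the beginning of the talk reappear.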
So — the slight lie: you don't immediately get them, but you get something that you can contract these moduli spaces onto. So it's a strong deformation retract. And you can do this even on the cellular level. So there is an algebraic version of this, if we have time. And now I'll close with a few nice pictures. So here is the picture of sort of getting the moduli space, the combinatorial moduli space. You just take these nice graphs — so the theta graph and these graphs, these loops are cycles — and you get the usual picture of your triangle. And now you see, I contract one thing and that has a boundary, and that will then continue on this way. So this is if I do the construction downstairs. If I do it upstairs and push forward, I only get this. And this is a cubical complex. Notice this is not simplicial; this is cubical — it just has one-cubes, and higher ones would be more cubes. And how do these fit together? They fit together nicely. So there's a contraction which we can give — again with Clemens Berger — so that you can put this into the spine here and contract it. And such constructions are known from Culler–Vogtmann, Igusa and so on and so on. But we have a very nice new combinatorial way of just writing it down, very simple, very straightforward. You can exactly see what's going on. It's a linear thing and we can describe that. So again, these are not defined by hand. So, last slide here: what you can do then — and this is work with Javier Zúñiga — so I want to say something new that maybe I haven't said. So I forgot about this. You can start truncating these things. And that's actually also what we did in the cyclic Deligne conjecture version. And once you truncate these things, you start blowing up these cells and you can get a full blow up of this complex. And then you can blow this down to the Kimura–Stasheff–Voronov compactification and the Deligne–Mumford compactification. So this guy is the one that appears in string field theory, by Zwiebach; this is the one that algebraic geometers care about; and this one is the combinatorial one I actually already discussed. And I'll leave you with a nice picture of a blow up, or a relative blow up. So this is my simplex. And then I can do stages of blow ups. You see this is not a cube, but it has this cubical face. So as I said, a simplex cross a one-cube. I glue that onto the faces. And then I glue these cubical things onto the lines. And that was already — so this is actually one simplex and two cubes. And then I'm done. And what I get out is this wonderful picture. And that is one of the known polytopes: it's the cyclohedron. All right, so I'll stop with that. All right, are there questions for Ralph? I have a quick question regarding graph complexes. I mean, there are many different graph complexes. You mentioned ribbon graphs, but, like, in the Kontsevich world, like the Kontsevich commutative graph complexes — where do they fit in this point of view? So that's — it's in the paper. I can give you — so that has been discussed. But what it is, is this is very similar to the story that you had for getting the external legs of your graph in the Connes–Kreimer tree picture. So there's a general story, and there's another colimit you can take. This makes it a universal construction. And then you immediately get out these graph complexes of Kontsevich and what Willwacher is doing with them. So do you have to pick a particular Feynman category and do this construction to get the different graph complexes? Right, exactly.
So you can — and this is this bit with the decoration. The Feynman category needs to be a little bit special so that you can actually contract these legs, but the graphical Feynman categories are nice. And then you can decorate them with all kinds of things you want. For instance, these cyclic orders, or you can make the edges odd to get, you know, fermionic things. And I could have mentioned — I didn't do this — so I just had a cyclic order, but I can also make this sort of anti-cyclic. And then you get these different graph complexes, like in Kontsevich's, where you think about exchanging two legs and either they're symmetric or they're anti-symmetric, to get a sign. So yes. Thank you. Right, there's a question in the chat from Jonathan, who doesn't have a mic. So he's just written it. And he says: your abstract mentions cones and simplices — sorry, cubes and simplices. What about cones? Are they important? Right, the cones — they actually appeared here. So the cones are important. So the thing is that if you do this construction here, then you see here's the cone. And what happens is the W construction is actually a cofibrant resolution, which means that if you started out with something that was a point, you get something that's contractible. And the cone is contractible. But the funny thing, though, is that if you remove the cone point, it doesn't need to be contractible anymore, because it retracts to the base. And this is where the cones appear. So then somehow I had this feeling — because just previously somebody was asking, how do you disassemble these things, these simplices? And then in the talk of — I'm going to butcher the name, so I'd better look it up — that we had on the, sort of geometric, the geometric decomposition of — I can't find it — the geometric decomposition of these integrals, you also have this cone, and then you dissect, with these regular right angles, you dissect the base of this cone. And then you look at that. So I have a feeling that this kind of picture fits in perfectly with that. He follows up by asking if you have an understanding of the product of two cones. The product of two cones. I haven't thought about that, but probably yes, because this is something — because this cone comes in from a push forward here, that automatically makes this cone happen. And I can see that — so there must be some interesting truncation going on there, because you get the cone here. So let me go to the graphs and say this here. You have these — in the ribbon graph, you have these parameters, say a, d, b here — and the cone comes about because you can set all these things to zero. So you know what the cone point is. But there's only one cone point, because there's only one zero graph, there's only one zero — that's what everything contracts to. So I could venture a guess as to what happens there. All right, Megan has a question. I think she should be able to unmute herself now. Okay, that's not working. How about Alex Takeda? Hi, can you hear me? Yes. Yes. Okay, yeah. So you mentioned quickly — you mentioned before this push forward, the six-functor formalism. That was very interesting. I just wanted to ask you if there are any examples that I would have known of push forwards — like, if I give you an A-infinity algebra, can I push it forward to something over, like, a modular operad, something of the sort? Is that —? Yes, you can.
And so the point was that what I computed was sort of the modular envelope — I guess you know what that means — of an actual algebra. That's — what was this? That's exactly — I'm looking for something in particular. Hold on. Okay, so I'm looking for something that's related to the TFT business. Right. So I was looking at this. The point — what I was looking at — is sort of just, I was just looking at algebras related to this TFT business. But I could have started, as I said, just looking at cyclic associative. What I can look at is: I can take an infinity version of that and then push forward the infinity version. So I can resolve here. And then I would get A-infinity algebras, and then you can play the same game with A-infinity algebras and do the push forward. And yeah, that can be done. Yes. Okay, so — because I would imagine that, for instance, in the infinity case, to push it forward we need an A-infinity algebra and something like a pairing. So does this give me —? Right, I'm sorry, yeah, I should have said cyclic A-infinity, yeah. I see. So it gives me — okay. So the information of the pairing is already there. Right. Okay. There is a diagram which I don't have. I mean, I could add here just the one that comes for, sort of, just for trees. Yeah. And that has a relation. So you can take the free thing with that too, or you can — if you're happy — I mean, part of the problem is something that I said before. So the question is whether your infinity thing has a non-degenerate form or not a non-degenerate form — without any kind of integral, I don't think you can do something, right? I mean, you can put it onto graphs, but then you have to be able to contract indices, right? I see. So you might get something like a sum over all possible contractions. Right. And then it's either going to be trivial or free, or however you want to say it. So if you don't put that information in. I see. Okay. Thank you. But maybe one last thing with that: if you do trivial or free — one thing you can always do, and we've seen that here, if you don't want it to be trivial or free, is you just put a q there and sort of just count the number of vertices, number of edges. I see. So you can do some genus or maybe number of edges. Right. Exactly. Something like that. Right. Okay. All right. Time is getting on. I imagine people have to get onto the next thing in their lives. So let's have one final question from David. So, wonderful mixture of the very concrete and specific as well as the categorically abstract. Can you just remind me what your ghost graphs have lost? When you start with the very concrete things, what have you thrown away and what do they still know? Right. So that I can tell you. Let me just try — I actually prepared a slide with nothing on it, so I can write on it. If I were smarter, I could just click there. So what happens there? Let me do exactly the example I had — sorry, look at this example. So where I glue this together, just to a two-point thing with this graph. Right. But if I just have this ghost graph here, this guy, I wouldn't have — see — so it keeps some information, but it keeps the abstract graph. It doesn't know — you know, what I'm drawing now is what the morphism knows that the ghost graph doesn't know. So I have two vertices here. The ghost graph will know which one is which — sorry, the ghost graph will not know which one is which, but the morphism will. So I need to identify this vertex and that vertex.
And maybe like, sorry, maybe actually maybe make two legs here. So, and then the other thing it forgets is, you know, I have a symmetry on these two guys, which would be the symmetry of these two things. And so the ghost graph doesn't remember which one is which, but the actual morphism will remember which one is which of these. Okay. Thank you. All right. Let's thank Ralph for his talk. Thank you. Thank you.
Categorical Interactions in Algebra, Geometry and Physics: Cubical Structures and Truncations There are several interactions between algebra and geometry coming from polytopic complexes as for instance demonstrated by several versions of Deligne's conjecture. These are related through blow-ups or truncations. The polytopes and their truncations also appear naturally as regions of integration for products, which is an area of active study. Two fundamental polytopes are cubes and simplices. The importance of cubes as basic building blocks appears naturally in various situations, on which we will concentrate. In particular, we will discuss cubical Feynman categories, which afford a W-construction that is a cubical complex. These relate combinatorics to geometry. Furthermore, using categorical notions of push-forwards, we show how to naturally construct Moduli Spaces of curves and several of their compactifications. The combinatorial ingredients are graphs and there is a universal way of decorating them to study different types. This makes the theory applicable to several different geometries appearing in Moduli Spaces and Outer space. With respect to physics, there is an additional relationship coming through Hopf algebras which in turn also are related to multiple zeta values. We will discuss these constructions and relations on concrete examples.
10.5446/51280 (DOI)
I apologize — this is work that was done really a year ago, but the dynamics of the virus were such that those of us who didn't have a natural place to go found ourselves working with whomever we could, and I was working with some mathematicians, and so I didn't think about this. What's involved here goes back to a paper of Dirk. Can you see my little arrow when I move it? So I'm pointing to a paper here of Kreimer, which is an old paper — physicists never look at old papers, but mathematicians do. This one goes back to 1991, and basically it talks about the amplitude for two loop graphs, and this paper had really a lot of influence on my thinking, but not necessarily in the way that I think Dirk would have wanted it to. Basically the first takeaway, which I think is worth keeping in mind when you talk to a mathematician — you want to convey what's interesting to physics — and the takeaway which was conveyed by that paper was that two loops are already an interesting physical problem. That is, the amplitude for two loop graphs is an interesting physical problem. Added to that is the fact that algebraic geometers love cubic hypersurfaces. It's a little bit like the story of Goldilocks and the three bears. Degree two hypersurfaces are too cold and degree four hypersurfaces are for many purposes too hot, but degree three hypersurfaces are just right, and an algebraic geometer feels they can do something with a cubic hypersurface. So even though the story is complicated, it's approachable. The second takeaway, which is a natural thing for an algebraic geometer: we're dealing with the second Symanzik, we're dealing with the amplitude as a function of external parameters like masses and momenta. And the physicist immediately wants to specialize to the situations of interest, and being physicists they know the situations of interest. But mathematicians have no idea which configurations of masses and momenta are of physical interest. And so the natural thing for a mathematician to do, particularly an algebraic geometer, is to take generic parameters, which actually has a technical meaning, but it's kind of obvious. And when you take generic parameters — for example, I will be talking about the double box. So let's see if I can find the double box. Yeah, there it is, up in the upper left corner there. My double box is going to have external momenta in the middle, which I don't think a physicist would tend to do, because physicists would tend to want to talk about trivalent vertices. But that's okay. Having put my momenta there and worked out the answer, then at the end I can hope to specialize to where those external momenta are zero, and to see whether the general picture, which you'll see is quite simple and nice, specializes as well. So anyway, that's the point of view that I will adopt. Let me see. There's one other thing that I don't seem to have written on this slide. Let's go to the next page. Yeah, so, again, an algebraic geometer confronted with this kind of problem tends to think of it in two parts. The amplitude is a relative period; that is, the chain of integration is not topologically closed — its boundary lies where the coordinate hyperplanes are zero. And that means that it adds a complexity to the situation which is not inherent in the pure algebraic geometry of the second Symanzik hypersurface. So, for an algebraic geometer, it's kind of natural to break down the problem of understanding two loop graphs into two parts.
First of all, you try to understand the algebraic geometry of just the hypersurface itself. And understanding the algebraic geometry means, first of all, resolving the singularities of the second Symanzik hypersurface, and then computing the motive, which is the middle dimensional cohomology of that non-singular variety. And I have to admit that early in my career, I sort of saw that as the whole story. But in fact, of course, it's not, because having done that, you next face the problem of understanding the amplitude itself, which is not a period in the pure sense — it's a relative period. And there are two ways to think about these relative periods. And this is something that's sort of grown on me only over the years. But one thing — the work of Matt Kerr in recent years for the three or four banana graphs relates this relative period to what's called motivic cohomology. And motivic cohomology is a mathematical theory that sort of generalizes — it yields what are called normal functions, which are periods, but they are relative periods. I mean, they're sort of — better — they are functions which take values which are relative periods. Sorry, I think I've lost the thread here. So there's a real gain. I mean, the thing is that the integral, the Feynman integral, is very complicated. I'm thinking now of the parameterized version. It's very complicated because the chain of integration, first of all, is not topologically closed. And second of all, it can meet the polar locus. The whole thing is really something of a mess. So the hope would be to find some way to reach out beyond the integral itself, and find some method to produce the amplitude from some other ideas. And that's what Kerr has done in the context of three or four banana graphs, which are simpler — I mean, in many ways, simpler. And another consequence of this idea is that the motivic cohomology is related to Beilinson's conjectures. And these conjectures give nice arithmetic interpretations of special values of these functions. And so there's the possibility of a nice picture. Now when you try to do this — I can't do it. Let me be blunt. I can't go beyond where Kerr went. But there's hope. And the hope is, again, something that goes back to what Dirk explained to me over the years. If you think about the banana graphs, for example — when you think about the banana graphs, you think of momentum flow. And the second Symanzik of a banana graph is just the product of the various coordinates. And you imagine, topologically, if you think of the graph, you imagine cutting those edges. And so — lost the thread here. So you want to cut those edges, and the fact that that picture is so simple is what enables Kerr to make this relation between the Feynman amplitude and the Beilinson-type normal function construction. But now if you take a more general graph, you have momentum flow, but momentum flow is kind of flowing in all kinds of different ways: between any two vertices you choose, you look at how the momentum flows between those vertices, or more generally, you choose — as you do when you construct the second Symanzik polynomial — you look at all ways of cutting the graph into two simply connected pieces, and how momentum can flow between the two simply connected pieces. And so the problem becomes one of generalizing the motivic cohomology construction, which if I have time, I'll say a few words about, to the situation where you have not just one cut, but sort of a whole chaotic range of cuts.
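For reference, the standard form of the second Symanzik polynomial implicitly being used here (conventions for signs and for the metric vary, so this is just one common normalization): with edge variables x_e, masses m_e and external momenta p_v,

\[
F_\Gamma(x,p,m) \;=\; \sum_{(T_1,T_2)} \Bigl(\sum_{v\in T_1} p_v\Bigr)^{\!2} \prod_{e\notin T_1\cup T_2} x_e \;+\; U_\Gamma(x)\sum_e m_e^2\, x_e, \qquad U_\Gamma(x)=\sum_{T}\prod_{e\notin T} x_e,
\]

where the first sum runs over spanning 2-forests, i.e. over the ways of cutting the graph into two simply connected pieces — exactly the momentum-flow description in the paragraph above — and the second sum, defining the first Symanzik polynomial U, runs over spanning trees.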
So anyway — that's the second part. What I want to spend most of the hour on, because I actually have some results on it, is the first problem, which is just understanding the motive, the pure motive that you get by resolving the second Symanzik, and then seeing what we can say about that pure motive. Okay, so I'm going to focus on the double box, which I've drawn over here, simply because that's where I have the best results. I have results also for the kite, but the kite is sort of simpler — you just contract the two edges here and here — so let me just focus on the double box. So again, as I say, I take the external momenta and masses all to be generic. And if you do that, the hypersurface is defined by a cubic, and there are seven edges — one, two, three, four, five, six, seven — so it's a cubic in seven variables. And if you work it out — and this is the basic thing — it has this shape. I have capital Q, and capital Q and capital Q prime are quadrics; they're degree two in three variables. Here Q is in X5, X6, and X7, and I multiply it by the linear form, by just the sum of X1, X2, X3, X4, and then Q prime is another quadric, this time in X1, X2, X3, and I multiply it by the others. And then I add a sort of catch-all term, which is divisible by X4 — so X4 is the guy in the middle here. Okay, so it's divisible by X4, and you can write it this way: you take a matrix — a four by four matrix with generic entries, except that the lower left-hand corner here is zero — and then I just multiply by the row and column vectors as indicated, and then I multiply the whole mess, again, by X4. So that means that F will have no term in X4 cubed; it will have some terms in X4 squared, given by this X4 times the terms here that are linear in X4; so there will be terms in X4 squared, but not in X4 cubed. Okay, so what can we say? Well, here we're going to use the fact that everything is generic — we've taken all our parameters to be generic — and in that case you easily check that the singularities of your X are a disjoint union of two quadrics, two conic curves actually, so C and C prime are curves in P2. Okay, it's a little complicated: the P2, for example, is the P2 with coordinates X5, X6, X7, where you set X1, X2, X3, and X4 all to zero. So the singular — it's a little confusing, because here you're taking a sum, but here you literally set these variables equal to zero — and you work out that the singular locus of this expression is just this disjoint union. Okay, so what happens then if I blow up these two conic curves? I blow them up in the ambient projective space — not in the singular X, but in the ambient projective six space. So this is all happening — X is a five-fold, I should have said that — in projective six space. So I blow these up in projective six space, and the resulting blow-up I call capital V. So it's birational to P6, and there are two disjoint — the blow-ups are two disjoint smokestacks that stick up out of P6. And then I take — and this is the standard game for resolution of singularities — I take the strict transform of my Y — so Y sits in P6 here — I take the strict transform — sorry, not Y, it's X that sits in P6 — I take its strict transform and I get a Y inside V, and then one easily checks, again using this genericity condition, that this Y is a resolution of singularities.
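Putting the verbal description of the slide into one displayed formula (schematically only: the grouping of X4 into the second linear factor follows the phrase "the others" above and may need adjusting, and G here is just a name for the generic catch-all bilinear term built from the 4-by-4 matrix):

\[
F \;=\; Q(x_5,x_6,x_7)\,(x_1+x_2+x_3+x_4) \;+\; Q'(x_1,x_2,x_3)\,(x_4+x_5+x_6+x_7) \;+\; x_4\, G(x),
\]

with Q and Q' generic quadrics, so that F is a cubic in the seven edge variables with no x_4 cubed term.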
Okay, so here is then the picture — this is the picture — and so Y is the resolution. So we want to understand the motive associated to H5 of this smooth variety Y. Okay, and what we want to show — it's not very well written here, but I hope you can read it — is that H5 of Y, just the cohomology of Y, is identified with the cohomology of a certain elliptic curve: H1 of an elliptic curve with a minus two twist. This just has weight five: the H1 of the elliptic curve has weight one, so I have to goose that up to get to weight five, so I twist by minus two, which has the effect of adding four to the weight. So, at least from the point of view of weights, this kind of identity makes sense. And so how can I show this? Well, let me first of all say that this elliptic curve is kind of mysterious. I've talked a lot with Pierre Vanhove about trying to figure it out, and I think he can actually do it, but it's certainly six months since we've talked, and he was kind of displaced by the virus as well, so we should check again. I think he knows how to compute the elliptic curve in terms of the coordinates of Y, but I'm not 100% sure. But in any case, it's an elliptic curve, and if we just say, okay, we want to see that it's an elliptic curve, the basic thing that we have to do is simply to look at the Hodge structure, the Hodge filtration on this H5 of Y, and what we have to show — well, I mean a little bit more, but the crucial thing — is that the F3, the third Hodge filtration level, is one dimensional, because what will then happen is that F2 will then, by symmetry, be two dimensional, and so that will then be the whole story. So this is the crucial thing that has to be shown. So how does one do this? And one doesn't want to do anything special, because one wants a machine that ultimately you can hope to apply to other two loop graphs. And I will say, if I have time at the end, how to generalize this basic setup and the basic construction of the Y. So the first thing you remark is that Y sits in V. That's how we got it. And by the long exact sequence in cohomology, you can identify — oh, and also use the fact that V is very simple. V is gotten just by blowing up conic curves in projective space, so the cohomology of V is all Hodge classes, and so there's no F4, which then tells you that F4 of the complement of Y is F3 of Y, which is the thing we want to show is one dimensional, so we need to calculate this. And now there's a classical and very powerful theory due to Deligne and Dimca — there's another author, I'm sorry, I forgot his name — something called the pole filtration, which deals exactly with the problem of calculating the Hodge structure on the complement of a hypersurface in a variety. And what it tells you in this case is that you can calculate this thing as a sort of hypercohomology of a certain complex, where I look at four forms on V now with first order poles, and then when you differentiate such a thing, you get a five form on V, but the pole order increases, and again like this, and so you get a complex here, and the hypercohomology in degree two of this complex is the relevant thing. That's identified with this, and this is identified with the thing that should be one dimensional. Okay, so now we come to an interesting problem in how to present mathematics. The actual proof of this is quite complicated. So I kind of despair of trying to explain it by Zoom.
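Schematically, the statement being used is that the deepest Hodge piece of the complement is computed by forms with bounded pole order along Y; with the grading as just described, this reads (a sketch, suppressing the precise bookkeeping):

\[
F^4 H^6(V\setminus Y) \;\cong\; \mathbb{H}^2\Bigl(\,\Omega^4_V(Y) \xrightarrow{\;d\;} \Omega^5_V(2Y) \xrightarrow{\;d\;} \Omega^6_V(3Y)\,\Bigr),
\]

where \(\Omega^k_V(mY)\) denotes k-forms on V with poles of order at most m along Y; this group is then identified with \(F^3 H^5(Y)\), the piece that has to be shown to be one dimensional.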
Now, I do have a letter that I wrote to Matt Kerr and Pierre Vanhove with the full details, which I am willing to share, but not promiscuously, so to speak. So if anybody really seriously wants to work on this problem, I would be delighted, and I would be happy to communicate this letter to them. But for the masses, so to speak, I think it's best that I'll explain some of the ideas, but I won't be able to give the full details. It's a complicated linear algebra story. Okay, but anyway, let me stress again this pole order theorem that makes this identification. So now what we have to do is identify this thing, which we can do sort of piece by piece, and as I say, the full story is complicated, but at least we can look at the last piece here. That's what I propose to do. And there's no reason not to do this more generally. So instead of a double box, let's consider a double n-gon. Okay. Now how am I doing time-wise? Can somebody tell me when I'm supposed to stop? I think at least until 20 past or 15, 20 past, I think that's certainly fine. Okay, so 17 past, 17 and a half then. Very good. So let's consider a more general double n-gon. Then such an X is in projective 2n minus 2 space, and the conics that we had before become general quadrics in n minus 1 variables. The story is very similar to what I explained already. You do the blow-up — again, you blow up the two quadrics where you set the other variables to zero. This gives two disjoint quadrics, and we get the same kind of story. And we define the omega 2n minus 2 — that's the top degree forms — but with a tilde. And the tilde sort of obeys Bloch's law, which says that always the most innocuous-looking bit of any symbol is the most important. So the tilde means that these are sections with poles — 2n minus 2 forms with some poles along X — but they have the property that when you pull them back to V, they don't get any poles along E. So if I have here a form on projective space with poles on X, and I pull it back to V, a priori it can get some poles on these E and E prime, but some of the sections don't, and those will be the tilde. And these are the ones that compute the relevant piece, the end piece here. And the proposition is that in our case — that is, in the case of the double n-gon — this fellow here is one dimensional, and that's good, because that's exactly what we want. Because notice here, we're interested in H2. This is a complex, so for H2 the relevant pieces are H2 of this, H1 of this, and H0 of this. So the fact that H0 is one-dimensional, that's cool — that gives us exactly what we want — but the rest of it sort of becomes a question of showing that these guys don't come in and mess up our nice one fellow that we've got here. So that's as much as I'm going to say about the general argument. Now there is one further remark. We'd like to look — as I said, the game becomes to now understand all two-loop graphs. And by all two-loop graphs, I sort of think of them as triples, where I take the two-banana graph and I subdivide the three edges. So the P, Q, R graph: I have P edges up here, Q edges in the middle, and R edges underneath. And empirically, I can say with some confidence that the hard case — the case where things are going to be different — is the case where P, Q, and R are all at least two. So another way of saying that is: that's the non-planar case. In other words, you see, if I have this graph, I cannot connect this vertex to infinity without coming through the edge here.
So that's a sort of non-planar, I mean, this of course is planar, but when I try to connect it to infinity, I can't do that planar right. So those empirically are, this is not going to be true. This proposition is false for the non-planar guys. But notice here, this is one. And I took the same number of edges for the two things here, but I'm sure, I guess for some reason I didn't do the calculation, but I'm sure that it doesn't depend on the numbers of edges. It simply means that one of the three has to be left in one piece. Okay. So then more generally, I want to describe. In closing, how to set up the machinery. And let's take the hardest case first. Assume P, Q and R are all at least two. Then you choose quadrics, which are labeled Q sub P parenthesis and Q sub Q parenthesis and Q sub R parenthesis in the indicated variables. And with general coefficients. And I write F P to be Q sub P times the linear combination as indicated from P plus one to P plus Q plus R. And the same F Q, so all the omitted variables appear linearly in this extra factor here, and F R. And then I have a fourth term here, which I call F P Q R, which is a generic linear combination of these, these fellows. And then F is the sum of these, of these, these four terms. So notice I here I've assumed that all of these are at least two. So this description does not work for, for the double end up. We need another, another description for that. But this end is the, is the second semantic polynomial. And so what has to be done is to go through the linear algebra using the pole filtration as, as I indicated, and compute the, the comology in the middle degree of this resolved hyper service. Okay. So what am I doing here? Yeah, so there are two special cases, which are the cases that I've already been talking about. So this is when you have one edge in the middle, r equals one. Then the, the, the thing has the shape that I already indicated for the double box, again with generic quadrics now because P and Q R can be larger. And then the extra term is divisible by X P plus Q plus one X P plus Q plus one is the single edge. That's the edge variable corresponding to the single edge. And it divides all these, all these fellows. And then finally, there were, I actually have two edges that are left uncut. And then the F has, has the shape here. Okay. So, maybe I should say I have a few more minutes here. So maybe I can say a few more words about the, yeah. So let me say a few words about the passing to the amplitude. And this is all completely fantasy. I'll accept that Matt Kerr has made it work in the case of the banana graphs, at least the, the two and three banana graphs. And so the game here involves, as I said, something called a motivic comology. And motivic comology is, I mean, I'm not going to go through the details, but here's, here's an example. I have my hyper surface X, and I remove the, the low side where, where one or more of the coordinate hyper planes meet X. And so let me call that resulting thing X star. So on X star, I have what's called the tame symbol. That is one problem with this thing is it jumps around. I have the tame symbol, that is these TI over TJs, and I put always T naught in the denominator, are well-defined units on X star because I've removed the zeros. So these are functions with zeroes of poles on X star. And so such a, such a tuple represents a class in the motivic comology of X star. 
And the, the indices are N, they're, notice they're N plus one capital T's, but, but I, I break symmetry by choosing one of them to be the denominator. And so I have N actual functions here. And so I end tuple of, of units, and that defines for me a class in motivic comology, H and with Z of N twist. So that's a whole story in itself. But we want something on all of X. We don't want just something on X star. And Matt has developed a theory along with, well, I'm not sure I should give him the whole credit. There's another name that escapes me who we worked with. But in any case, they worked, they developed a notion of tempered hyper surface and tempered means that this class, which I priori is to find an X star, actually lists to the whole motivic comology of X. And that's an interesting business. Let me just say a few words because this is all in some mysterious way linked to the well-known properties of the second semantic polynomial with respect to contraction or, or, or cutting edges. And the fact that these classes extend is not a trivial fact and it's linked to the behavior of the polynomial, the second semantic polynomial under edge contraction and so on. But once you've gotten it to, to be a class in motivic comology, then there, there is a numerical invariant associated to this class and so to speak a cycle class, which in this case sits in the comology of X with C mod Z twisted by N coefficients. So this is a circle with a, with a twist or a circle. C mod Z. It is C mod Z. It is what it is. But it maps if we sort of throw out the compact piece of this thing to the comology with real coefficients with an N minus one twist. And this is the, this is the so-called cycle class. And this is the class which Kerr relates to the Feynman amplitude. The relation is not direct. It's complicated and it's not as simple as one would hope. But this is the, in the banana graph, this is the thing that Kerr uses. So the second step in my program here, which is to pass from the pure motive of the hypersurface to the mixed period, which is the Feynman amplitude, revolves around understanding in some much deeper way, just as the second semantic is built up out of these cutting the various ways you can cut the graph into two pieces. In this particular case, in the banana case, there's only one way you can cut the graph into two pieces. And that's what makes this work. But suppose that we have the full complicated situation with them anyways, then the challenge is to try to recreate the Feynman amplitude from some sort of analogous, multidimic comb model. Okay, so that's really all I want to say. Say congratulations to Berk for, for living such a constructive life and being lucky enough to be a pro in where there are no virus cases, unlike the rest of us who are Chicago at this moment is absolutely horrible. I tell my wife not to go outside. But anyway, thanks for the opportunity to talk. Thanks very much. Thanks, Spencer. We had a question from Matt under the participants Karen, can you unmute him because I can't seem to be able to do that. Yes, and I will fix your situation too. Yes, thank you. So I was curious for the double box and on you seem to be mostly describing the F polynomial. Unlike for the sunrise, of course, the you polynomial is also really necessary even in your dimensions to express these things. Is there a reason why you only consider the F polynomial? 
Well, so the F, yeah, so the, well, first of all, in how many dimensions as you know, the integrand, the Feynman integrand is depends on what dimension space time. But I take a dimension of space time where you is. So first of all, let me say that you is very simple in this kind of case. Let me see if I can explain that statement just a sec here. So here, if you take the double n gone, so let's, we want to compute the first semantics. Okay, so what is the first semantics? Well, we're looking at pairs of edges. First of all, there are no coefficients, it's just pairs of edges, which cut this diagram, which when I remove them, this, I mean, don't disconnect, but kill the loops in this two loop picture. Okay, so how can I kill the loops in this two loop picture? Well, one way is I cut one edge from one of the loops and one edge from the other. Another way is I cut the guy in the middle here and then I just cut some other edge. Now, if you think about it, that's the end of the, those are the only ways. So the first semantics then is just going to be a sum of those two kinds of quadratic terms. One which is divisible by this edge variable and then divisible, well, in fact, we can say what it is. It's this edge variable times the sum of all the other edge variables that will capture all the monomials of degree two that enter in that part. And then we have the other guy, which is the product of the sum of the edge variables in this loop, but not this one, and the sum of the edge variables in this loop. So in other words, I've written this quadratic as really a sum of two terms. And I mean, you see what I'm saying, not, so the algebraic geometry of the first semantics in this kind of situation is very, very simple. That's the takeaway. I see. Thank you. You had a question from David. Yes, Spencer. I just wanted to remark that your box diagram with those extra lines in the middle is called the rail track diagram by people who work in gauge theories. They are far from generic because all of their internal masses are equal to zero. And they thought for a long while that will protect them from elliptic obstructions, that they would evaluate in terms of multiple polylogarithms as a whole ethos built on that philosophy. But in fact, it's precisely that diagram where they encounter their first obstruction. And it's recently been related. One of the authors is Matt Von Hippelguth asked you to a single integral of a perfectly specific tri-logarithm over the square root of a quartic. David, you said that all the masses are zero. The internal masses are zero. Internal masses are zero. The four external masters at the corners are non-zero. The lines in the middle of the railway track, you can put equal to zero. And in that situation, the kinematics is enormous as simplified because there are only a certain number of cross ratios that Matt will come. I'm old fashioned when I say mass. I mean mass terms as opposed to the momentum terms. So there still are mass terms, so to speak, along the tracks? Not the railroad ties. No, no, those are just gluons. They're completely massless. So we can't get gluons and mass just for mass. It's far from generic, but what I'm saying is they have a perfectly explicit elliptic curve. They can calculate its vast mass invariance, and it would be interesting to see in their non-generate. This is for the box case or this is for the double box case? No, sorry, double box. H4. Fantastic. So if this was an ordinary conference today, we could meet in the hall and you could tell me more. 
So your diagram is that I've heard about at at least three major conferences. As the first place that gauged theorists, even N equals four super young mills on the planar limit, meet elliptic obstruction and you've just sat down by pure thought and told them that in advance. Yeah, except that I can't tell you what the elliptic curve is. I mean, maybe, maybe PR can, I'm hopeful that PR can when I can get a hold of it. Well, they can tell you in that simplified kinematic situation precisely what mass transfer invariance of that elliptic curve are in terms of the. Can you point me toward a paper? Yeah, yeah, I'll send you, it's published. Oh, okay, okay. If you get a chance, that would be beautiful. Yeah. Thanks. Okay. I think this is a perfect thing for a dinner conversation later on. So what time is dinner? Can I? Well, we had planned it for, for just in eight minutes, but because we're behind and I mean, people will have to get the dinner from somewhere. So, you know, it's Paris time dinner. So, okay. And I would note that in the gather, some of the tables have whiteboards at them. So if you think you might want to talk math, including latex on the whiteboard, you can pick a table that has a whiteboard. I have a very nice story about Maxime giving a whiteboard talk a week or so ago, and it was a beautiful talk and the whole thing was lost at the end because the whiteboard turned white and nobody could cover the thing. So Maxime said, that's okay. And he just wrote the talk out again. Wow. I think nothing in gather gets lost. I think, I think everybody said that. Everybody said nothing, nothing gets lost, but nobody could recover. Anyway, okay. Okay, for now, let's thanks Spencer again for his nice talk.
Amplitudes of one-loop graphs are known to be dilogarithms. What can one say about two-loop graphs? In a surprising number of cases, the motive of the second Symanzik of a two-loop graph involves (indeed, the motive is actually built around) the motive of an elliptic curve, suggesting some relation between the amplitude and elliptic polylogarithms. I will discuss a number of examples.
10.5446/51239 (DOI)
Education in the 21st century has changed in many ways and by and large these changes are due to the availability of a gigantic pool of educational resources on the web, among them video. In the following I will outline our principles of creating and using video in the context of linguistic education at an academic level. I will show how our videos are produced, so this sort of scenario. I will show how our videos are integrated into class and how our students use our videos as you can see over here, perhaps on a phone. And I will also try to answer the question why we make our videos available on YouTube and not within the protected zone within any other platform. Let us start with a question. Where can video enhance a traditional class? There are several options of integrating video within the complex scenario of a class at university level, which normally expands over a period of a dozen weeks or even more. For example, we could produce preliminary videos that contain the administrative and organizational details of a class and make them available prior to class start. In the past, this information of which you can see a fragment over here was generally made available in print in the departmental class descriptions or like in many universities today. It is published on the university-wide campus management system, but be honest, have you ever cared about such information? Many students don't read it and they come into class totally uninformed. Why not provide this information by means of a short preliminary video that summarizes the goals, the central topics, the organization of the class, its central requirements and the deductive principles applied to name a few. The actual delivery of such video-based preliminary organizational details can be simple. All you need is a link to your video and if this information still needs to be printed out, maybe the quick response code or in short the QR code that can be made available to access this video may be of some help. The availability of this information prior to class start frees the class instructor from explaining all these details to his students. Instead of wasting the first in class meeting with administrative details, the class instructor can now straightforwardly start with a class content. But where else can we use video? Well, once a class is running, a suitable option is to create a video that contains the content of a class unit. An alternative option is to use several videos related to a unit. For example, one that contains the central content of a class, one that suggests particular in class activities and one or even several videos that provide explanations or provide exercise materials plus solutions. Once the videos are ready, it's up to you and your deductive concept how you want to use these videos. On the Virtual Linguistics Campus, we use all these options. The preliminary videos, we call them class descriptions, inform our students about all class related organizational details and make them available prior to each class. Content delivery is realized by two different video types. On the one hand, we have screen casts where we record the central screen activities, for example, in connection with software training. And we have complex e-lectures where we deliver the linguistic content standing in front of an active board of this kind. For both, we apply the following principles. The presenter must be visible. The video should not be longer than 20 minutes. 
And what is being presented should, if possible, be made available in print. Now these principles also apply to the remaining types of video we are using on the Virtual Linguistics Campus. For example, to the in-class activity suggestions. These are video productions that are produced together with our students that inform those who use the e-lectures in accordance with the inverted classroom model and demonstrate to them how they could, as one option, structure their in-class activities. The remaining videos are of an explanatory kind. Either they take up the questions put forward by the community, or they provide our students with model solutions about class related questions. That these cannot be made freely available on the web should be clear. We want our students to work before they see the answer. And how do we produce these videos? Well, first of all, you need a studio. That is a room with appropriate lighting, a camera, and the setup needed to record your video. And by means of the camera, you record the presenter in action. At the same time, we capture the screen activities on our computer screen, which in our case is this Promethean Active Board. And we use a suitable software for it. In our particular case, we use TechSmith's Camtasia Studio. As a result, we get a screen capture, a capture of this board content. And the screen capture does not contain the speaker. So we need a final post-editing process where we combine both tracks, in Camtasia Studio, to receive the final result. If we look at the usage of our videos, well, there's virtually no limit to use them. Students have access to the content by means of a so-called eLearning unit, which contains the so-called virtual sessions from where they can access the videos directly, or they can use their phones, as in this scenario over here. Here are the phones, where they can watch the videos directly via YouTube. Well, but why YouTube? Many of my colleagues fiercely oppose to the use of video. I won't discuss their resistance here, which may be due to a general opposition to the use of videos in teaching, or that they simply do not like the idea of having to present themselves in public and can thus be freely evaluated. I think that video is a wonderful option for our students, and that a platform such as YouTube is perfect for a variety of reasons. For example, uploading a video is dead easy. Within a very short time and by means of a few mouse clicks, one can add a video to an existing YouTube channel. And editing existing videos is also not a big issue. We are by no means perfect, and so editing has to be done quite often. And the editing function in YouTube allows us to add comments, to erase mistakes, and so on and so forth. A very suitable option for the creation and generation of video material. And then there are numerous analysis options which provide you with excellent insights into the use of your videos, from where are they used, when and how, and so on and so forth. And with the option to group your videos into playlists, you can create full courses that contain the entire video material from the preliminaries via the e-lectures to the in-class activities. Apart from these functional aspects, there are various aspects that motivate you quite a lot. A freely accessible video channel permits the users to evaluate your material, your performance, your behavior in front of the camera, and so on. 
Users can easily motivate you by means of appraisal or good marks, but they can also demotivate you by applying harsh criticism or by giving bad marks. So we prefer the first option, and in particular we can profit from the number of clicks, that is the actual viewers of the videos. In our case, about 160 videos after one year and 180,000 clicks. The number of likes and dislikes can be very motivating, where hopefully the number of likes is much larger than the number of dislikes. And most importantly, you can build your own community of followers, that is, your subscribers. And after one year, we've had 1,600 subscribers. Beyond the generation of mere motivation, however, it is important for the users to leave behind comments to make proposals and to correct errors. This readily improves the quality of individual videos and of your channel as a whole. Many of the comments or questions are truly scientific, like the ones over here. Is the uvula an active or a passive articulator? What about inserting R between morphemes as in drawing? Is that received pronunciation? Now questions like these have to be answered with all scientific care. In some cases, questions may be more or less didactic in character, leading to explanations about our didactic and presentational concept. In both cases, we have to take great care in answering these questions in order to satisfy our customers. The videos, questions of the month, contain our answers to our community and our reaction towards this issue. Well and then, there are really motivating comments, which make you feel fantastic. But even those comments that do not praise your teaching talent can evoke interesting discussions that you would probably never get without opening your video channel to the public. For example, the debate about the origin of man as a consequence of our video, the evolution of language, is a good example of such a comment. The result was a discussion about the evolution of man. Finally, there are numerous flanking effects that we did not expect when we decided on using YouTube. After one year, we've had almost 1,000 new community members on our e-education platform, the Virtual Linguistics Campus. And meanwhile, a considerable number of them have registered for our online classes. And furthermore, what is important for us, we have satisfied the expectations of our sponsors and have evoked a lot of media interest leading to an increase of our international reputation. So by and large, we are very glad to have taken all these decisions. We will not end up here. We will develop new video types. We will try to improve our presentation techniques. We will deliver more and more content by means of video. So we will be at your service. And we will make mistakes. We are not perfect, but we are humans. And humans make mistakes, but we will try our very best to avoid them. So as usual, at the end of such a video, I invite you to join us wherever possible. Share your ideas with us, motivate us as well as you can so that this channel can remain what it has become one year after its start, your linguistics video platform. Thank you.
Many of the 21st century changes in education are due to the availability of a variety of resources on the web - among them video. This presentation examines the options for using video in the context of a class at university level and discusses the potential of video for quality enhancement and quality assurance.
10.5446/51243 (DOI)
The digital revolution, in particular the computer and the internet, have had an enormous impact on all academic disciplines. Of these disciplines, linguistics may be considered as one that has been affected most. Teaching, learning and assessment, as well as research, have undergone dramatic changes in recent years. This presentation points out the challenges and consequences for our subject as a consequence of the use of the new media. Let us look at the challenges we have been facing first. Now, linguistics is not taught at school. Students of linguistics thus have little or no prior knowledge about the field. So as teachers, we have to introduce more fundamentals than many of us would like to do. Well and the internet has completely changed the field of research. Now all linguistic data is on the web, in many cases freely accessible at any time from anywhere. Unlike any other science, linguistics has developed a large number of models and theories. These have to be presented and as it seems, digital representations have a higher explanatory value than traditional presentations in print. Modern linguists use a variety of tools, of programs, of software in order to do their research. These tools have to be introduced, they have to be explained and mastered, another challenge for teachers and students. And the last point concerns assessment. Traditionally, testing in linguistics has been static in many ways. In digital scenarios, however, new assessment types that increase the quality of traditional assessment are possible yet another challenge. Let me exemplify these challenges in detail. Let's start with the lack of foundations. As I already said, linguistics is not taught at school, so we have to start from scratch. Let us take the main branches of linguistics as an example. Those who start linguistics have little or no idea about consonants or vowels, about articulatory phonetics. They do not know what a phoneme is, two fundamental concepts in phonetics and phonology. Well in morphology, we have to explain the most basic concepts such as the term aphix, what a word is, and so on and so forth. Well in syntax, there are so many new concepts that have to be explained among them thematic roles. And even if you continue with the field of meaning in semantics, well, students might know what the synonym is, but the meaning of sentences here illustrated by some symbols taken from predicate logic never heard before. Well and in pragmatics, you have to explain such basic concepts like conversational principles, speech acts, dikesis and so on and so forth. In other words, we are faced with beginners and the introduction of even the most fundamental concepts constitutes an essential part of teaching and learning in linguistics. The primary data in linguistics comes from the languages of the world, either in written or in spoken form. Whereas in the past, the data was either presented in written form only with no audio verification possible, today the data is on the web. It exists in huge written or spoken corpora and can be analyzed and used freely. Here is an example from our language index. This is a huge database which contains audio files from more than a thousand languages of the world. I chose this example because I needed later on. Here is what Delia says about herself. Welcome all and welcome to user linguistics website here. My name is Delia and I am a chat patois. Now such data is now freely accessible. Learning the data however has brought about a new challenge. 
As modern linguists, we are now confronted with a variety of linguistic software tools and data repositories that we have to understand and use. In other words, linguistic software training has become an essential part of modern linguistic education. In linguistics, the modeling of hypotheses and theories have always played an essential role in education and research. Let me illustrate this with three examples. The first comes from our language index again. Now let me look at Delia's diphthongs. If you want to present the diphthongs of a language and you want to show the transitional paths of their glides in a traditional format, you are confronted with enormous presentational problems. But if you use a multimedia format, these problems disappear and we can easily enhance the explanatory value of our presentation. Now here you see in a multimedia format, we cannot only listen to the diphthong but we can also see the transition of that particular diphthong. Or take these two examples. To illustrate the phenomenon of feature association in non-linear phonology, an interactive model is much more appropriate than a written model or a diagram in a book because here you see the dynamicness of feature association in a multimedia format. And have you ever seen Chomsky's rule of affix hopping in action? Well here it is. Select the verb group and let the affixes hop. Again. And if you want another one, here you are. Well compare that with the written explanation. I don't think I have to add anything. As I already mentioned, linguistic software is now an important issue. Today a variety of software tools can be used to handle linguistic data or to construct linguistic models. There are audio analysis tools such as Pratt or Ducity or several wave editors. We have graphic tools such as IrvenView or GIMP. There are several corpus interfaces and data repositories around and last but not least, modern web technologies that allow us to integrate language maps, speaker locations and many more now constitute a major part of linguistic web technologies. Well in assessment pen and paper tests have been used in linguistics for a long time but now with the new technologies available we can enhance these tests enormously. We can use virtual keyboards for example and sound support in transcription tasks like the one that is displayed over here, a typical task in our phonetics classes on the virtual linguistics campus. Or we can confront our students with listening tasks where they have to identify pulmonic consonants via mouse click after having heard them. And these tests can be used in a diagnostic fashion, in a formative fashion and even as summative assessments like we displayed it over here. Now these are our e-exams at the University of Marburg in linguistics where first of all our students have to register and then almost 100 students write their e-exam together in the computer pool of the computing department. Well and this of course not only enhances the quality of testing but also minimizes our effort of grading and correction. All these new challenges have changed our discipline most obviously as far as teaching and learning is concerned. So we obviously need a new teaching concept. Now with the availability of digital learning materials many linguistics classes can now be organized in such a way that content delivery takes place online and serves as a prerequisite for subsequent in-class meetings. The in-class meetings now serve a different purpose with the emphasis on practicing. 
This approach that flips or inverts the phases of content delivery and practicing is referred to as the inverted classroom model. Creative terms for this model are flipped classroom, a term which is especially used in the context of the North American high schools. But how does this model really work? To understand it let's first of all look at traditional teaching. In traditional teaching content delivery and content acquisition are realized in class where several dozens of students have to gather at the same time to be entertained by their teacher. Many of them fall asleep after 10 minutes isn't it? In a second phase students practice on their own on the basis of additional exercise material, homework tasks, data analysis and so on. Now the inverted classroom flips or inverts these two activities. Content acquisition is now self-guided, takes place first and is done online. The additional in-class phase is now dedicated to practicing, rehearsing, discussion analysis etc. This means that prior to each in-class meeting students must have worked through the online content of the respective e-learning unit. And how does this work on the virtual linguistics campus? Well in phase one students are now autonomous learners. They are given the content by means of what we call an e-learning unit. This contains the content they have to go through prior, emphasis on prior to each in-class meeting. The content of an e-learning unit is multimodal. It involves all sorts of channels. In all our classes we supply our students with highly interactive multimedia content, the so-called virtual sessions. In addition to this students can watch our e-lectures on YouTube. These are never longer than 20 minutes and they are closely interconnected with the virtual sessions. Furthermore, we have our optional workbooks. They supply our students with the text material of each unit but not with any other media and give them the opportunity to supplement the text with the online content. This combination of multimedia, video and text is unparalleled in the world of e-education. In short, mastering the content of a unit means, go through the virtual session first, use the guiding questions for help, watch the corresponding e-lecture video and use the tests in the interactive tutor and optionally supplement the workbook with the missing information. The important point is there is no pressure. Students now have access to the content as often as they like and from wherever they like. They can examine the virtual sessions with almost no limit. They can rewind their teacher in the e-lectures and they have numerous options to test themselves using the interactive tutor. Actually, this is what they really do. They take their workbooks, they use their computers or their mobile devices such as iPads, tablet computers or their smartphones wherever and whenever possible. And once they are ready and once they have mastered it all, they attend the subsequent in class meeting. Now the in class meeting is no longer any sort of frontal teaching. So this scenario is clearly out. Instead, the central, you might still want to call it teaching method is that of a cooperative interaction between instructor and students. Honestly, for us it doesn't make sense to teach in the traditional format anymore. Why should we repeat what's in an e-learning unit? So what we do in class instead is practicing, discussing problems, the analysis of data, all those things that students would have done on their own at home without any assistant. 
Now we do the homework, it's not really any homework anymore, in class. We discuss problems with our students and collaborate. Now typical in class scenarios look like these. And this is what happens in class. The instructor walks around and provides help or gives advice. And the students work on specific tasks on their own with guidance by their teacher. So this is a typical in class scenario in the inverted classroom where frontal teaching is no longer a central option. And how do we make sure that students do everything we want them to do in phase one? That is that they master the e-learning unit prior to the in class meeting? Well, for this purpose we have our formative electronic assessments, the e-tests that are mentioned earlier on. Each e-learning unit is connected with an electronic test, that is an electronic formative assessment in the true sense. Their results shape the organization of the in class phase. And what about the role of the instructor? Well, we have changed our role completely. We do not deliver content in the traditional way anymore. We do not have to explain term by term what a phoneme is or how the subject of a sentence can be identified. Rather, instead of teaching, we supervise our students' activities. We try to motivate them to do their tasks in time and we make sure that everyone is served as well as possible. As class instructors, we now have time to do more efficient things with our students beyond mere content delivery. So in accordance with Alison King, who as early as 1993 predicted a radical change of education in the 21st century, we can now clearly claim we are no longer sage on stage but guide on the side. And what about linguistic research? Well, today, no serious linguists dispenses with digital data, corpus analysis, whether spoken or written, special software for text editing or audio analysis support our research. And last but not least, we perform experiments using the computer in many ways. Here is an example from psycholinguistics. Now, lexical decision tasks, tasks where you have to decide whether something is a word or a non-word, can easily be implemented on a computer. On the virtual linguistics campus, you have a large number of such experiments. Here's just one of them, which I can only show how it works. You will be presented a word in the center of the screen and then you have to decide whether it's a non-word or a word. This is how it works. And now you would have to click. I don't have a keyboard, so I can't do it. So I leave that to you. You can use the linguistic laboratory on the virtual linguistics campus to find out how these experiments work in detail. So there have been dramatic changes in linguistic education and research in recent years. But there's no reason to be desperate. Rather, we should face the new challenge. We should make use of the wealth of material that is available on the web. We should use the superb digital tools that enhance teaching, learning and research. So the material is there. It's up for grabs. And new ideas about modern education exist too. The rest is up to you. Thanks for listening.
This E-Lecture was developed to honor the supervisor of my "Habilitation" at the University of Wuppertal, Prof. Dr. Gisa Rauh, who retired in February 2013. It shows how the digital revolution, in particular, the computer and the internet, have changed linguistics and discusses the options and chances for linguistic teaching, learning and assessment, as well as research in the 21st century.
10.5446/51102 (DOI)
So ladies and gentlemen, it's my big pleasure to welcome on stage my two panelists for very quick, but intense panel discussion First of all, I would like to introduce to you mr. Hosok Lee Maki Yama He is the director of the European Center for International Political Economy and a leading author in two areas essential to this panel Trade diplomacy and the digital economy his think tank has also published on the topic of discussion today Namely free flow of data one. Welcome to the panel And our second panelist is mr. Niklaus Vasa. He's senior vice president at M. R N We are glad to have a voice representing the business view in this discussion as Digitalization effects ever-rebusiness M. I am is a globally operating German firm mostly known for its trucks That is a very active in the field of digital transformation and before joining Emma and this summer mr. Vasa worked for IBM and left the IT company as head of strategic business development in Munich. I won't welcome also to you so Yeah, we have time for a round of applause So every digital process requires the exchange of data and more than international value chains This means the cross-border movement of data is the decisive factor the topic does not only affect the IT sector But every sector facing digital transformation and allowing a transnational data flows calls for the adequate regulatory frameworks So let me start with asking that why is it so important for companies to move data across countries and Mr. Vasa I would like to start with you. You're working for an international company So you may know all the challenges that comes along with that You know all the challenges that come with the digital transformation Maybe you can tell us a little a little bit more about which kind of data is crucial for MIM to share a cross-border So as you all know the the truck business is not maybe as simple as it seems so it's not just loading Stuff somewhere driving and unloading so a truck and transportation is part of an end-to-end supply chain and so We have very many involved parties in this end-to-end process So it's not just that a driver takes the paper and drives from A to B and in this context information between the different parties and we have 20-30 parties just for one transport being involved to have all the information being shared This is crucial for for the transport and logistics business Just very simple example Everybody wants to know what is a requested time of arrival of a truck of a transport? What is the estimated time of arrival and what is the actual time of arrival? So this is very simple, but if I want to run such a process very efficient very smooth We need to have data and not just data which is produced while driving this data which is produced while running this process and This process is not a national process So we are transporting goods and material across Europe and worldwide So the data we need to run this process from a logistics System perspective is needed where the central logistic system is placed And so we need to transfer the data so that everybody knows in this process What is going on? What happened and what is the next step so that the process is running? Absolutely, Mr. Limakayama when policy makers speak about free flow of data that seems to be kind of abstract to me Can you tell us a little bit more about which data we are really talking about? 
I think really that my co-panelists explained it extremely well Because in the end what we are talking about is the right to use our IT system when we export and go abroad I just happen to know that for example in the manufacturing sector in Germany Connectivity and data and software can for about 5% of the inputs the 5% of the cost Now if 10% well if 10 countries are asking German exporters to replicate these functions locally in their economies It basically means that the foreigners have taken away half of your turnover So it basically means the right to stay in business as far as I'm concerned And now we are increasingly using the IT and data and software in order to stay competitive German SMEs need the platforms and access to actually retail space online in order to sell in other countries German banks can develop while algorithms to understand investment patterns in markets they may not know in Asia In the same way a German auto manufacturer needs to collect data in for example in Japan In order to make sure that their cars are just as good as Toyota's This is very very simple stuff and if that's the reality of business There is also a reality for finance ministers We have to face the fact that actually almost 50% of German services export is depending on connectivity and data That means about 28 billion euros of Germany's export is actually depending on connectivity And that actually the overseas markets like China, India and Russia keep our markets open for our right to use data across borders That also actually entails that to be frank if the overseas market decides to shut down They can actually turn German trade surplus to zero simply by shutting off the data This is the reality we are living in And actually we have to deal with that right now So let's talk about another big topic about the localization of the data When regulation requires companies to locate certain data within national boundaries These restrictions can be burdened some for the affected companies, we heard about that Data localization requirements exist in different states worldwide We're talking about China, India, Russia, Vietnam, just to name a few of them Mr. Limakiyama, which obstacles for the cross-border data flow exist today? And which companies are especially affected? You kind of mentioned that And can you name some concrete examples where transnational data flows were hindered? 
Yeah, sure I mean the problem is that many of these data localization requirements are hidden in very benevolent packaging Which means that they are hidden as personal information laws, financial security laws And it could sometimes even be cyber security laws So if I'm just going to take one example amongst, let's say, around 80 countries in the world That are imposing some kind of condition of moving data And most of them are good safeguards in order to make sure that citizens and corporate assets are protected But in some cases, for example, in China, you are required to localize almost everything And since every data transmission contains information about users And it could be metadata, IP addresses, it could sometimes be phone numbers Employee ID, which means that basically the government has almost discretionary unlimited power To take a business out and to say that you are not welcome here And in addition to data localization, you also have licensing requirements Meaning that if for some reason the Chinese government feels that No, this German company is too strong competitor to our state-owned enterprises That means that they will not issue the license to collect data inside China We are not even moving data out It's actually the right to actually know a little bit more about our customers To be able to keep their phone and email addresses, it's not that advanced And this basically means that with combination of some of the governments Also being able to extract data from German companies Which they feel very uncomfortable about That creates a very dangerous environment to operate in Such a huge topic, let's talk about MAN How are trucks affected by data localization? For the truck itself, it's quite simple All the data produced or created by the truck is stored in the truck normally But if you want to run this truck business, you need to know the position of the truck You need to know the condition of the truck to go for predictive maintenance as an example You need to know by when do you need to fuel the truck By when do you need to repair something, run maintenance And so this is not what you need just located within the truck So you need to transfer and transmit it to the system to manage the fleet So that you know, okay, this truck has a range to go for 200 kilometers left So you know, okay, the next drop point needs to be in this range And so this is from a truck itself, it's quite simple But on the back of the truck you have good material And also there you have more often than you expect You have sensors, you create data And also this is very important to understand by when is my material where And also this information, this data needs to be stored somewhere It needs to be transferred to somewhere to a system so that you can use the data So it's all about the data you use to make the business efficient and to run the business successful And so many parties we have in this process It's not just the truck, it's not just the vehicle Even if you think about autonomous vehicles as a next step You need to have exchange of data of information between the vehicles So that every vehicle knows what is the other one doing And so this is how to run this ecosystem at all This is just based on data And you need to have this flow of data between the participants of this ecosystem And it's not just the vehicle itself, the vehicle is just one small And it's a very important and a hope important piece in the system Absolutely, Krüger, especially for the upcoming future Let's use the last 
minutes of this debate to talk more about the international policy debate going on The G20 ministerial statement of trade and the digital economy of 2019 As well as the Osaka leaders declaration and the Japanese presidency Includes the topic data free flow with trust The EU has a regulatory framework in place ensuring the free flow of data within the EU So Mr. Mazur, if you could write a wish list right now to our politicians What do they have to do to improve the situation for your business for all businesses? So for our business, and let's not talk about the truck business But the transportation and logistics business at all For others, it's very important that we have clear rules and guidelines we can use across borders Doesn't make sense that we have in one country this rules and regulations in another country have different ones So a truck is crossing borders, the material is crossing borders And it makes no sense from a business perspective to every border you cross You need to behave different with the data you have, with the data you need, you create, you store And so this is the biggest wish we have that we have really clear regulations, clear guidelines, clear answers How to handle the data, so it should not be an open playground Everybody can do whatever he wants or she wants with the data It needs to be clear and regulated somehow, but this is what we need from the politics As the free flow of data is crucial for us, as in a plant it's very simple You have just one place and you have a production line, here you have no free flow of data need But if you go out on the street and run the business on the street Then you need to have some data you manage your process with And you need to have for this the specific regulations Mr. Lee Makayama, how do you evaluate the G20 principles and maybe what's on your mind for the upcoming declaration 2020 Well, first of all, I think the Japanese presidency with actually the cooperation with the German presidency That was two years prior, has really really cracked the code When we're talking about free flow with trust, you're basically talking about there has to be a balance So if you're like-minded country, you can trust the jurisdiction If we can believe that the other side will offer you legal assistance If they can offer you good environment for your personal information That means that we can let everything flow And that is really a question about conditionality And this is an assessment that we need to make And I think also at the same time in parallel to the discussion we're having in G20 Europe is also going its own way We have invented GDPR, which is a wonderful instrument for protecting of fundamental rights But my criticism is actually relatively minor, but it's a consequence of this It protects fundamental rights, but it does not protect corporate assets We already know that the cyber espionage accounts for about 55 billion euros per year In lost corporate assets from European companies It will cost about 300,000 jobs, lost jobs opportunities across Europe We need to make sure that we also protect corporate assets in the same way The second question is actually does privacy stop us from negotiating trade agreements? 
Absolutely not Trade agreements are not designed to renegotiate privacy, but they are to make sure That the other side doesn't impose trade barriers using privacy as an excuse So this is the reason why we should never ever ask for more exceptions in trade agreements than we need Currently the EU trade agreements contain provisions that will excuse China, Indonesia, Vietnam We don't need a safeguard, we only need to make sure that the GDPR is well accepted by the trade agreements But we should never as a result of this accept hidden trade barriers on the other side So in your point of view, can the EU play in any kind of way a role model in the international context? Well, as an Asian who has actually chosen to become a European Europe will always be a role model and in this context of course, the wonderful German industry innovation is going to be a role model within Europe That's true, and this is the reason why we shouldn't be afraid We don't need to fear Americans or the Chinese, we can actually do quite a lot on our own And we are winning in every other step of the value chain And this is also another reason why we should never ask for more protection than we need Same goes to you Mr. Wajer, what do you think? Can the EU be a kind of role model? I think the EU is already for decades a role model in many businesses And if you look at the international exchanges of experience, competence and so on In the past many companies, many business areas have been looking to the US Today many are looking more to the East But to look at ourselves, I think it's the most important because we have what we need We just need to use it and I think this is what is one of the most important next steps from the EU perspective From the politics perspective but also from the business perspective So I think yes, the EU and especially in the EU, Germany can be a role model We just need to take it Wonderful, Mr. Wajer, Mr. Lee Makayama, thank you very much for this very quick but also very intense panel Like less than 20 minutes, we made it, wonderful I think at the end of the day, we kind of have to deal with this digital walls But I think also for the audience, this panel made it much better to understand what's going behind the scenes And to get more insights about the whole discussion And the good thing is, the international debate is going on Cross fingers for us and for you and once again, thank you very much for all your insights And this is your applause
Panel: Hosuk Lee-Makiyama, Niklaus Waser
10.5446/51115 (DOI)
Wir haben jetzt eine ganz kleine Programmänderung. Wir werden das Panel, was jetzt hier auf dieser Bühne hätte stattfinden, sollen verschieben. Wir werden es aber einfach nur drehen. Wir werden das wechseln. Das heißt, die digitale Infrastrukturen als Wegbereiter einer erfolgreichen Plattform Ökonomie in Deutschland. Dieses Panel hören Sie nach der Kino, die jetzt hier kommt. Und wir freuen uns ganz besonders. Ich werde jetzt in Englisch, als unser Gast, in Englisch präsentieren. Sie ist schon in der Bühne. Ich bin so glücklich, dass ihr hier willkommen ist. Wenn Sie sich auf die Stange kommen würden, meine Damen und Herren, erst einmal willkommen. Vielen Dank für das Komissionär für die Kompetition. Er hat über 15 Millionen Euro von Antitrust-Panel, die die Menge ihrer Prädestellern ist, und wir sind sehr glücklich, dass ihr hier willkommen ist, besonders weil sie natürlich auch in einem sehr spannenden Moment ist, als sie bald in einer neuen Position sein wird. Sie ist die nominierte und designierte Executive Vice-Präsidentin der EU-Kommission. Und in dieser Rolle haben ein paar internationaler Kommentarinnen die eigentlich die stärkste technologische Regulierung in der Welt gehalten haben. Bitte einen sehr wunderen Applaus für Margarete Westhayer. Vielen Dank für diese wunderte Willkommission. Und vielen Dank an die Ministerin für eine eventuelle Erfolgs-Empfehlung. In Zeiten wie diese, ich denke, ist es größtendlich wichtig, dass wir diskutieren, dass wir einen gesetzlichen Blick auf was wir vor uns selbst haben. Denn jetzt werden wir die Entscheidungen, die Europa erlauben, die Möglichkeiten, die die Digitalisierung bringt, bringen. Es bringt Möglichkeiten für die Konsumers, um freier, größer, mehr erfüllende Leben zu leben. Und es bringt Möglichkeiten für europäische Businesses, um neue Marken zu finden, um neue Erfolgs-Successen zu schaffen, um mehr Nehmen zwischen Konsumern zu erfüllen. Die Erfolgs-Empfehlung hat viele Industrie, in denen Europa die Welt bringt, wie die vorhandene Verkaufung, Transport, clean energy, Finanz. Und mit dieser Behandlung sind wir gut geplant, um die Verkaufung und die Erfolgs-Empfehlung von allen Industriem in die Digitalisierung zu finden. Das ist nicht zu sagen, dass Europa in der Führung eines offenen Goals ist, aber Europa ist natürlich aufgeregt. Und sie müssen mit einer schweren Konpetition von Businessen in Ländern wie China oder die USA. Das Ding ist, dass Europa auch dynamisch und innovativ ist. Aber vielleicht ist es wie eine Nacht in der Führung eines Verkaufungs-Successes, die sich die Schworte in ihrer eigene Hand vergessen. Wenn das eine Situation ist, dann können wir uns die falschen Entscheidungen, wenn wir uns über unsere eigene Kraft vergessen, in der Führung zurückzuholen, wenn wir uns auf die Grenze stehen müssen. Weil die Führung in Europa nicht mehr wie die USA oder China zu sein, sondern zu spielen an unserer eigenen Kraft. Es betrifft eine technische Entwicklung mit einer Führung. Weil in Europa unsere Übersetzung zu Digitalisierung ist nicht um die Führung von Waffen um die Führung zu bekommen. Es geht um die Führung von Taxi, um die Arbeit der Führung zu verabschieden. Es geht um die Führung von Menschen's realen Wissen, um Healthcare zu verbessern, um die Verkaufung zu verabschieden, um die Führung von Wissen zu verabschieden und um die Klimaverkaufung zu verabschieden. Und unser Erfolg wird mit unserer Führung zu der Führung zu der Führung zu Konpetition beteiligt. 
Weil es eine Konpetition, die innovativ ist, die Ruhe für alle Be phase ist erfolgswichtig. Das ist mein Grundased penal Nawide 25 cm zu commentate. Mathe mit island improve Telefon aber immer announiert Definition. Seine Handballحم AO gateway ist Boston, is almost a big question, whether this is a second and third party question we will not do it the right question will be the question question that comes along yet What the TXert die digitalen Welt. Und eine der kommenden Features ist die, die Sie für die Theme des Tages die Werte der Plattformen haben, weil die digitalen Ökonomie eine Plattformeconomie necessarierlich ist. Die Digitalisierung hat uns die Konnexion von Millionen von möglichen Konnexen geübt. Aber wir brauchen Plattformen, die uns genau finden, was wir brauchen, um uns die Map zu geben, die uns in einer digitalen Welt zu navigieren. Und das gibt Plattformen enormen Power. Und die Herausforderung ist sogar größer, wenn man sich das realisiert. Und das ist oft die Plattformen, die mit sehr wenig Konpetitionen auf die Plattformen befinden. In der Digitalisierung ist es sehr oft der Fall, dass es in der Bibel steht, dass sie heute geübt werden müssen. Das Geist kann Ihnen so einen großen Start geben, dass es für kleine Reifen sehr, sehr schwierig ist, eigentlich zu kompetieren. Und wenn die Plattformen digitalen Masken sind, kann der Markt schnell in den Fällen von diesem Geschäftsraum für niemanden anderen gehen. Wir müssen die Konpetitionen sicherlich auf die Plattformen zu enforceieren, um die Plattformen digital zu stoppen, um ihre Power zu benutzen, um ihre Reifen zu verdienen, eine Chance zu kompetieren. Und wir müssen schnell auf die Plattformen zu actieren. Weil, wenn ein Markt die Plattformen hat, wird es sehr schwer, die Konpetitionen zurück zu bringen, um die Plattformen zu verhindern. Es ist nicht genug für eine oder für die Plattformen, um die illegalen Behörden zu stoppen. Es ist eher wie ein Ecosystem, das so zähig ist, dass es, wenn der Plöter heute stoppt, das Ecosystem nie auf its own rechst. Und wenn wir finden, dass ein Markt vor dem Weg ist, dass es nicht genug ist, eine Plattform zu stoppen, unsere Verantwortung ist auch, die Ecosystem zu verhindern, um die Plattformen zu machen, dass sie positive Steppen nehmen, um sie zu erneut zu werden. Aber die Plattformen sind nicht nur eine Konpetition, sondern auch eine Konpetition zwischen diesen Plattformen. Diese Entscheidung kann auch die Städte von anderen Marken, wo die Plattformen auf die Plattformen zu verbinden, mit ihren Kunden zu verbinden. Und ihr wisst, dass sie mehr sind. Weil ihr auf Plattformen shopt, und ihr wisst, dass ihr die Plattformen auf die Plattformen zu verhindern, und eure Unternehmen werden durch Plattformen gefunden. Und diese Plattformen, die wir aufhören, können starten, die in Effekten wie ein Markenregulator beginnen. Die Wahl, über wie sie verschiedene Unternehmen ranken, kann sich sich sich vorstellen, wer eine Chance hat, um zu competieren. Und wenn ein Produkt oder ein Geschäft aus der Plattform, sie das Geschäft auf das Geschäft seriously effektieren kann. So, wenn Plattformen als Regulatoren actieren, müssen sie die Regulatoren in eine Art, die den Marken für die Konpetition aufhören. Aber die Erfahrung zeigt, dass sie nicht mehr die Plattformen, die sie haben, die sie nicht mehr haben, um ihre eigenen Services zu helfen. Zum Beispiel Google's Search Engine. 
That search engine was a very important platform for comparison shopping services, because many consumers reached those services through Google, so the way Google presented its search results had a powerful effect on who would succeed and who would not. And when Google started to show the results of Google Shopping at the very top of the first page, while at the same time demoting its rivals far, far down the list, it made it very difficult, if not impossible, for those rivals to compete. That is a decision we are still monitoring. But now we are also looking at whether Amazon uses its control of its platform to favour its own services. Millions of retailers rely on Amazon to connect with their customers. But Amazon is also part of the competition on its own platform; it competes with the very sellers it hosts. We are investigating whether Amazon uses the data it collects from those sellers while operating the marketplace, and whether that gives it an advantage over the other sellers it competes with. This is still only an investigation, not a conclusion. I do not know yet whether we will find that Amazon has done anything wrong. But it is a reminder of a very important point: that platforms, with their very central position in the market, can get hold of data that others simply cannot get their hands on. And in this digital age, data can be vital for a company's ability to compete. With more data, companies can understand better what their customers need. It can allow them to develop the best possible artificial intelligence. But one of the most important things data can do is help target advertising — finding the right audience for a product or, for that matter, for a political message. It is no surprise that the leaders in digital advertising are platforms, platforms that have built their business models around data. In the USA, six out of ten digital advertising dollars go to just two platforms: Google and Facebook. And the more you look at how these companies operate, the more you see that a large part of what they do has one important thing in common: the hunger for data. We need to understand how these mountains of data affect competition in the market, and how we can square that. Competition rules and their enforcement are only one part of the puzzle. Our digital future should not only meet our needs as consumers; it should also respect our values as citizens. So, alongside the competition rules, we also need regulation to make sure that platforms stick to those rules, and that digitisation serves us as consumers as well as it serves European businesses of all sizes. Ursula von der Leyen has already made it very clear that one of the top priorities of the next Commission is to put the right policies in place so that Europe is fit for the digital age. And I am indeed deeply honoured that she has asked me to coordinate that work as Executive Vice-President. Digitalisation is a complex thing. It affects so many different parts of our lives. So our response will also have to cover many different aspects. We will need a new European approach to artificial intelligence, so that artificial intelligence supports human judgement instead of replacing it. We will need to put a framework in place so that digital businesses pay their fair share of taxes, just as any other business already does.
And we'd need rules, including a new Digital Services Act, which can make sure that platforms actually do serve people and not the other way around. And that means updating our safety and liability rules, as well as making sure that platform workers have the protection that they need. And we'll have to give European businesses the support they need to turn innovative ideas into world-beating products, including a new strategy for small and medium-sized businesses — startups or not — to enable also smaller businesses to grasp the potential of digitalisation. And we will have to stand up for our values — values like freedom and fairness and indeed democracy itself. Because it must not be technology that decides the future of values like freedom. Commissioner Vestager, thank you so much. Would you be up for one question? Yes, please. Is that okay? So thank you so much. You've given us an overview of your ideas. And of course, we also heard the Commission has promised to issue ethical and human-centred AI rules in the first 100 days of your mandate. What can we expect to be your absolute priority here? You just sort of, you know, outlined the fields. First of all, we must be very busy, because 100 days is not much when you're dealing with very complex issues. Also because, of course, we would like to listen; you should not just be the victims of some sort of master thinking in an ivory tower — that would not make any sense. So we will be very busy. We will listen hard and we will push these agendas. And I think the most important thing when it comes to artificial intelligence is, well, how can we create a framework so that we actually can trust it, that there's human oversight. And the second thing, of course, is how to feed it. Because access to data is the key when we want to build more artificial intelligence, when we want to train it better, train it more. And here, of course, the question of bias comes in. Because, of course, artificial intelligence discriminates. That's its nature — it chooses one over the other. But we do not want artificial intelligence to discriminate where we don't want discrimination, because we're working for a gender-balanced society, we're working for a society where minorities feel counted in. So how to make sure that we get these very fundamental things right — that is absolutely top priority. So trust and data. Exactly. Some might say China has all the data, the US has all the money, you have said. But Europe has purpose. What is that purpose? Well, I think we all know it: that a good society is a society where you feel safe, where you stay healthy, where you easily and conveniently get from A to B, and where you can do business. And I think this is the important purpose. And I think that AI, quantum computing, blockchain technology, all the rest, all that we haven't even heard of yet — that's a great enabler for us at long last to achieve those goals for all citizens in society so that no one feels left out. Wonderful. Thank you so much, EU Commissioner Vestager. It was my pleasure. Thank you very much. Thank you for making it and for being with us. Thank you.
Keynote by Margrethe Vestager, EU Commissioner for Competition
10.5446/51207 (DOI)
Among the essential features of the inverted classroom model in its mastery variant are formative electronic tests that provide the link between the digital phase of content delivery and the subsequent in-class phase. These formative tests, the so-called mastery worksheets, are not only used to find out whether our students have mastered the digital content, but their results are also employed to influence the activities during the in-class phase. A high mastery level does not require much frontal input in class and leads to more intensive practicing, whereas a low mastery level requires more reteaching activities. By the way, the average mastery level in classes offered via the Virtual Linguistics Campus is around a high value of 67%, that is, the in-class phase can be dedicated to practicing and deepening with very few selected frontal parts. But what types of assessment do we use to generate the mastery level? Well, like most digital teaching scenarios, we have been employing multiple choice tests where you have either one or several choices. In this example taken from the worksheet on language evolution, you simply have to find one or four alternatives. Such simple tests have in many cases on the Virtual Linguistics Campus been extended to what we call multiple choice plus, where the choice influences the set of alternatives. In this test on distinctive features, students have to construct a matrix making yes, no or plus-minus decisions. Nevertheless, such tests have neither been standard ways of assessment in our field, linguistics, nor do they involve sophisticated questioning principles. Multiple choice tests can often be passed by selecting the correct answers by mere guessing or even accidentally. So we had to think about alternatives. One alternative is our input tasks, where questions have to be answered by means of simple text or in some cases even by means of short text passages. The problem is that despite using elaborate parsing mechanisms, machine-based evaluation is not 100% secure. Thus, we only use input tasks in those situations where answers can be kept short or unambiguous. But how can we get rid of the unwanted multiple choice formats? One idea is to make use of the multiple choice questions and their normally four to six suggested answers, but not present all answers at the same time but successively. We have termed this alternative test format dynamic multiple choice. In such tasks, only one of the up to six possible answers is presented at a time, requiring an immediate judgment without having seen the rest. In other words, such an assessment type combines single true-false judgments with multiple choice. Let us demonstrate this task live. As you can see in this dynamic multiple choice task about proto-languages, here is first of all the question with only one possible answer. And our students have to decide whether this answer is true or false. If their choice is correct, that's it. If it is false, well, then the next answer is suggested, and so on and so forth. Here is another question. See what happens. You will probably agree with me that such a test format requires a much higher understanding of the topic than multiple choice proper and that the questions cannot that easily be answered by mere guessing. The dynamic multiple choice format involves several interesting side effects. First, we can use our former multiple choice questions without any change. All we have to change is that we need a new template for the presentation of the questions.
Furthermore, and this is another advantage, each question can now be used several times, since in the case of failure the correct answer will not be shown, so you can use the question again. And last but not least, you can apply two evaluation mechanisms: an inclusive one, where each correct true-false decision is incorporated — let's say you have four possible answers, then each correct one is worth 25% — or an exclusive evaluation strategy, where the entire question is graded as either correct or wrong. As a consequence, simple multiple choice tests are no longer an option on the Virtual Linguistics Campus. They have been replaced by dynamic multiple choice tests in most cases. These tests are far more challenging for our students. They involve more than just accidental clicks, and they can expand the set of questions of classical multiple choice tests considerably. Nevertheless, they only constitute a transition towards more intelligent testing formats in which the machine can reliably evaluate free user input and which allow us to make more precise statements about the student's mastery level. The Virtual Linguistics Campus team will continue working on such assessment formats. Thanks for your attention.
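As an illustration of the two grading strategies just described — this is a hypothetical sketch, not the VLC's own code — the scoring for one dynamic multiple choice item with several successively presented answers could look like this in JavaScript:

    // judgments: one boolean per presented answer; true = the learner's true/false call was correct
    function scoreDynamicMC(judgments) {
        var correct = judgments.filter(function (ok) { return ok; }).length;

        var inclusive = correct / judgments.length;              // e.g. 3 of 4 correct -> 0.75
        var exclusive = (correct === judgments.length) ? 1 : 0;  // all-or-nothing

        return { inclusive: inclusive, exclusive: exclusive };
    }

    // Example: four suggested answers, one wrong judgment
    scoreDynamicMC([true, true, false, true]);   // -> { inclusive: 0.75, exclusive: 0 }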
In this short E-Lecture, Prof. Handke introduces the Dynamic Multiple Choice format that he and his team developed for their VLC courses. This new type of E-assessment is by no means perfect but it overcomes at least some of the shortcomings of simple multiple choice tests.
10.5446/51155 (DOI)
All right. I think we are ready to start. Thank you for coming to this session. Let me just recognize some faces — how many of you were sitting here before and saw the previous session? How many of you didn't see my previous session? Guys, I'm sorry. Like, that was a very nice session, wasn't it? I'm joking. So, this session is about Windows 8. In the past hour, we talked about Cut the Rope and I shared some of the experience taking the Cut the Rope game that was initially a native application for iOS into an HTML5 website and then into a Windows 8 app. In this session, I want to focus more on the concept of Windows 8 apps. And I will break down and I will show you some of the tools. I will show you some of the platform, some of the capabilities, some of the frameworks. We'll talk about the store. I just want to give you an overview of what it means to build a Metro application in HTML and JavaScript today. Before I start, my name is Giorgio Sardo. I work in the Windows Evangelist team in Redmond. I'm actually responsible for many of the applications that you see in the Windows Store today, as well as many of the beautiful HTML5 websites that we built with the Internet Explorer team in the last year. You probably saw some of them like Pac-Man, BMW, Aston Martin. We've done hundreds of them. Before I start, let me ask you about you. Have you already looked into Windows 8? Have you already started building a Metro application using Windows 8? Okay, some of you did. This is an intro session. So, I will start at the high level and eventually sometimes I will go deeper and I will show some very deep code or something behind the scenes. If you have any questions, feel free to raise your hand. Don't be shy. I'm here to answer. I'm here to make sure that you enjoy the session. Let's start with HTML5. I will start with a couple of slides that I showed earlier. When we were working on IE9 — by the way, there is a phone; if you want to answer it on the side, feel free to answer. When we started on IE9, HTML5 was just at the beginning. Or I should say it was getting more solid, more stable. And we had a number of CSS features. We had a number of HTML features like canvas, audio, and video. And it was a fairly good amount of capabilities to start building rich websites. Who is playing Cut the Rope there? Can you hear it? Is it me? I have devices everywhere but I don't think I'm playing Cut the Rope. No. Okay. More recently, we released Internet Explorer 10. And as you can see, in the matter of two years, the number of standard features just doubled. We have more than hundreds of capabilities in the browsers today. Most of these are also available in Chrome, Firefox, Opera, Safari. Those in bold are in particular the most recent. And so you see new things from the CSS world like Grid or Flexbox. And I'll show you some of these. You see APIs to read files or to go offline or to do drag and drop. Or you have Sockets or Web Workers. Like yesterday, there was a very nice workshop about Sockets and how they're evolving. And Remy talked about this as well. And so I want to spend just a couple of minutes to show you some of the newest features of the HTML5 world because you will see that those become very important as you start building applications. So I'm going to go to the IE Test Drive. This is a Microsoft website where we publish a lot of new demos every time we add new features to the browser. In particular, I'm going to click the Windows 8 web platform.
And I like this page because it gives you a very quick idea of many of, a very quick demo for several features. And so, like, I always skip the usual ones that you already saw, like text shadows, borders, et cetera. And I will start with the 3D transforms. Like now, using CSS, it's super easy to do 3D transforms on your DOM content. So eventually, you can set a rotate function and then it comes to you. You can change the perspective. And in terms of code, this is just one line of code. So I'm just using the CSS3 transform property: perspective, rotateY. Notice that we got rid of the prefix finally because these specifications are getting more solid and it's now safe to use without prefix. Another one that I really like is the ability to do animation or in general transitions from the browser. And so, for example, now I have a div element and I want to animate it from one position to the other. I can use a CSS transition and I can also change the ease-in, ease-out function. And again, here's the live code that I'm using. Here's the real-time sample. And you can see that very easily you can start creating nice effects just using CSS3. It's also about layout. So let's say, for example, you have a website with some textual information, either it's like an article or it's some sort of news or a blog. In the past, if you wanted to have different columns, you probably started building different columns in your layout. What you can do now is use a CSS3 standard called multi-column that just allows you to give a number of columns and then the user agent will take care for you of splitting the text into different columns. So you can see that as I increment this slider, it just changes dynamically. In terms of lines of code, this is it: columns 5 and then set the width to automatic. So it kind of reserves space in an equal portion for each column. And you can also customize the way you do hyphenation, so the way you split one word between one line and the other, with the number of characters, etc. So there is a very deep level of customization even in the multi-column layout. And so when you look, for example, at a newspaper, this is probably what newspapers on the web will look like very soon. Like the website will be smart enough to resize, to be responsive in its design and to resize the content in columns. My favorite, this is a new standard, is a new specification that Microsoft submitted to W3C some time ago. It's called Grid. Let me tell you the story about the grids. Some time ago, I guess that many of you built websites. And the first website that you built, at least when I was a kid, my very first website probably was my own personal page. Hello world. I can have my personal blog. And the first thing I did was creating a table. You remember the table? Table, TR, TD, TD for the different columns and the different rows. And you set all of the layout using the table and it's beautiful because it works. And you just split your page into a table. And then somebody came to you, Microsoft, or others, saying, bad. Don't use the tables. You remember the time? Like tables are bad. Don't use TR. Don't use TD. Use divs. Use float. So you start converting all of the tables to divs. And you start using float left, float right. And then as soon as you start having 20 divs and a lot of floating elements in the page, you start asking, what the hell is going on here? I just want a table with this column and this row and this other column. And it's a mess. Have you been there before?
I see people nodding like, yes. Like if you build websites, you've been there before. And the reason is that the div and the floating were not meant to be a way to organize the layout of the page. They were for the content. They were for, like, text or images. And so we've been working on a new standard. We call it CSS3 Grid. And the idea is that you can define a virtual grid on your page using CSS. And so for example, the grid that you see on this page is actually defined by display grid and then by a number of columns and a number of rows. And eventually you can change the size of the columns. So you can see now I changed the size of a column. I'm just saying use basically 12 fractions of the space for the first column and then assign the space equally to the other three columns. And you can add new columns very easily. So you can see now I have more columns. And you can edit and maintain this over time super easily. So this is called CSS3 Grid. This is my favorite specification. If you don't want to use Grid but you want to stack the elements on the page, you can use the Flexbox. The Flexbox is similar to the StackPanel if you're coming from the XAML world. It's just going to stack elements one next to each other and fill the space as needed. SVG is another standard which is super, super powerful. SVG stands for Scalable Vector Graphics, so vector graphic content. So for example, if you have a flag, like if you go to Wikipedia and you look for the flag of your country, that's an SVG file because most likely, for example, the flag of Italy is made of three rectangles, right? Green, white, and red. And you don't need to have an image or a PNG to define the rectangles. You just define the vector, the three rectangles, so that it can scale easily and it saves space and is more linear. Some of the things that the SVG working group has been working on in the last years are called filters. So for example, you have an image here and you can add a filter like the Gaussian blur effect. And now the user agent is dynamically applying a filter to the DOM content. So what we did in IE 10 was take the same syntax of the SVG filter, which is the syntax that you see below, and using CSS, we apply it to any DOM element of the page. And so imagine you have a page where you need to have transitions or animation of content, like fading in and out very quickly. It's as easy as that to create that animation now using the filter. And this is obviously hardware accelerated in IE and I think it will be also hardware accelerated in other browsers. And so this is a very quick overview to demonstrate the fact that HTML5 is extremely powerful today. And there are so many things and so many features. I don't pretend to know all of them. And I've been spending quite a lot of time in the last years working with HTML5. But, you know, there is probably a specification for most of the needs that we have today. And if there isn't, let us know because we want to implement it. We want to make sure that, you know, developers can build complex applications and you're happy with the HTML5 platform. So now that we had a quick overview of some of the new features in the HTML5, CSS, SVG world, let's move into the Windows 8 and Metro world. The basic idea behind the Windows 8 platform, and this is the key message that I want to leave for this session, is that if you know HTML and JavaScript today, then you're already a Windows 8 developer. You don't need to learn a new language. You're already a Windows 8 developer.
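Before moving on to the platform itself, here is a consolidated, hedged sketch of the CSS features demoed above in plain HTML/CSS. The class names and values are made up for illustration, and the grid/flexbox rules use the prefixed -ms- syntax that IE10 shipped (modern browsers use the unprefixed equivalents noted in the comments):

    <style>
        /* 3D transform and transition: one property each, unprefixed in IE10 */
        .card   { transform: perspective(500px) rotateY(30deg); }
        .slider { position: relative; left: 0; transition: left 0.5s ease-in-out; }
        .slider.moved { left: 300px; }      /* toggling this class animates the move */

        /* Multi-column: the engine splits the text into columns for you */
        .article { column-count: 3; column-gap: 2em; }

        /* Grid layout, IE10 prefixed syntax (modern: display: grid; grid-template-columns/rows) */
        .page    { display: -ms-grid; -ms-grid-columns: 1fr 2fr 1fr; -ms-grid-rows: 100px 1fr; }
        .header  { -ms-grid-row: 1; -ms-grid-column: 1; -ms-grid-column-span: 3; }
        .content { -ms-grid-row: 2; -ms-grid-column: 2; }

        /* Flexbox, IE10 prefixed syntax (modern: display: flex; flex: 1) */
        .toolbar { display: -ms-flexbox; }
        .toolbar button { -ms-flex: 1; }
    </style>
    <div class="page">
        <div class="header">Header spans all three columns</div>
        <div class="content">Content sits in the middle column</div>
    </div>
    <div class="toolbar"><button>A</button><button>B</button><button>C</button></div>

The point of the grid and flexbox rules is exactly the one made in the talk: the parent declares the layout, so the children need no floats and no layout tables.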
Like in the previous session, I showed the copy and paste from a web application, like Cut the Rope, to a Metro application. It was really a copy and paste because the code is the same. Let me talk now about the platform. Let me explain to you why that statement is true. So when we look at the Windows 8 platform, it all starts with the core OS, right? That's the kernel and that's the basic layer that sits at the bottom. In the past, we had, like when we look for example at Windows 7, we have desktop applications. For example, right now I showed you Internet Explorer, which is the one in the right box. Internet Explorer is a desktop application, or you can have C and C++ applications, or you can have .NET applications, right? So you have desktop applications. In Windows 8, we continue to run all of the desktop applications. So if your application was working before on Windows 7, we continue to run it as is on Windows 8. But the new interesting part is what's there on the left of this chart, and it's called Metro applications. So the Metro application starts from the ground. We call it Windows Runtime, WinRT. And Windows Runtime is our way of reimagining Windows. I know reimagining sounds like a funny term, but we literally rewrote a lot of the code behind, a lot of the plumbing, to get rid of all of the layers, all of the abstractions, and make it super easy. We have the kernel. On top of the kernel, there is Windows Runtime, which is a set of APIs that the operating system exposes. And on top of the Windows Runtime, there is the language that you use. We call them projections, and there are three projections. Number one, C++ and C. Number two, C# and VB.NET with XAML as the user interface. And the third is the new one, which is HTML, CSS, and JavaScript. And this is what we're going to focus on in this session. So this HTML, CSS, and JavaScript is exactly the same that you have inside the browser. So inside the browser, we have the Trident engine that does the graphics and the Chakra engine that is the JavaScript engine. The same two engines are the same engines used to run a Metro application in Windows 8. So it's the same code, right? Everything that was running in the browser now runs in this box. So what's the difference? It's that this is the kind of standard-compliant code, this is the code that works in the browser, this is the code that you already know. This code has access to the Windows Runtime. And the Windows Runtime exposes all of the OS APIs. And this is what makes your code a powerful application on the Windows 8 platform. I also want to spend a note here to say that it doesn't matter which language you use, the Windows Runtime is the same for all of the languages. And that's the trick, right? It's not any more like if you're using C++, you can do more things than JavaScript or you have more APIs or you have more lower level access. No. The Windows Runtime is one. And all of these languages have access to exactly the same APIs of the Windows Runtime. And the other nice thing about this model is that eventually you can mix different languages. So you can have, you know, like a core application built in HTML and JavaScript. And then, I don't know, you need access to a specific driver like the joypad or you need something specific. You can just have a C++ module inside your HTML5 application that just works fine as well. Or you can mix between C# and HTML, that works fine as well. So we allow you to mix between different projections. Again, because the plumbing, the runtime, is the same.
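To make the projection idea concrete, here is a hedged sketch — not from the talk — of calling one WinRT API, geolocation, from the JavaScript projection. The whole runtime hangs off the global Windows namespace and asynchronous calls come back as promises; it assumes the Location capability is declared in the app's manifest:

    // Same WinRT class the other projections use, reached through the Windows namespace.
    var geolocator = new Windows.Devices.Geolocation.Geolocator();

    geolocator.getGeopositionAsync().then(
        function (position) {
            // Windows 8.0 exposed latitude/longitude directly on the coordinate.
            console.log("lat: " + position.coordinate.latitude +
                        ", lon: " + position.coordinate.longitude);
        },
        function (err) {
            console.log("no position available: " + err.message);
        });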
So what is the runtime? Like why do we need a runtime? Let me break down the Windows RT, the WinRT APIs, into different segments. It starts from the fundamentals. And this is about cryptography, about globalization, localization, it's about timers, threads, etc. Then there is a media component. It's a set of APIs that allows you to do media. And media is not just playing, you know, an audio or a video file. It's also, for example, streaming. It's also DRM. It's also being able to take, I'm not sure if you saw the E3 announcement, being able to take my tablet and connect in real time with my Xbox and start streaming from here to the Xbox and have this become a remote controller for my television and have multiple devices connected to each other. We call that a Play To scenario. The Windows Runtime includes a lot of devices. And so Windows 8 will run on a lot of different hardware and devices and form factors. And you will have access to all of the APIs of these devices. For example, near field communication, NFC. You can have two devices communicating just when they're close. You can use the accelerometer, gyroscope, inclinometer, or you can use geolocation if the device is able to support GPS, etc. It's about communication, it's about data. And so, for example, you can send SMS, you can make phone calls, you can work with data streams, XML feeds, etc. And lastly, it's about user interface. And so obviously, the Windows Runtime also gives you the APIs to draw on screen and to have complex input and accessible and user friendly controls. So all of this is the Windows Runtime. So enough talking. Now let's create the standard basic Hello World, Hello Metro World application. So I'll go slowly here, but feel free to interrupt me if you have any questions or you want me to go deeper in any topic. So I'm opening Visual Studio. Notice that this is the Express SKU of Visual Studio, specifically designed for Metro applications. It's free today, so you can download it on dev.windows.com. You can see that I have a number of different templates. And you notice that the languages here are exactly the three projections that I just talked about. And I will start with the, I will actually today, we're just going to talk about the JavaScript projection. I saw that in the agenda. I think after this session, there will be another one that will talk about the XAML and C# applications. I will start with the blank template. So it's easier and we can understand better what's going on. So I create a new blank template. And before I show anything, I will just run it. So I can debug this on my local machine, on the simulator, or on a remote machine. We'll just debug it on my local machine. That's it. This is my Metro application. Isn't it beautiful? Can you see the text? It's a bit small, right? So what can we do from the tool? And notice that I'm still in debug mode, right? If you look here at the top, I'm still debugging the application. We have this page called DOM Explorer. And this is very similar to the DOM Explorer that you have in the browser, either in IE or other browsers have it today. And what it allows you to do is, for example, you can click here, select element, and then you go back to the page and you can in real time select an element of that page. And it will go straight to that element. And then you can start doing some live editing. So this is similar to the same editing you would do on the fly on a web page. So for example, I can say change the color of the text to green. And also let's make it bigger.
So font size 110. And now you see that I'm in real time editing my Metro application. Notice that I'm still debugging. And this is an application, right? It's not like a website where I hit save, redeploy or repackage. I'm debugging and I'm in real time editing CSS properties on my application. Can I do more? Of course I can do more. What a question. So let me show you, for example, another feature of the tools; it's called the JavaScript console. And the JavaScript console allows you to query dynamically on the page. So for example, I have a P element here. And so inside my JavaScript console, I'm going to say document.querySelector, I'm going to take my P element, and then I'm going to change the content to hello — and, you see, did it work? There it is. And so you can also call dynamically, you can also inject or dynamically call JavaScript code inside your application. That, by the way, is still in debug mode. So I didn't interrupt. I didn't restart. I'm still editing a live running application. And those are just two simple capabilities, right? So what is an application? Like what's going on here behind the scenes? And so let me break down the different files that you have in this template. It all starts from a page, an HTML page. This is basically the home page of your application. Let's keep this part. We will talk about this later. We have a CSS and a JavaScript file, just default files that are almost empty, like there isn't any useful code at this point. And then there is some HTML. And so when we launch the application, basically Visual Studio is taking this code and it's just packaging it into a package and then is running the application. Let me show you how, for example, to use the Windows Runtime from here, right? So far, you already know everything. So far, it's just HTML and JavaScript. Now let's go and use, for example, a new API that is called FilePicker that allows you to select objects from your file system. Okay. This is very complex. I'm joking. Super simple. So a button, and we call this, we give it onclick, do. Then I'm going to create a script block. I'm going to create a function do. Okay. It's a keyword. And now I'm going to define a variable called picker. And then I'm going to call a namespace called Windows. Windows is basically the namespace that defines all of the WinRT APIs. So all of the capabilities of Windows RT are defined under the Windows namespace. And the Windows namespace, if you look quickly, has other namespaces like media, networking, exactly the same namespaces that I showed you in the previous slide. And so, for example, I'm going to go into the storage. By the way, note also the IntelliSense. That's something we've been working on a lot in Visual Studio. We support full IntelliSense on any JavaScript file or project resource that you have in the project. And it works very, very well, including the helpers and the comments and all of that. So storage, then I'm going to go into the pickers. And inside the pickers, I'm going to take the FileOpenPicker. And as I say, the FileOpenPicker is a way to open files from the file system. So now with my variable picker, I want to filter the type of elements that I will see. And so I say — sorry — replaceAll: I want to see only, for example, JPG and, say, PNG. So only images. And then picker.open — no, pickSingleFileAsync. Okay. So three lines of code. Let's see what happens when I run this on my machine. So I have my button do, I click do. And this is the picker control.
So this is one of the controls that comes with Windows 8. And eventually I can go to my pictures folder, I select an image, I click open, nothing happened. Of course, I didn't write any code. So let's write some code to make something happen. Let's add an image tag here. So I want to select an image from the file system. And then I want to show the image inside my project. So I'm creating a tag called image and I'm calling this tag image. And now I have this call, right, which is opening the file picker and is returning an image. And I want, I need to get that image. How do I do it? How do I do it? This operation is asynchronous. You see the async method, which means normally, probably, if you're using other APIs today, you're creating a callback and then you're registering to that event, and when that event is raised, you get the image and then you share variables and you do all of that. We use a smart pattern. It's called promise. This is a JavaScript pattern. So it's a standard way to write JavaScript. It's a good best practice that allows you to concatenate and, with an asynchronous method, allows you to very easily manage the output of that method. And so what you can do is use the keyword then. And so then it's going to, you see, give you the ability to connect immediately to another function that is called when there is a result for the complete event. So I'm going to create an anonymous function here. And so basically, now I know that the code here inside this function will be called when the file picker is done and the user already selected the image and everything went correctly. So this is a pattern that you will see everywhere in the framework. It doesn't matter which API you're using. All of the APIs are asynchronous and all of the projections allow you to very easily work with asynchronous code. And so at this point, I can go to my image and I say image.src equals URL.createObjectURL from my file. File is actually returned from the file picker. This URL.createObjectURL again is a standard way to take an image and convert it into a blob URL that can be processed by the Trident engine. So let's run this again. I go back to my folder. I select my image. Select open. There it is. It works. Right? Easy. Now let me do it again. I go do. Not sure if you noticed, but in this view, you can select different folders in my file system. So you can go to documents, pictures, etc., etc. But here at the bottom, you also have other applications that are installed in my system. And the idea is that I can go, for example, to any application, for example, the camera and say, hey, let's take a picture. Okay. Then I take a picture and then I crop the picture to the best part of this picture. Right? Then I say, okay. Open. There it is. Back into my application. How many lines of code did I write for this? I didn't. So what's the trick? What's the magic? I talked about contracts in the previous session. This is, again, another contract. So this is a very simple example of where you, as the app developer, know that you can do something with the image. In this case, you're just displaying the image on the page. And you don't care. And you don't need to care about where that image comes from. The operating system allows you to connect to any other application installed on your machine that can deal with images and can give you images. In this case, it's the camera. But I can go and take it from the cloud. So eventually, I can go to my SkyDrive and just pull one of my photos from my cloud.
I can go to the photos application and I can get into my Facebook profile or get other images from there. So this is extremely powerful. And you will see this pattern in many other controls of the Windows Runtime, including eventually some little errors. I'm joking. Like, this is obviously just a demo. So this gives you an idea of the power of the Windows Runtime, right? And the ease of access from JavaScript to all of these APIs. A question that I get asked very often is, you know, eventually, we start building touch-friendly applications. And not everybody probably already has a laptop like this. Like, my laptop is nice because there's a touch screen. So I can write. I can touch. And it just feels beautiful. Another feature that, so basically, let's say that you want to build a Metro application today and you want to make it touch-friendly. How can you do it, right? You don't have a touch machine. Option one, recommended, you go to your manager or your boss and you say, hey, boss, can you buy me a new laptop? This usually works. I'm joking. It doesn't. But you should try. You should show some of these demos and then explain why you need to get a new laptop which has a touch screen. The other option, if you don't have that budget yet, is to use a simulator. And so in Visual Studio, you can go and select simulator. And then, let's see if it works. Instead of running the application inside Windows, you're actually running the application inside a simulator that runs a virtual machine with Windows. And so now I have exactly the same application running inside this simulated environment. What's the benefit of the simulator? Let's see here. Here on the side, you actually have a number of buttons. And the idea is that the simulator allows you also to simulate the touch. And so if you click this, it simulates single touch points. If you select the pinch zoom touch mode, it will simulate the gesture of pinching on the screen. I have too many zoom things going on. Or you can simulate the rotation done with two fingers inside your application. So there are a number of capabilities given by the simulator. Okay, there it is. And you can also simulate things like location or sensors or the camera, or you can take a screenshot of the app. And eventually, also, the simulator is touch friendly. So now I'm navigating inside a virtual machine that represents a copy of my real machine. And I can deploy my app inside the simulator. So this is very nice, very powerful. The third option, let's say that you're building an application for a device like this or an ARM device. What you can do there is select remote machine. So if you go to the properties of the project, under Debugging, Machine Name, you can actually try to locate a machine in your network. Now, this is not connected to the Internet, so you will not see it. But if this was connected to the Internet, it would recognize my machine and it would allow me to connect dynamically to this machine. And as you start remote debugging here, the application runs on this device. And so that is a very simple way to debug applications on real hardware. All right. So we talked about the tools. Let me share a few more details and let me close the simulator for now. Any questions so far? Are we good? You like it? Good. Now, what we did so far was playing with a very simple application. You probably know that as you start building more complex applications, sooner or later, you will end up importing some framework inside your project.
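Before the framework discussion, here is the picker flow from the demo above consolidated into one hedged sketch. The element ids and the starting folder are my own assumptions, but the WinRT names (FileOpenPicker, fileTypeFilter.replaceAll, pickSingleFileAsync) are the ones dictated in the talk:

    <button onclick="doPick()">do</button>
    <img id="image" />
    <script>
        function doPick() {
            var picker = new Windows.Storage.Pickers.FileOpenPicker();
            picker.suggestedStartLocation =
                Windows.Storage.Pickers.PickerLocationId.picturesLibrary;
            picker.fileTypeFilter.replaceAll([".jpg", ".png"]);

            // pickSingleFileAsync returns a promise; then() runs once the user has chosen.
            picker.pickSingleFileAsync().then(function (file) {
                if (file) {
                    // Turn the StorageFile into a blob URL the img tag can display.
                    document.getElementById("image").src = URL.createObjectURL(file);
                }
            });
        }
    </script>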
It's good to use frameworks to just help you make the development easier. So today, you might use jQuery, you might use Modernizr, Prototype, UI frameworks — there are a lot of frameworks already. When we look at Windows 8 and when we look at the Metro apps, there are a number of user experience UI elements or just patterns that are consistent across all of the apps. For example, now I'm taking four apps and you can see the distance between the header and the content is, it's kind of like there are 120 pixels on the left and I think 92 pixels from the top. So it's kind of like well-defined best practices. There are a number of controls that are also well defined. So what we decided to do was to create a new framework and we call it WinJS, which stands for Windows Library for JavaScript. So this is a JavaScript framework that comes as part of your Visual Studio template and it serves two purposes. The first, it will make your development much easier, and I will show you why in a second. And the second, it will make your application more beautiful. So if you are a developer like me that sucks at designing, eventually you just say, hey, I want a nice table, and the framework is going to give you automatically the nice table with all of the CSS rules and all of that. And the framework also comes with a number of controls. For example, in this slide you can see things like ListView, GridView, or scroll bars, rating. Like all of these elements are just using standard HTML controls that are styled or enhanced with this JavaScript framework. So let me show you in a second what the framework looks like. But I also want to say that you don't have to use WinJS. So if you want to use jQuery, you can absolutely use jQuery. If you want to use jQuery and WinJS, that's fine too. So we don't want to force you to use our framework. We made our framework to simplify your life. There are a lot of things that are just easier, but eventually you can make the decision of using your own frameworks or other existing frameworks in the market. So let me show you, let's create a new project. So same process, File, New Project, Windows Metro style JavaScript template. This time I'm going to use the Grid App template. You can see that there are different templates here, like Grid App, Split App, Fixed Layout, and Navigation. I will choose the Grid App. And this is a bit more complex than what we saw earlier. And so let's run it first on my machine. Let's see what it looks like. So this is like my template before I do anything. So there are a number of tiles. If I click a tile, it goes to the detail page of that article. Let's go back. If I click the group title, it goes to another page that presents the group items. And then if I click on the item, I go back to the detail view. Other things that are already implemented are the side by side. I showed earlier how you can snap an application to the side of the screen and have multiple applications run side by side. And so the template already takes care of that. And so when you snap the application to the right, it already reflows and redesigns, gives you a different layout, which is just responsive design to fit those specific dimensions. Okay. So let's take a look at, you have this in mind now. This is the basic Grid template. Let's have a look at how it works. So, opening the default page. In the body element, I have only one div, right? We can get rid of this comment. I have only one div, content host. And the content host defines a data-win-control. So this is a standard data property.
And we call the control Application.PageControlNavigator. And then we also give some options. And in particular, we give the option of groupedItems, which is another HTML page. So this is a way to use WinJS. So this time we're using the WinJS control called PageControlNavigator that allows you to organize the navigation inside your app. Where is this thing defined? Where is this PageControlNavigator coming from? You remember that I showed you this very quickly earlier, but I didn't get into the detail. When you create a new project, we automatically add references to WinJS — blah, blah — base.js and ui.js. This is WinJS. This is the WinJS framework. Right. So this is just a framework inside the page. The reference, like you see this path, it's because it's not part of your package. There is one that is shared on the machine, and all of the applications share the same framework. So how do this base.js and ui.js work? The beauty of this framework is that it's open. So if you go to references here at the top of your solution explorer, and you expand the Windows Library for JavaScript, you expand the JS folder, you can see here exactly the same framework we are talking about. So let's open base.js. And this, 11,000 lines of code. This is the base framework that we built for you. So eventually we try to make this easier for you because we believe that sooner or later you will need to use some of these functionalities. We just wanted to write them for you so that you don't need to write 11,000 lines of code every time you write a new app. Happy. The other thing is that we made it clear, right? It's not obfuscated, it's not crunched. It's very clear what we did. And we even left the comments inside the code so you can understand how our developers thought and organized this framework. And eventually sometimes, really, these are comments from developers in the Visual Studio and in the Windows team. Sometimes I think you will still see some comments saying, hey, this line of code is buggy. Please fix it. So, we need to improve. We just want to be honest and say where we are at. And if you have feedback or if you want to improve it or change it, it's all here so you know what's going on. And you have a base that controls the base functionality of the application. It gives you the promise pattern that I showed you earlier. And then you have a UI file that includes controls. So ui.js includes all of the controls. And we have a number of functionalities. I will not get into the detail of this, but it's a very powerful framework. Now, back to my page. I showed you all of this navigation with the tiles and you click the tile and you go to another page. Then we open the default HTML and there is only one div. Where is the rest of the content? Is it a joke? Is it fake? No. Actually, the div loads this page. So let's go and look at the fragment pages, grouped items, group items. So this is the very first view. Let me run it again. The view that you see here, these tiles, group tiles, etc., is defined by these few lines of HTML. And there are mainly two elements. The first one is this: a div, the grouped items list. data-win-control: it's WinJS. So we're using our framework, WinJS.UI.ListView. So we're basically saying, hey, framework, take my div element and promote it to a ListView. And then the other thing that we're doing on top of this page is define a template for that list. And so this time, we have a template and we say, this div, promote it to a WinJS.Binding.Template. And in the template there is a title.
So this is the title of the tile. And then there is a link. So actually, this is the title of each group. The template of each item is actually this one with an image. And then two lines of DOM. One is the title and one is the subtitle. And so very easily, we're doing the binding using declarative code in HTML. On the JavaScript side, we have the data structure. And the framework connects very easily the data with your HTML and makes it simple to develop and to use and to maintain over time. If you're a designer and you want to visualize this in real time, you can do it using Expression Blend. And so I showed this tool before in the previous session. Now it will make even more sense. And let's start by saying this, right? This is an HTML page. However, what happens if you run this code, this HTML code in, I don't know, Dreamweaver, for example, or in some older version of Visual Studio? What would you see when you run only this HTML code? What would you see? Nothing? Nothing. I mean, it's a div. Look again, it's a div with a header, a button, an H1, a span — like, this is nothing, right? You need JavaScript in order to populate it with real data. And so if you open it in some Dreamweaver, you're just going to see a blank spot because it's waiting for JavaScript to be called when you run the application at runtime. The idea behind Blend: this is a tool that allows you to visualize in real time what the application will look like at runtime. And so all of the content that you see here is actually not just coming from the DOM, but it's coming from the JavaScript that does the data binding for you and populates elements of the DOM. And so, for example, if I click one of these elements, you can see here on the left, you see that there is a live DOM. This is a live view of the DOM. So it starts from the HTML page, it goes to the body, and then you see all of the controls, right? And you see that there is an eye, and the eye is basically the basic control to toggle on and off the visibility of that specific component. And then there is also this flash. And the flash means, hey, this element is created at runtime. This is created by JavaScript, which basically means that this is what you would see using Dreamweaver, using some older version of tools in general, right? You try to run HTML, but the tool doesn't know yet what the HTML will look like because it's not running JavaScript. With Blend, you can run the JavaScript of the application in real time. It doesn't matter what the JavaScript is, you can just run it, and it gives you the idea of what the application will look like when you run it. And on top of that, these elements are live, so now you're editing a live version of the DOM, and you can start modifying CSS properties of this live version and change dynamically the CSS of your project. And you can see that as I modify one, all of the others also get updated because, obviously, we're applying the same CSS property to multiple elements. And we're editing in real time a complex application like this. Make sense? Do you like it? Yeah. Sure. Is Blend part of the Express package now? The question is, is Blend part of the Express package? When you install Visual Studio Express, Blend gets installed with it. Obviously, today in this session, I'm talking about HTML and JavaScript; you got something similar using XAML or other languages before. The difference is that those are compiled languages. This time, we're talking about JavaScript. It's not a compiled language.
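As an aside to the Blend discussion, here is a hedged, self-contained sketch — not the Grid template's actual markup — of the ListView-plus-template binding just described. The ids and the sample data are made up, while the attributes and WinJS names (data-win-control, data-win-bind, WinJS.Binding.Template, WinJS.UI.ListView, WinJS.Binding.List) are the ones the talk refers to:

    <div id="itemTemplate" data-win-control="WinJS.Binding.Template">
        <div class="item">
            <img data-win-bind="src: picture" />
            <h4 data-win-bind="textContent: title"></h4>
            <h6 data-win-bind="textContent: subtitle"></h6>
        </div>
    </div>
    <div id="listView"
         data-win-control="WinJS.UI.ListView"
         data-win-options="{ itemTemplate: select('#itemTemplate') }">
    </div>
    <script>
        // The data lives on the JavaScript side; the declarative markup above binds to it.
        var items = new WinJS.Binding.List([
            { title: "Item 1", subtitle: "First",  picture: "/images/1.png" },
            { title: "Item 2", subtitle: "Second", picture: "/images/2.png" }
        ]);
        WinJS.UI.processAll().then(function () {
            document.getElementById("listView").winControl.itemDataSource = items.dataSource;
        });
    </script>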
And so in real time, being able to have a live view of the DOM and run the JavaScript behind it and edit what the DOM will look like eventually in some step of the application is powerful. And another thing you can do with the tool is that you can go into live editing. So now I'm running the application. Let's say you're probably wondering, okay, how do I change the detail page? Because now with the tool, I can only see the very first screen. So you can go into this live editing. Now I can go into the detail and then I click again, turn off interactive mode. And the tool takes a snapshot of that page and now allows you to go back to that page and change specific properties of that page. And again, all of this is just happening in real time. This is all code that is generated from JavaScript. It's not something that we already wrote in the HTML page. The tool also allows you to test these on, for example, the snap view. So you can snap it on the right or you can see how the application will look in portrait mode. You can see how these applications render at different screen resolutions, which is extremely important, right? As soon as you get your application on different types of hardware with high resolution screens, et cetera, you will need to care about 2560 or different types of resolutions. And of course, like on the right, there is a very complex and robust system to manage your CSS properties, to filter only for properties that are used, to go and edit transitions, animations. You can do all of this using just Expression Blend. All right. Now, I talked about the fact that WinJS is a framework that comes with your template. You can use it and it will give you all of these controls, binding, et cetera, for free. It will also give you accessibility. So all of the controls that you saw today are already accessible. They already implement all of the ARIA standards, et cetera. If you want to use your framework, go for it. So if you want to use jQuery, you can absolutely use it. Some of the applications that we have in the store today are actually using both jQuery and WinJS together. Just be careful with the way we handle applications, Metro applications. I will not get into the detail of the platform, but the very high level concept is that each application runs in its own process, which is a sandbox, and that sandbox has two different contexts. One is the local and one is the web. The local context runs everything that is inside your package. It's coming from Visual Studio. So that's safe, that's trusted. It's okay for that code to go back, to go down to the operating system and query APIs in the operating system. The web context is the unsafe area. So if your application, for example, downloads some content from the Internet, downloads a script from the web, that script will be maintained inside this red box in my slide, the web context. And that script will not have access to the WinRT APIs. It's not safe, right? So we cannot trust it. You have APIs to trust it, but by default, we isolate it inside this web context area. And so just be mindful of that. Otherwise, you can go and mix different frameworks together. We talked about some of the technical details. Now, how many of you are interested also in monetizing applications, to make some money? I'm building an application myself in my spare time, which is limited, but on the plane sometimes I have some. We have a store. We have a Windows Store. And the Windows Store is the channel to distribute your applications.
I repeat, the Windows Store is the channel to distribute your applications. So there is one channel and is the Windows Store. Clear? There is one store, which means that imagine million of devices, either tablets, computer, laptop, desktop, television, et cetera, and one store. So the beauty of the HTML application is that you create one package and that package will run on all of these applications, sorry, all of these platform and devices. In terms of monetization, in terms of business opportunity, we try to be very flexible with the store offers as well. And so you can have free app, you can have paid apps, or you can have trials, limited or functional trials. When you submit the application at the beginning, there is a split revenue of 70, 30%. So initially, for the few first days, we are going, you maintain 70% of the revenue of your app and Microsoft keep 30%. This is in line with other platform. The difference is that as soon as your application is successful, so as soon as you basically meet the $25K margin, and that's not a lot, believe me, it's not a lot when you think about the number of devices that will run Windows 8. We change the percentage. And so you keep 80 and we just keep 20. And that's very, very competitive compared to other platforms. The other thing is about commerce. So if your application wants to do commerce, you want to have transactions inside the application, you have two possibilities. The first, you use the Microsoft commerce APIs. Those are great APIs that give you all of the analytics, like all of the performance security, blah, blah, blah. They are part of the framework, they're part of the Windows runtime. So you have APIs in the Windows runtime that allow you to have transactions inside the application with the care of managing taxes and all of the distribution credit cards and all of that, with the care of all of that for you. And if you do this, there is again the same split revenue model of the apps. The second option is you use your own model and you keep 100% of the revenue. So this is big. This is not something that other platform are allowing today. But let's say that you're selling, I don't know, items and you already have your own back end to collect the credit card and to process the payments. Great. Go for it. Like we don't even need to approve it. Like if you have it, it's fine. Go for it and you keep 100% of your revenue. In terms of ads, if you want to include advertisement inside your application, is again the same story. Microsoft is providing a first-class advertising platform on Windows 8. We have great content, great publishers, great distribution. We have great analytics and all of these comes with an SDK that is already available. It's in beta as well. If you want to use other advertisement framework from other companies, feel free to do it. The only requirement here is that whatever libraries or component to you import inside the metro application is compliant from a technical perspective with the technical certification for the store. Right? So when you submit the application to the store, we run a number of technical tests. Like we check if it's secure, we check if it's fast enough, we check a number of things. And by the way, all of these tests are public. So you can just look, see and run them locally on your machine to understand better what's going on. And so as long as it passes the technical certification, you're good. 
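A hedged sketch of the first commerce option described above — the in-box Store APIs in the Windows Runtime. The names come from the Windows.ApplicationModel.Store namespace (CurrentAppSimulator is the test double you would swap for CurrentApp before submitting), and the two helper functions are hypothetical placeholders for the app's own UI:

    // Swap CurrentAppSimulator for CurrentApp in the version you submit to the Store.
    var currentApp = Windows.ApplicationModel.Store.CurrentAppSimulator;

    function checkLicense() {
        var license = currentApp.licenseInformation;
        if (license.isActive && !license.isTrial) {
            enableFullFeatures();   // hypothetical helper: unlock the paid features
        } else {
            showTrialBanner();      // hypothetical helper: keep the trial limits, offer to buy
        }
    }

    function buyFullVersion() {
        // Starts the purchase flow that the operating system handles for you.
        currentApp.requestAppPurchaseAsync(false).then(function () {
            checkLicense();
        });
    }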
And lastly, if you're using our APIs, you are going to get a very robust analytics system which is going to give you information like downloads, demographics across markets, segmentations, trends; you will be able to see the performance of your application compared to other applications in the same category. So let's say that you have a game in the puzzle category: you will see the trend of your application in terms of how many downloads, how many transactions, and you will also get an average of other puzzle games in the same category at the same time, so that you can compare and you can understand how it's ranking across the store in general. Now, do we have time left? Seven minutes? So we have seven minutes left. There is another demo that I never showed before. It's nice. So we can show you, I can show you this quick demo. Or we can do questions. Which one do you prefer? Demo? Okay, questions? I'll ask questions by myself. Georgia, tell me about... So there is this other game that I started building some time ago and I still didn't finish it. So I apologize if it's rough and everything. Actually, I should have a link here. This is a simple, very, very simple HTML5 game. And you notice it's taking some time. The reason it's taking some time is that it's actually connecting to TFS, to our Team Foundation Server. And so I'm already, you see the little locks next to the code. Just like you used Visual Studio in the past, you can do it with a Metro application. You can just check the code in to Team Foundation Server, TFS. And now you're doing source control over your code. In particular, I'm using the online version of TFS, which is super cool. Like I didn't have to set up any backend environment. I just went online to the TFS page. I created my account and now I'm doing source control of my project with a cloud version of TFS. Definitely recommend it. It's working very, very well. And so let's run my game. So let me show you how it works. I have a spaceship. I can control it with the mouse or I can control it with the finger. And the idea is to get as many points as I can, trying to avoid these, this one, right? So if I go here, now you're going to see the explosion. So this is a very, very simple game completely built using HTML5. Do you like it? It's very funny. And this is just a bit of code. Actually, all of the code is probably, I don't know, let's see. This is it, like 1,000 lines of code. So it's a very short, very small application. The reason I wanted to show you this is that, let's try to see this application using Blend again. Let's have a look at the different components of this app. And I was having some fun, and Blend as well is connected to TFS, right? So it's checking the source control from the cloud now. You saw different elements on the page. You saw the two bars on the right and then you saw the line at the center. Notice what is going to happen now. As soon as I run Blend, the game actually starts. So this is what I mean by editing a live page, live code. Because all of this is a canvas element with JavaScript running behind and drawing dynamically. And you can just edit in real time the different parts. And I wanted to show you that the parts here on the right, for example, you might think it's an image. But actually, let's take a look. You see where I define it? Background image, right? UI.SVG. So it's not an image. I'm using some SVG content to give that effect. So let's have a look at... Let me show you that file, that SVG file, how it looks. Images, left UI.
There it is. You see? This is not an image. This is SVG. So if I go and view source, you will see that this is just a standard SVG document. So it scales very easily and very nicely. So that's one thing that was fun to use. The other thing, you notice that there is a 3D animation. Well, I'm doing that using a transform, a CSS 3D transform. So I'm rotating around the Y axis. So eventually, I can go here and change it to 65. In real time, we can re-update all of the values. And if I restart the application, you will see that actually all of the data and all of the settings there, they change in real time. So you can play dynamically and you can say, no, this is too much, actually. Let's try 35. And it goes back to a more proper view. The other thing is that right now I'm working on a tablet. So I'm using the Microsoft pointer, the MS Pointer API that I showed in the previous session. But what if I'm running this game on a tablet, right, where I have more sensors? And so what I did inside the premium.js file was create my own little wrapper. It's called rotation helper. And look at the code, right? There is a promise. So I'm creating my own promise from WinJS. Then I'm using the Windows runtime to connect to a sensor, and in particular to an inclinometer. So I'm saying, hey, give me the default inclinometer if it's available on this machine. If it's not available, that's fine. I'm just going to fall back to mouse and touch. If it's available, I'm going to start listening for reading changed. And then, the other thing I'm using below: I also have another helper called shaker helper that is taking an accelerometer this time. And the accelerometer gives me back quaternion data, and I'm taking in particular the Z axis, and I'm checking the difference between different Z values. So all of that to say that now the same game, you can control it just using, oops, just using an accelerometer, right? So the same game deployed on this machine, and this machine has an inclinometer and an accelerometer, and I can just control it using here. And if you want to jump, you just do this, and it jumps. Let me show you the jump. I didn't show you the jump. The jump is the thing I'm most happy about in this game. Now, like I said, if you want to jump an obstacle, voila. How about that? And so if you're on the device, you do this, and you will just, you're going to get the accelerometer Z axis, and you're just going to jump. And so this was a very quick and fun project to work on, to test different bits and different parts together. All right. We're at the end of the session, but I hope I gave you the idea of the initial message, right? If you know HTML and JavaScript, you're already a Windows 8 developer. The Windows runtime gives you access to the low-level APIs, and the Windows Store gives you a way to monetize your application on millions of devices. You can find all of the tools and hundreds of samples at dev.windows.com. So if you go there, you can download the Windows Release Preview, which is the build that I have of Windows on my machine. You can download Visual Studio and Expression Blend. And we also have really hundreds of samples, SDK samples, that drive you through all of the capabilities if you need specific scenarios. My Twitter handle is there if you have any questions or any comments. Thank you for coming here. I hope you enjoyed it, and enjoy the rest of the conference. Thank you.
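For readers who want to see roughly what the sensor wrapper described above looks like, here is a hedged reconstruction, not the talk's actual source: the game-side callbacks (moveShip, jump) and the shake threshold are invented, and where the talk mentions quaternion data this sketch uses the plain accelerometer reading instead. The sensor classes are the WinRT ones in Windows.Devices.Sensors.

```javascript
var sensors = Windows.Devices.Sensors;

// "Rotation helper": resolve true if an inclinometer drives the ship,
// false if the game should fall back to mouse and touch.
function startRotationHelper(moveShip) {
    return new WinJS.Promise(function (complete) {
        var inclinometer = sensors.Inclinometer.getDefault();
        if (!inclinometer) {
            complete(false);                 // no sensor on this machine
            return;
        }
        inclinometer.addEventListener("readingchanged", function (e) {
            // Roll maps naturally onto left/right steering of the spaceship.
            moveShip(e.reading.rollDegrees);
        });
        complete(true);
    });
}

// "Shaker helper": a sudden change along Z is treated as a flick and triggers the jump.
function startShakeHelper(jump) {
    var accelerometer = sensors.Accelerometer.getDefault();
    if (!accelerometer) { return; }
    var lastZ = 0;
    accelerometer.addEventListener("readingchanged", function (e) {
        var z = e.reading.accelerationZ;
        if (Math.abs(z - lastZ) > 0.8) { jump(); }   // 0.8 g is an arbitrary threshold
        lastZ = z;
    });
}
```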
With Windows 8 you can use HTML and JavaScript to build native apps that have full access to the OS. In this session you will see all the latest news of the Windows 8 platform; you will learn to develop and design a Metro app, and take it to the next level with WinRT and WinJS. You will discover some of the best practices to reach hundreds of millions of customers through the Windows Store. Expect to see a lot of hands-on code, including some of the upcoming standards from the HTML5 and CSS3 world.
10.5446/51157 (DOI)
Hello, hello. Oh, I think I've got sound now. Good morning. Okay, fine. So, it's a piece of green candy. My laser. Can you see the green sparkles in there? This light doesn't look smooth. It looks grainy. Those of you who have young eyes could try to defocus those sparkles, change the focal plane of your eyes. You will note that you cannot defocus the sparkles. Those of you with glasses on, remove your glasses. You will find that the sparkles are still in focus. The reason for that is that these sparkles are not here in the candy. They are on the back of your retina. That's an interference pattern. The light coming from the laser is coherent. All one color, all in the same phase. So the wave fronts interfere with each other, creating this nice sparkly effect. If you move your head back and forth, you will note that the sparkles move with your head. This is because your eyes are moving through the interference pattern. Isn't that cool? Our discussion today is about professionalism in software development. This is a talk I've given many times before, although there are bits of it that change from time to time. The purpose of the talk is to impress upon all of us the need for professional behavior. Our industry doesn't really know what it means to behave professionally. The definition of a software developer is someone who sits in a room, you throw meat in, and code comes out. No one really understands how or why that happens. We get very confused about deadlines and dates and estimates and all of the things that we're supposed to be doing, and we do them rather badly. Now that's not unusual. Our industry is young. How many of you have been programmers for more than ten years? Look at that. We've got quite a crowd here. More than twenty. Quite a few of you, I'd say about five percent. More than thirty. Okay, we've got maybe half a dozen. The next question is a much harder one. More than forty leaves me. In most professions, you would find that there would be far more hands raised in the older crowd. Oh, there'd be plenty of hands raised in the younger crowd as well, but if you went to a convention of plumbers, the number of people who had been plumbers for thirty years would still be rather large. If you went to a convention of hardware engineers, electronics engineers, the number of people who would be over thirty years would still be fairly large. But in our industry, we see this cut-off at about twenty years. Now that's growing, and if we came back in another ten years, we'd find that the number of people who were thirty-year veterans at programming would have gotten larger. There is a force in our industry that discourages older programmers. The idea is that you should have moved into management after ten years or so, or even after five years. If you're not a manager after five or six or seven years, something's wrong with you. And that's probably true. At least there's certainly something wrong with me. I have tried to be a manager a number of times. I have managed teams, I've managed groups, I've managed divisions, I've managed whole companies, but I'm still a programmer. It's what I want to do. It's what I like to do. It's what I'm happiest doing when I'm writing code, or if I'm on stage yelling at a bunch of programmers, that makes me happy too. And I happen to enjoy doing my videos, which is just yelling at a bunch of programmers through an indirect means. 
The idea that someone would remain a programmer for decades is rather foreign to our industry, but it's something that we're going to have to get used to, because there are a lot of us who would just rather stay programmers for a very long time and enjoy the process. And those folks will gather a great deal of experience and will help us define what professionalism is. The slide says that our craft is defined. Ten years ago, I would not have been able to make that statement. Ten years ago, we did not know exactly what it was that a programmer did, other than write code. But we couldn't define on a day-by-day basis what the process was, what the set of disciplines was, what the professional behavior was. Now we can. And that's because of something that happened about ten years ago. How many of you are now doing agile development? Now, this is a remarkable number. That's probably over half of you, maybe over two-thirds of you. Most of you are lying without knowing it. But that doesn't matter. The notion that most of you believe you are doing agile development is fascinating. Agile development is a set of practices. It began, oh gosh, it began a very long time ago, but in earnest it began around 1998 to 1999, when Kent Beck published his rather controversial book, Extreme Programming Explained. Did anybody read that book? It caused a firestorm. The industry at the time was immersed in waterfall. Who's still doing waterfall? Look at that, right? Hardly anybody, a couple of unfortunate people raised their hands without realizing that they're not supposed to do that. Hardly anybody does waterfall anymore, but ten years ago, twelve years ago, that was what everybody was doing. That's what everybody thought they were supposed to do. We have managed to turn the industry around in ten years, 180 degrees. That's a fantastic result. And in the midst of all of that, we defined a group of disciplines and behaviors, which I'm going to outline here, that define to a large degree what it means to be a professional software developer. But professionalism goes beyond simply following disciplines. There's an entire ethical topic involved here as well. What does it mean to be an ethical programmer? How do you know if you are being right? How do you know if you are defending your profession? That's an interesting topic as well, which we're going to discuss here along with the disciplines. I wear this green band. I've had this band or a band like it on my hand now for six or seven years. I don't know exactly when it was. The woman who put it on me is named Elizabeth Hendrickson. She just walked up to me and handed me a green band and said, I hear wear this, and at the time the band said Test Obsessed, which is kind of her trademark. And it was a marketing gimmick for her company. I put it on and I found that it took on a meaning, an ethical meaning. I couldn't take it off, not because it was glued on, but because there was an ethical impact to having this band on my hand. The band said, I will follow the disciplines that I know. I will behave professionally. I will not do those things that I know are wrong. Even though I'm put under pressure by managers and other people who have influence over me, I will not do those things I know I should not do. A remarkable effect that it had on me. If you want a band like this, you can get one at that URL. I don't take money for them. I do ask that you make a donation to a charity. I figure that if you're going to put something like this on, you should pay for it. 
But don't pay me. Give the money to some worthy cause and then I will send you a band. One of the obvious disciplines that we learned, starting around 1999 and why we didn't learn it before that I don't know, is that things are best done in short iterations. Now everybody here is doing agile, almost everybody. So what is your iteration size? Who is doing four weeks? So a few of you are still following the old-fashioned scrum thing that said four weeks. Almost nobody does this anymore because it was found to be too long. How many of you are doing three weeks? A few more. Two. Look at that. One. Okay, I think we found the norm of the curve. The norm of the curve seems to be around two. But if we had asked that question five years ago, it would have been closer to three. If I had asked that question eight years ago, it would have been closer to four. The mean of that number is moving to the small side. Come back in five years, there will be far more of you doing one-week iterations. I know that seems impossible, but iteration sizes are shrinking every year. People have found that four weeks is too long. Too much can go wrong. You can get way off the rails in four weeks, and then it takes too long to recover. Three weeks is too long. Two weeks, okay. All right, we can do two weeks. One week. Not a lot can go wrong in one week. You can't make a big mess in one week. And if you close the feedback loop once a week, you can stay pretty close to the rails, not Ruby rails. Rails. You can stay pretty close to your goal. You can do quick course corrections. Now, of course, everybody says, yeah, but holy cow, if we're doing one-week iterations, all we'll ever be doing is really planning and retrospecting and demoing, and we'll never write any code. No. All those meetings, the planning meetings, retrospective meetings, and the demos, they all get shorter too. You do a week iteration. The planning meeting for a week doesn't take more than an hour. The demo, well, you're not going to get a lot done in a week, so the demo's not going to take more than an hour. Retrospective, what can you talk about after a week? Yeah. So, everything shortens up, and you get very nice cycle time. So, I'd like you to think about that. Short iterations are a moral issue. The shorter we make them, the more control we have over our process. The more accuracy we can get in our process. The longer we make them, the more can go wrong, but more importantly, the longer you can hide. If your iteration is a month long, you can hide for three and a half weeks. Three and a half weeks can go by, and no one will know that you've been surfing the web, doing some day trading that you're just behind. No one will know that you're depending on someone else to finish the job for you. Has anybody ever worked at a company where you knew there were programmers who simply weren't doing anything, or worse, doing stuff and you wished they wouldn't? Stop writing that code, because you're going to have to come in on Saturday and fix the mess they've made. Has anybody been the guy who comes in on Saturday and fixes the mess that everybody else made during the week? Okay, nobody wants to raise their hands on that either. I consulted for a company like that. They had about 40 guys working on code. Five of them knew what they were doing. The other 35 made horrible messes during the week, and then those five would come in on Saturday and Sunday, and they'd make everything work. 
One day, the manager, this is several years into this project, the manager finally said, this project's going too slowly. We need everybody working on Saturdays. And then nothing got done. Short iterations go beyond just the iterations that we see in Agile. We do everything in short iterations. We write our code in tiny little iterations. We will write our code in periods that last perhaps 30 seconds if you're following a test-driven development cycle, something I'll be talking about a little bit later. We do everything. We can in tiny little iterations because, frankly, we're not good at doing anything. So the more we check it, the more we close that feedback loop, the better we are at keeping on course. Has anybody ever had this lovely dream that you would be given a specification? You would take that specification and study it and internalize it, and then in a burst of insight and great creativity, the code would pour out of your fingers, and it would be perfect. You would hit Control S for one and only one time to save that source file. You wouldn't even bother to compile it because you knew it would work. You'd check it into source code control, and everyone else would stare in awe at your mastery. Huh! That person does not exist. No such person has ever been able to do that. The best way to get anything done is to iterate in tiny little cycles. By the way, we're taught this from about first grade on. Who remembers the teacher who first told them that in order to write a story, you need to first write a rough draft? This might have been in second grade or possibly third grade, and the teacher says this, and I remember the teacher saying that to me, and my thought in my head was, what? I'm not doing that. What a waste. I'm just going to write the damn story. And I did. I just wrote the damn story, and in second grade, no one could tell. But you get into high school, you get into the upper grades, and all of a sudden the teacher starts kicking it back at you, saying, what's this crap? They don't use those terms. They actually just use numbers. You know what the numbers mean. There's some numbers that mean really good, and there's some numbers that mean crap. And you start getting these bad numbers, and you think, oh, what am I doing wrong? And maybe you go to the teacher and ask, what are you doing wrong? And the teacher looks at you and says, well, did you do a rough draft? Well, no. Well, that's what you did wrong. And so then you start doing the rough draft, and then you do another draft, and you read it over. That's a little thing we learned after a while. You write something, it's a really good idea to read it. So you read it over, and then as you read it, you realize, oh, this is awful, and you make some changes to it, and you do another draft. And as you get into college, that gets worse and worse. So you're doing draft after draft after draft. You're handing it to other people. Would you read this for me, please? Because I don't know if I'm doing any good. You get this feedback, and changes and changes, and you finally hand it in, and people are going, oh, man, you did a very good job on that. And you realize that it requires this kind of constant iteration and feedback to do anything creative. Then you get your job as a programmer, and the rules completely change. You're expected to write it correctly the first time. 
You must now abandon all of that stuff that you learned about creative writing and iteration and rough drafts, and then you just have to write that code the first time. And we are taught, not taught, we are made to feel that any change to existing code is rework, and it's evil, and it's wasteful. It's not. It's part of the natural process. You cannot write the code correctly the first time. You have to iterate on it. Does anybody write their code and then read it? It's very depressing. You write the code, you actually get it working, then you read it and go, oh, my God, I wrote that. Some of you will have that feeling. You'll go away to lunch. You'll come back and there's this code on the screen and you look at the code on the screen and say, I didn't write that. That's not my code. And then eventually you learn, OK. Another practice, never be blocked. This is an ethical issue. We sometimes get caught in this notion that this is my job, that's their job. And I'm waiting for something they're supposed to do. That means I can't do anything because it's their job and I'm not going to do it. It's their job. It doesn't work that way. You can try to make it work that way, but that is a huge pathology, a disease of dysfunctional organizations. If you're waiting for something because some poor guy couldn't get it to you, do it. Go do it. Find some way not to be blocked. Or if you don't want to do it, find some way to isolate yourself from it so that you don't have to wait. Do not be blocked. The worst thing you can do as a professional is nothing. What you can do is to sit and go, well, I can't work right now because Bob over there hasn't finished what he's supposed to do. Or that group over there, they haven't delivered their part of the bargain. They haven't delivered their module to us. So we can't work. I once was consulting for a team. The team had a set of QA people, three QA people who were supposed to be testing the system. And they were just sitting there. I said, you're not testing the system. They said, no, our QA system is not working at the moment and the computer is not working, the server is not working. And we've called the people who are supposed to fix it, but they're busy so we can't work. And I looked around the room and there were all these laptops sitting on desks because all the developers were at lunch. And I said, well, why don't you use one of those laptops? Oh, no. We can't use those laptops because they're not bonafide production environments. We can only do our QA tests on real production environments. I said, programmers are testing their stuff all the time on those machines. Why don't you open up one of those machines and run some tests? If you find a bug, you can write it down. Oh, no. We have to wait. I was amazed at that attitude. That is an attitude that accepts blockage as though it were normal. Being blocked is immoral. It is wrong. It is unprofessional. It is irresponsible. It is something that you as a creative engineer ought to find some solution to. When you open up the top level directory of your application, what do you see? Do you see directories that are named models, views, and controllers? Do you see top level directories that tell you that you are in a Rails framework or a Spring framework or a.NET framework? If you were to look at that top level directory, what would it tell you about the application that you were writing? Would it tell you what that application did or would it tell you what technology that application was using? 
I'm going to be doing a talk a little bit later tomorrow, I believe, called clean architecture. The idea behind our architectures is that a good architecture tells you what it's for, not what it's made out of. Many of us, if asked, what is the architecture of your system, we would say something like, oh, yeah, it's Spring, Hibernate, Tomcat, with a lot of dependency injection. Or we might say, oh, well, it's based on .NET with an ADO back end, and we're doing a lot of LINQ, and we got this view model thing that we're doing. Yeah, but that's not an architecture. That's a set of tools. The frameworks you're using, the technology you're using are a bunch of tools. They're not an architecture. An architecture tells you what the application is for. If I were to show you the blueprints of this building, you would look at those blueprints and you'd say, well, that's some kind of sports arena or convention center or something. You could see where all the bleachers were. You could see that big open area out there. You could see the area around the outside. And you would go, ah, that's some kind of arena. You would not say, hmm, looks like they used hammers and saws and bricks. You would know exactly what that building was for. The architecture of your system should scream at you. I am an accounting application. I am a trading application. I am a shipping application. The architecture should yell at you what it's for. First time you open up those directories, you should look at that and go, ah, yeah, this is some kind of accounting application. Look, there's transactions. There's ledgers. There's credits. There's debits. This is an accounting application. You shouldn't have to hunt for that. What should you have to hunt for? You should have to hunt for the fact that it's delivered over the web. The initial architecture should not say, ah, I am a web application. That's a detail. The fact that it's a web application is some nasty little detail. I don't want to know that it's a web application. You can tell me about that later. I'll hunt for how the damn system is delivered. I don't want to know web, web, web, web. I want to know that it's an accounting system. I don't want to know what database you're using. I don't want to know that there is a database. The database is a detail. Get it out of my way. I want to know what this system does. Do you know what the goal of a good architecture is? Other than to announce to the world the purpose of the application, the goal of a good architecture, the goal of a good architect is to delay as many decisions as possible. Defer decisions about frameworks and databases, web servers, and all those frameworks you're going to use. Defer those decisions. Get them out of the way. Make them late. The later you make a decision, the more information you have about how good that decision is. One of the big mistakes we make as software developers is we make a whole raft of decisions far too early. We say, oh gosh, we've got to have a database. What database? Well, this is .NET. We don't have a lot of choice, do we? Well, we'll use one. And we've got to have a framework. And we've got to have a language. And we've got to have all these things. And we make all those decisions, and then we start to write the code. Now you've got to pick your language early. But other than that, you want to be able to write your business rules early and defer everything else for as long as possible. I'll have more to say about that later in my clean architecture talk. Anybody here got a mess?
Bunch of code that you think is crap? Look around the room here. This is a large fraction of the room. Why is it a large fraction of the room? Because we've been writing a lot of crap. Why? Why have we been writing so much crap? Well, there's a reason. And it's not just down to carelessness. And it's not just down to rushing. Oh, rushing is certainly part of it. We get into the mode where we've got a deadline, and we have to get this thing working, and we just forget all about the rules, and we spit this code out on the screen, and we think it works, and we hand it to QA, and we run home. And we've made a mess. But then later, something more interesting happens. Later we come back, and we open up a module on our screen, and we look at that module, and we say, ah, that's awful. I should clean it. And this, for a moment, for a moment in your mind, you think, I'm going to clean this code. But then something else takes over. And the next thought in your mind is, I'm not touching it. Because if I touch it, I'll break it. And if I break it, it will become mine. Walk away from the code. And so we allow the code to sit there, dirty, rotten. Our opportunity to clean it is gone. And we allow the entire system to rot. I will talk a little bit later today in this talk about a means to get around that problem. However, how do you solve it? Has anybody here ever told their boss, when the boss has come to them and said, how come it takes so long for you guys to do anything? And you say, well, the code is a mess. The boss says, well, what are we going to do about it? And you see your opportunity. We could redesign it. Let's just rewrite it. If we rewrite it, we can do it clean and neat. We can throw away all this old crap. Let's just rewrite the whole thing. This is the great seduction. Everybody wants to rewrite the application. Everybody wants to walk away from the mess they made, leave it in the room where it's messy, close the door on that room, and never open up the room again. That's what we like to do. This is incredibly irresponsible. If you did that in your house, eventually you'd be out of rooms. All right? But don't go in that room. There's another reason why we don't do this. This is horrifically expensive. And what we have found is that these big redesigns just create another messy room. They don't actually clean anything up. You spend an immense amount of time and effort to get to the exact same place you were a year ago. What we need to do is face the fact that we've made messes and clean them. But you cannot clean them all in one great shot. You cannot clean things up with one great cleaning project. Do not, please do not dedicate a month of your time to nothing but refactoring. All right? Instead, you just every day look at some code and you clean it up a little bit. You do some random act of kindness to the code. If you follow a basic rule, always check the code in a little cleaner than when you checked it out. Always do some random act of kindness. Check out a module. Do whatever you have to do in that module. Then make sure it's just a little bit cleaner than when you checked it out. Check it back in in a slightly cleaner state. If everybody did just that, the code base would just get gradually better and better and better. I'll talk a little later about means to do that. But for now, no grand redesigns. Resist the temptation to redesign everything. The project is almost certainly going to fail. Oh, not a month long. When a month long will probably succeed. 
But a year long redesign, man years of effort into a redesign, probably going to fail. And it's fundamentally a flawed technique. It's irresponsible. Clean the damn mess up. You made it. Clean it up. And keep your code clean. Write clean code. Clean the code that you've written. Emphasize in your mind that it is unethical to write code that is messy. Learn the simple rules of clean code and then follow them because in the end, the only way to get things done in a hurry is to do them right. Your grandparents taught you this. Your parents taught you this. It's an old saw. Everybody knows it. Anything worth doing is worth doing well. It is certainly true. And although we are tempted to rush and make a mess to meet some stupid deadline, we will actually meet that deadline better if we keep everything clean and do the best job we can. Has anybody here been slowed down by a big batch of messy code? That's everybody. What if the code were always clean? We could go faster. The lines would be shorter. We'd actually have fewer programmers. How many of you are practicing test-driven development? Hands up, test-driven developers. So this looks like about a third of you. Maybe a quarter of you. What's the matter with the rest of you? This is a discipline where the jury is in and was in five years ago, where the technique is well-known to be very effective, that you can produce much better code by following it, much better performance. You get a lot of benefits out of test-driven development. So why wouldn't you practice this? Now let me describe it for you, just in case you weren't aware of what this is. Test-driven development is composed of three laws. The three laws sound stupid. Anybody who hears them goes, you can't possibly mean that. So here's the first law. You are not allowed to write any production code until you have first written a unit test that fails. Now right away, that sounds dumb. I mean, what test are you supposed to write? There's no code to test. So how can you write a test if there's no code to test? There's a whole bunch of arguments that you could go on about this. It just sounds stupid to write a test first. But the second law is worse. The second law says you're not allowed to write more of a test than is sufficient to fail and not compiling is failing. Which means that you've got to start by writing a test, but you can't write more than a line or two. Because instantly you're going to call some function that doesn't exist or mention some class that doesn't exist. So you have to stop writing the test before you're even begun writing it almost. And you have to start writing production code. And then the third law kicks in. The third law says you're not allowed to write more production code than is sufficient to pass the currently failing test. Which means that you write a little line of test code and oh, that doesn't compile. And then you write a line of production code and oh, that makes it compile. And then you have to go back to the test. Add another line to the test. Oh, that doesn't pass. You write some more production code. Oh, that makes it pass. And you're going back and forth between these two streams of code every 30 seconds. If you're a programmer of some years and most of us in the room are, that just sounds stupid. I mean, why would you do that? Why would you interrupt your flow of thought by bouncing back and forth between these two streams of tests? Why would anybody do that? It would be boring. It would be tedious. It would be slow. 
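To make that back-and-forth concrete, here is a minimal, editor-added sketch of the cycle the three laws force. The example (a shopping-cart price function) and the bare console.assert checks are invented for illustration; they are not from the talk and stand in for whatever test framework a team would actually use.

```javascript
// Step 1 -- a failing test, written first (totalPrice does not exist yet,
// so this does not even run):
function test_empty_cart_costs_nothing() {
    console.assert(totalPrice([]) === 0, "empty cart should cost 0");
}

// Step 2 -- the least production code that makes it pass:
//   function totalPrice(items) { return 0; }

// Step 3 -- one more line of test, which fails against the code above:
function test_single_item_costs_its_price() {
    console.assert(totalPrice([{ price: 50 }]) === 50, "one item costs its price");
}

// Step 4 -- just enough production code to pass both tests, and the cycle repeats,
// bouncing between test and production code every minute or so:
function totalPrice(items) {
    return items.reduce(function (sum, item) { return sum + item.price; }, 0);
}

test_empty_cart_costs_nothing();
test_single_item_costs_its_price();
```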
You'd never be able to write a whole algorithm without bouncing back and forth. You'd never be able to focus on anything. It's all good points. All true. But imagine a group of people following these three laws. Pick one of them. It doesn't matter who. It doesn't matter when. Everything they were working on executed and passed all its tests a minute or so ago. And it doesn't matter who you pick and it doesn't matter when you pick them. Everything they were working on executed and passed its tests a minute or so ago. Maybe two, maybe five. But some very short time ago, everything worked. What would your life be like if you were never more than a minute or so away from seeing everything work? How much debugging do you think you would do? The answer to that is, well, there's not a lot to debug if it worked a minute ago. If it all worked a minute ago. And now it doesn't, it's something in the last two lines of code that I wrote. The most common debugging technique is to hit control Z enough times to back away and then just retype what you wrote. Because typically you just typed it wrong. Does that mean I don't use a debugger? No, I use a debugger. Every once in a while there are bugs that escape. It's just still software. It's still hard. I still use a debugger. I don't live in the debugger. How many of you are really good at the debugger? You know all the hotkeys, right? You are debugging God. You know how to do a step into and a step over. You can set 20 break points here. Wait for that one to get to it 10 times. Then you get to this one three times. Now you watch that variable. Wait for it to be a 37. And now you can debug. This is not a skill to be desired. You don't want to be good at the debugger. If you are good at the debugger, it means you spend a lot of time debugging. I don't want you spending a lot of time debugging. The debugger should not be your tool. It should be something that you use every once in a while because you know you screwed up. Not because it's, oh, I'm going to debug this now and I'm really good at debugging. That's a shame. I'm sorry you're good at debugging. I believe that if you were to practice those three laws, you could cut your debugging time by half, maybe more. And maybe you could think, well, that's a good reason to practice those three laws, or maybe not. Maybe that's not enough. So never mind that. How many of you have integrated a third party package? So you can everybody, everybody. OK, so you get this zip file from somebody and you unpack the zip file and there's a PDF in there along with all of the DLLs and stuff that you need. The PDF has got a nice little manual that describes the third party package. At the end of the PDF, there's a little appendix area with all the code examples. Where's the first place you go? You go to the code examples. Why? Because your programmers, you don't want to read the manual that some tech writer wrote. You want to go to the code. The code will tell you the truth. You'll read the code and go, oh, I see how this damn thing works. And if you're lucky, you can copy and paste the code out of the manual and put it in your application and fiddle with it to get it to work. When you are writing unit tests, you are writing the code examples for the whole system. If you're following those three laws, every part of the system will have a unit test. And that unit test will be a short little snippet of code, an example for how that part of the system works. You want to know how to call some API function. 
There is a test that calls that API function every way it can be called. You want to know how to create some object or set up some relationship. There are tests that do that every way it can be done. And those tests are short, little unambiguous snippets of code. They are written in a language you understand. They are completely unambiguous. They are so formal they execute and they cannot get out of sync with the application. They are the perfect kind of documentation. The perfect kind of low-level documentation. If you follow those three laws, you get massive, comprehensive, thorough, low-level documentation that cannot get out of sync is utterly unambiguous. It's the perfect kind of documentation. If you like documentation, Test Driven Development will give it to you for free, or at least for the cost of writing those tests. But maybe that doesn't impress you, so never mind it. How many of you have written tests after the fact? How much fun is this? It's not a lot of fun, right? You spend a lot of time writing the code and that's the fun part. And then you have to test it manually and debug it. That's kind of the fun part, too. And then you get it all working and then there's this rule that some guy wrote and says, well, you've got to write tests now. Okay, test here and all right, it says stupid rules and write a test there. That part over there is going to be hard to test. It's all kind of coupled into the code. I know it works. I'm not going to write that. I got other things to do and so you leave holes in the test suite. Writing tests after the fact is something that you do almost under protest. And under protest, you leave holes in the test suite. You don't want to test everything for crying out loud. You've already done it manually. When you write the test first, something else happens. First of all, writing the test first is fun because you're into the problem. You're specifying the problem. You've set yourself a challenge past that test, you sucker. And now you will demonstrate your prowess as a programmer by making that test pass. I got it to pass. How many of you are programmers because you got something to run once and you want that feeling again? You get that feeling all the time. You write a test. I wonder if I can make that pass and then you write the thing that makes it. Oh, I got it to pass. All right, next test. That would be hard. I got it to pass. You get this lovely charge every time. So the enthusiasm does not wane. It's not like this. Oh, crazy. I got to write a bunch of tests. There's a level of enthusiasm that stays up. And because of that, the test suite tends to be complete. Moreover, you're writing the test first and that forces you to design the code so that it can be testable. Another word for testable code is decoupled code. So you are going to decouple massively simply to get your test done. You'll have a better design. Okay. Maybe that's not enough for you. Maybe you don't want to shorten your debug times and maybe you don't want to stream a perfect documentation and maybe you don't want a really good decoupled design because you just think it's stupid to write test first. But there's one other thing you get out of it. In fact, you can throw all that other stuff away. It's one other thing you get out of this. You get a suite of tests that you trust with your life. You get a suite of tests where you can click a button and see all those tests pass and you know the system works. 
And when you have that and then up on your screen comes some dirty code and you think, oh, I should clean it, you've got those tests. So you try. You make a small change. Maybe I can change that variable's name. And you run the tests. Oh, they still pass. Maybe I can take that function and split it into two functions. Run the tests. Oh, they still pass. Maybe I can take that class and split it up into two classes. Run the test. Oh, I don't know if I can walk, put it back. When you have those tests and when you trust them, you can clean the code. You will clean the code because suddenly the cost of changing code approaches zero. The risk goes way down. You've got a suite of tests that tells you it's okay. Don't worry. I'll just make a little change here. Oh, the tests still pass. This is so nice. Of course, the tests have to run quickly. It doesn't help you if the tests take five hours to run. So you need the tests to run like this. That's a design challenge, by the way. I pose it to you as a design challenge. Keep your tests running fast. Your designers, you can figure out a way to do that. Software is a sensitive discipline. And by that I mean that if I have a binary executable that's maybe a megabyte long, how many bits are in that? Eight million bits, small program. Eight million bits. How many of those bits could I flip from one to zero or zero to one and crash the system? Well, there is one in there that will do it. How many of those one bits are in there that will do that? Probably thousands. There are probably thousands of bits inside that eight million, which if I just flip them from one to zero, it will crash the system. Our systems are sensitive to failure at the bit level. Hardly any systems are that sensitive. We could walk around this building and find some bolts and some girders, take a few of them out. The building wouldn't fall down. We could go out onto the highway and start removing bolts off the bridges. The bridges would stay up there for a while. We could go out drinking tonight and kill 10,000 brain cells. Probably enjoy the process. Humans are not sensitive at the one cell level. Bridges are not sensitive at the one bolt level, but software is sensitive at the one bit level. The only way to create correct software is to make sure that all eight million of those bits is correct. That's hard to do. Most engineering disciplines don't have that problem. Most engineering disciplines live inside a robust world where there's room for error. We don't have that. What other discipline is similar to ours? Well, there is one. It's accounting. The accountants, they work with spreadsheets and there are digits on those spreadsheets, not all of them, but certain digits on those spreadsheets which if they get wrong will take the company down and send the executives to jail. How do accountants prevent that from happening? Sometimes they don't. But how do they try to prevent that from happening? They have a discipline. That discipline is called double entry bookkeeping. They say everything twice. They enter every transaction once on the debit side, once on the credit side, and those two transactions follow separate mathematical pathways until there's this lovely subtraction on the balance sheet that must yield a zero. If it doesn't yield a zero, something went wrong. If it does yield a zero, we hope nothing went wrong. It's possible that you could make complementary entries that gave you a false zero. That can happen. The odds are very unlikely. 
No judge would throw you in jail if you destroyed the company based on that kind of an error. When accountants enter their transactions, they enter it in the debit side, they enter it on the credit side, they get the zero on the balance sheet, then they enter the next ones. They invented this 500 years ago, probably after a king started cutting off some heads. Test-driven development is double entry bookkeeping. We say everything twice, once on the test side, once on the production code side. We say them at the same time, just like they enter the transactions at the same time. We do that for exactly the same reason. We have to get all our bits right. They've got to get all their digits right. We've got to get all our bits right. How can we as professionals treat our material, our code, with less respect than accountants treat theirs? Are they better than us at the moment they are? But should they be? Is their stuff, are their spreadsheets more important to the company than our code? Why do they have a discipline like that? Do we don't, or at least we don't follow it? Why is it that we, thumb our nose and say, ah, there's test-driven development stuff. Let me write the code. Can you imagine an accountant saying, ah, there's double entry bookkeeping stuff. Let me just enter the transactions. I'll enter them right. What a lot of crap. Do accountants have a QA department that they hand their spreadsheets to? Here, we're done with our spreadsheet's QA. Now see if you can find any problems with them. Why do we have QA departments? Why has our industry invented an entire department to make sure that we don't screw up? Why? Because we screwed up. And our companies are going, ah, well, what are we going to do about these developers? They're just pushing crap out to the door. And, well, let's create another group, QA. That'll make sure it doesn't happen. And then you know what happens. The developers say, ah, we got a QA department now. That means I can meet any deadline. Give me a deadline. I'll check the code in by that deadline. It'll have some bugs, but that's their problem. I worked for a company once where everything was always done on time. Right? The schedule was perfect. Everything was always done on time. Sometimes a programmer would achieve his goal by checking in a blank source file. There was nothing in it, but he checked it in. Okay, check that off the schedule, and then QA gets it. Hey, wait, there's no code here. Well, that's a bug. Put that on the schedule to fix it on the bug. That's not on the development side of the schedule. If you are practicing test-driven development or any kind of testing discipline, the goal is to cover the code with tests to make sure that you have tested every line of code you write. Does it make sense to have any lesser goal? For example, should you be satisfied with 60% code coverage? Let's say you've got a tool that measures code coverage, and it says, oh, 60% of your code is covered. Is that good? A lot of us would look at it and think that's great. But what it means is that you don't know if 40% of the code works. Is that acceptable? No. What if an accountant said, 40% of our transactions are double entry? Yeah, they're 60%, we're not so sure about. But 40% of them we're pretty sure are right. Isn't that good enough? No, of course it's not good enough. There is no meaningful goal short of 100%. If you're looking at code coverage, that's the only number to think of. How am I going to get this up to 100%? You never will get it up to 100%. That's okay. 
It's an asymptotic goal. You never fall for the silly line that says, you know, 70% is pretty damn good. No, it's not. It's awful. 70% is terrible. It means there's 30% of your code that you've got no idea about. This is the end result of QA. When you write a bunch of code, you hand it to QA, what should they find? Nothing. QA should find nothing. They should wonder why the heck they have a job. And that should be your goal. You should be looking over at those QA guys and going, you're not going to find anything. Next year you won't be here. You'd like to take QA and have them do a slightly different job, actually, because having them work at the back end is an awful thing to do to people. Imagine that this is your job. You're going to, has anybody seen what the QA test plan looks like? There it is. Anybody seen the QA test plan? That's not the QA test plan. That's the table of contents for the QA test plan. The guy holding it out to me is the QA manager of a large internet travel organization. He's got 80,000 manual tests that he farms out to India every six weeks. It costs him a million dollars every six weeks to run those manual tests. He's holding it out to me because he's just come back from a meeting with his boss who just came back from a meeting with the CFO. The CFO said, what the hell is this million dollars every six weeks that we're spending? He's talking to me now saying, I just got my budget cut by 50%. Which half of these tests should I not run? I told him, it doesn't matter. You can cut the document any one of a bunch of different ways. You're not going to know if half your system works. This is the end result of manual testing. Manual tests grow and grow because the features of the system grow and grow. At some point, the effort required to execute those manual tests will become so expensive that some financial officer will spot it and say that must be cut. You will lose your tests. It will happen. The only solution to that is to automate those tests. I don't want QA finding anything and I want all the tests that QA writes. I want QA writing the tests, by the way. I want all the tests that QA writes to be executed automatically. I don't want any manual tests in the system. Manual tests are fundamentally immoral. That doesn't mean that you can't have people sitting there operating the system. I like the idea of exploratory tests, but I don't want them given a script. These are the tests you must pass. Imagine that's your job. You've got to follow that script. Has anybody looked at what that's like? Enter P9-537 into the username field. Hit return. Enter QX723 into the password field. Click login. Did the welcome screen come up? This is your job. And you're going to repeat that every six weeks. One last. One of the things a professional learns how to do, and it's very, very difficult, is to say no. If your boss comes to you and says, we've got to have this by Friday, and you know it cannot be done by Friday, you must say no. You don't understand. We must have it by Friday. No. Now, you understand that if we don't have this by Friday, we're going to lose a lot of money. I understand that. So will you get it to us by Friday? No. Why not? Because it's not possible. Well, can you try? No. You must never say that you will try. What behavior are you going to change because you said you would try? How will your strategy now be altered because you said you would try? How are the odds of succeeding made different because you said you would try? 
The reason you said you would try was to make him go away. And what he heard was that you would do it. And what you meant was get out of my face by saying that you would try, you told a lie. And that lie will come back, of course, because you won't get it done by Friday. And then they'll say, well, you said you'd try. Well, yeah, I tried. Well, you didn't try enough. Thank you for your attention. I'll see you all at another talk. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.
We’ve come a long way in the last 20 years. We start our journey in the late 80s with our "discovery" of design principles such as The Open Closed Principle and the Liskov Substitution Principle. In the middle 90s, we discovered that these principles led to repeating patterns of design. We gave those patterns names such as Visitor and Decorator. At the turn of the millennium we found that the benefits gained from the principles and patterns could be amplified by combining them with practices such as Test Driven Development and Continuous Integration. And now, as the decade is coming to a close, we have found that these principles, patterns, and practices have driven us to define a true profession. What will that profession require of us, and who among us can truly claim to be professional?
10.5446/51158 (DOI)
Does it work? No. Does it work? Yes. Good afternoon. I've been experiencing quite a tremendous evolution in computing. Most of it in hardware. That's revolutionary. But sometimes I wonder, has anything happened in software really, you know, at the bottom? And I'm doubtful. Sometimes I think that software evolution has followed more or less a path like my life. When I was 20, everything was okay. When I was 40, I started needing reading glasses. When I was 60, I needed a hearing aid. And at 80, I also needed a walking stick. Has software developed the same way? No, not quite, I think. But software has not had, you know, a steady progression from better to better to better. To me, it's more like an envelope, you know, the curve that goes like this. Round and round and round. There's a slight movement, but very small. And very often I get a very horrid feeling: haven't I experienced this before? Didn't we find that that was useless some time ago? Winston Churchill once said something like, people who do not know history are doomed to repeat it. And in our business, we are at least doomed to repeat all mistakes. I wrote my very first program in about 1957. And my programming technique was: I thought of something that could be nice to get running on a computer. I wrote some code and put it into the machine, binary of course, everything binary. And then I sat in front of the control panel, changed the code, added some new features, removed some bugs, added more features. The result was disastrous. Today it's called extreme programming. My programs in the 50s had this characteristic that if I made a completely simple change in one corner of the program, something broke in the opposite corner. Spaghetti wasn't the first name for it. Then in 1960 something got more serious. I became we, a team. And actually we had real financing. We had real visions. We had real users. And we knew that what we were going to attack was going to be a very large and very hard and difficult project. So we couldn't use the programming techniques from the 50s. We had to do something better. And the first sort of slogan was: any method that prevents the programmer from writing code is a good method. Think, think, think first, design, and then write the code when you know exactly what you want to do, and then do it like that. Now, all through the 60s we were actually working on the same system. It was, I think, the world's first system with a database-centered architecture. It was the first software product to be put on the market. Anyway, through the 60s we evolved our software engineering techniques. And by the end of the 60s we got it right the first time when we wrote code. The last program we did, I admit it was small, it was Fortran. Three out of four subroutines worked first go. The whole system worked immediately. No bugs. And that's the kind of world I like. And then, in 1970, I had to move on. And then I wanted to move on to what I thought would be the thing for the 70s: communication. Distributed systems. Simula was invented by Dahl and Nygaard in an office across the yard from my office, both in Oslo here. And communication, that's objects. So I wanted to go object oriented. And to make a long story short, none of the things we had found out in the 60s for programming worked anymore. And what's the reason? Actually, if you see the Gang of Four book on design patterns, they say this.
An object-oriented program's runtime structure often bears little resemblance to its code structure. The code structure is frozen at compile time and consists of classes in fixed inheritance relationships. The runtime structure consists of rapidly changing networks of communicating objects. It's clear that code won't reveal everything about how a system will work. And these guys wrote this and didn't do anything about it. I mean, I'm shocked. These are first-class, bright people, and I can't see that they wanted to do something about it. I mean, today, people are creating mission-critical programs that contain instructions that have been written once, never read, never executed. And they say, this program is now tested. Here you are. Use it. I have no words for it. I, of course, have been hitting this wall also since 1970. We have been programming our stuff. But it's hard, it's hard, hard to get it right. And then I retired, and I thought, now I want to do something about it. And I did. And that's what I'm going to talk about today. How to do something about that, how to make code that reveals everything about how the system will work. The runtime. If you heard the keynote this morning, you will realize that the user, the end user, is a hero. He is the purpose of everything we do. It's actually where value is created. Value is only created when a user executes a program and it has some benefit for them. And the code does not tell us what happens. How can people live with that? I don't know. Anyway, the end user is the hero. To users, a system is the user interface. The things behind it are just props that they are not interested in and shouldn't be seen. So today I will focus on the end user's mental model. The mental model that the user has. And then, and this is also something very, very old, Douglas Engelbart, at the end of the 50s, I think his paper is from 62 or something, he said he wanted to use the computer to augment the human intellect. An extension of the human brain. That requires that there is a sort of harmony between what happens within the computer and what happens in the user's brain. And that takes us a bit beyond what we heard this morning. It's not only a question, well, that might be empathy for all I know. But it takes us beyond the user interface and into actually the symbiosis between user and computer. And in my mind, sort of, what I'm talking about is that objects, not classes, but objects are a very good thing to use as a building block for building mental models, for building programs, and for explaining what happens at runtime. Objects were originally invented by Dahl and Nygaard, as I said. This is an example from the first textbook for Simula. And that was a simulation example. And it was a simulation of a post office. So they say that a post office, oh yes, we have counters. In front of the counter there's a queue, there's always a queue in front of the counter in a Norwegian post office. And behind the counter you have a clerk. The queue consists of customers. And customers, well, they have a list of tasks they want to perform when they get to the clerk. And also they know how to perform those tasks. State and behavior. And that's what they then combined into what's called an object. So an object: its name is customer, it has a list of tasks, and it knows how to carry out those tasks. A very, very powerful idea.
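To make the state-and-behavior idea concrete, here is a tiny sketch of that post office customer as an object, written in C#. The member names are mine, not from the Simula textbook; the only point is that a single object carries both the data (its list of tasks) and the behavior (knowing how to carry them out).

    using System;
    using System.Collections.Generic;

    // One object: state (the list of tasks) plus behavior (performing them).
    public class Customer
    {
        private readonly Queue<string> tasks = new Queue<string>();

        public void AddTask(string task)
        {
            tasks.Enqueue(task);
        }

        // Called when the customer reaches the clerk.
        public void PerformTasks()
        {
            while (tasks.Count > 0)
            {
                Console.WriteLine("Performing: " + tasks.Dequeue());
            }
        }
    }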
Nygaard and Dahl got the Turing Award, and in the citation there they say Simula has its strength in being used for describing a system, making me understand the system. And then of course for programming. But that's incidental in my mind. The point I'll make here is that people, users, appear to find it natural and simple to think in terms of objects. Things that can do things: objects can do things and remember things. I like to think of them as perfect bureaucrats. They work in a bureaucracy together with others. They do things and they always, always follow the rules. They never do anything outside the rules. However stupid it is, they'll follow the rules. And my objects always do. So that's the first: Simula, and object thinking. Then the next one is the inventor of the term object orientation, Alan Kay. He came to that, and he made the first pure object systems in Smalltalk. And his idea, his dream, was a Dynabook. That's a personal computer that contains all the data wanted and needed by a person. And then he said, and that's important, the most important data are the data that control the execution. The programs must be personal too. Can people, ordinary people, learn how to program? How do you test that? Well, remember this was a time when our research institute had one computer. At Xerox Palo Alto Research Center, when this happened, he had his own computer, and he got in some kids from a nearby school, and they got a set of computers to play with. And the result was yes, yes, yes, yes. No problem. Of course I can think in terms of objects. Of course I can program in terms of objects. And you see it on the wall. You see drawings. These are not drawings that were drawn by the kids. These are drawings that were drawn by programs created by the kids. And on the screen, which you may or may not be able to see, there are some horses, and what they were doing there was to create an animation with horses running around the ring in a circus. So a kid could do it. And of course, in one way, Alan Kay was wrong. He thought if kids can do it, anybody can do it. That's wrong. Grownups can't do it because they know they don't know how to do it. Kids are used to learning new things, so that's quite happy for them. But we go on. Some years later, we had a project with some telecommunications people, and at that time the telecommunications industry was interested in something called intelligent networks. In the old telephones, all the intelligence was in the local office. And now, of course, it's out in the PCs. But this was then: how to structure, what are the parts, when you're thinking of intelligent networks. Fortunately, we don't have to go into the details. But we did that by making object models of what intelligent networks are. And in those object models, it's actually not the object itself, but a role that the object plays in this setting. Of course, no problem. Objects are well suited for the human mind for building mental models. Here's another example. We did role modeling and we had some project for banks. Here's a business organization, and you can see it by giving it a structure. The people in the organization have certain jobs to do. And now I want to see what happens when somebody in this business organization wants to travel, let's say to a conference or to an NDC or whatever. And then we have to model the process. First, who's going to travel? Well, it happens to be Peter.
And then the first thing that happens is that the traveler asks somebody responsible for permission to travel. I called it the authorizer. And the authorizer says, okay. And then the traveler travels, fills in the expense report, sends it back to the authorizer to sign it, to say this is okay. The authorizer sends it on to a paymaster, who then will check it and sign it and then send it on to a cashier for payment, and the traveler gets his money. This is a simple business process. In business process modeling, we always think of people having roles. So we call this, what we see in red here, we call that the context. There's a context for everything that the system needs to do, for each use case if you like. And it's split into roles, and the roles do things, and together they do a job. Back, actually, to this morning's talk, really. Back to the user, our hero. The user accesses a system by giving commands and getting responses. But there's much more happening, because by using the system, the user gets an intuitive feeling of what the system is. And if you think now of Engelbart, computer augmentation, there is magic if there is resonance in the computer with what's in the user's mind. And this is what we would look for. We will try to build a system so that the user gets no surprises. Preferably, the user should have a feeling, anyway, of what goes on in the computer to do the job for him. The kids could do it. The Simula users could do it. The telecom people could understand it. Why not even a banker? So what I'm after is to use object-based descriptions to describe what's in the user's head, to describe what's in the programmer's head, to describe what's in the computer, what's the program. It should all be the same, almost. So what's this? Example. What's this? I am willing to bet that there's somebody in this world, probably only one, who will accept it when I say, oh, it's a piece of art. Or this. It's all in the eye of the beholder. And if now the beholder, the one looking at this, actually, it could be me, working in a project, I would recognize the activities A, B, C, and D and think: activities in my project. I would recognize the ones in red as the activities I need to do. I would understand that this would be part of my mental model of what planning is all about. Activity A, if it starts in week one and takes six weeks, when is it finished? In week six. And then B can start in week seven. C can start in week seven. And D? B takes four weeks, starts in seven, finishes in ten. Then D should start in eleven, but it can't, because it has to wait for C, right? I don't have to say more about that. So that's the project worker's mental model. And if that project worker also has a mindset of objects, he will also see these activities as objects. So he will expect to find properties for them and expect to find descriptions of how they work together. There's another one. What's this? Well, actually, it isn't a plan. I know it isn't, because I made it. It's a PowerPoint drawing. All right? Where is it? Let me see if I can find it. Something can happen. A drawing. So I can add circles and lines and I can do things. I can pick up this box and I can move it. Oops. I moved it and two lines moved at the same time. I'll come back to that. It's very important. It's not one object that was involved in my operation. Two. Let me undo that so that I don't damage my presentation. So it's a drawing. What is this thing if I am a programmer programming PowerPoint?
So then suddenly there's a completely different model. The mental model is completely different. I say that we have the class Shape, and its subclasses are Line and Circle and Rectangle and Group, and an instance of Group is actually the whole drawing. However, nobody, apart from a very, very, very specialized mind, will ever think of a drawing like that. It's very, very artificial. And the trouble is, as programmers, it's all we have got. How do you expect this to explain what happens at runtime? It has nothing to do with it. Where do you put the knowledge that I want to move two arrowheads when I move a box? None of these classes knows about that. This is my main theme. The term object orientation was coined by Alan Kay, and he defined it in computer terms: Smalltalk is a recursion on the notion of computer itself. Instead of dividing computer stuff into things each less strong than the whole, like data structures, procedures and functions, things like that, each Smalltalk object is a recursion on the entire possibilities of a computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network. So the essence of object orientation is that we have fully capable objects communicating. An object on its own is not interesting. It's only when objects cooperate that it is interesting. And you can't express that with a class. Let's see what his vision looks like. Because I thought, well, it's all very well to say those words, but what do they mean? So I made an animation. What it means: we have a universe of objects. And they come and go as execution goes on, right? In a typical system that I'm running, where I've checked it, there are about 300,000 objects. Each of them is my slave, doing work for me in the way that I want it to work. And as I say, an object by itself is not interesting. It's only when they communicate, when they work together, that it is interesting. Objects flow. So in order to find this, of course, we had to trace what happens at runtime. And you see there is a repetition. This is meant to illustrate a use case that runs again and again and again. And you see there's something here we would need to capture if we want to describe it. First is, of course, to see: well, what if we record the object identities, the objects? And we will find that there's nothing here. No pattern at all. The classes? Classes, I know, classes are wonderful. So we record the classes of the objects. Here they come. Stars, circles, stars, circles, stars, stars, circles. No pattern there either. So we say, well, let's catch the structure and have a name, name the objects. I can't do it fast enough. We name the objects, and as I've said before, we call them roles. And then, of course, there is a pattern. Roles one, two, three, four, five; one, two, three, four, five. That's what we see. And that's what we do. We describe what happens at runtime by capturing the topology of the network of communicating objects. The actual network changes all the time, but the topology is the same. And then we attach code to each of the roles, and then we have code that describes exactly what happens at runtime. Come on. Stop animation. Close that. Am I where I want to be? Yes. So that's what it's all about.
Capture the runtime interaction between objects, described by the topology of the network of communicating objects, and then describe that statically, and then we know what's happening. And then we have code that describes what happens at runtime. And then we have to get a bit technical, because how do we capture system behavior? Well, we have seen we now want to do front loading, what's called front loading, and that is to compute the early start for each activity in turn. At the moment, we cannot compute the early start of F, because the formula down here says the early start is the maximum of the early finish of the predecessors. All predecessors have to be known before we can take this one. And that means we have to be careful when we select which object to focus on at one time. The D we can do, because both B and C are known. And this is the hardest part of my talk. So we select an activity and say this is the one we want to look at. We call it the current activity, and we select one which we now can compute. And then that current activity has predecessors, and these are the two roles we're interested in. So we have selected objects to play roles, and then we can compute: early start is the maximum of the predecessors' early finish. And we do that in the code attached to the role itself. So regardless of the class, regardless of the identity of the activity object, it will do exactly what we say. There's no polymorphism at this stage. So we can do the formula, and we can compute the early start and early finish of the current activity, record it, and then that's done. And then the next one, of course: we do activity E, and we say activity E is now the current activity, and we do that one, and we apply the formula, and we go on. Okay. I said that it would be very nice if we could develop systems together with users and develop the algorithms for this kind of object interaction. And from the other things I've said, and from seeing how people react, it is possible for them to understand what we're talking about. One common method for finding the algorithm together with users is the CRC cards. It used to be class, responsibility, collaboration. Rebecca Wirfs-Brock was the one who made responsibility-driven design, and I once had a quite heated discussion with her. Classes can't have responsibilities. Classes don't do anything. Objects can have responsibilities. Objects can do things. So in this role and object space, the responsibility belongs to the role and not to the class. And she was very quiet for a while and then said, yes, you're right. And if you read her latest book, you'll find that she's talking about roles and not classes in this context. And the CRC cards, two of them here; there's one card for each step in an algorithm. So we have a card for the role current activity. We note its responsibility: it is to compute the early start and finish for an activity. And it has collaborators, the predecessors. It talks to the predecessors. It gets what it needs to compute the formula for the algorithm. And then of course there's the other one, the predecessor, and the predecessor is not dead, but it doesn't do anything. It's just a holder of information. So its responsibility is to know the early finish, and it doesn't collaborate with anybody. Now, CRC cards are used quite a lot to elicit, I think that is the word, the algorithms, what the system shall do. This is a lean principle: everybody, all together, from early on. So this way developers can get together with the users and find out what the algorithm is for doing this particular use case.
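To make the front-loading interaction concrete, here is a minimal sketch in C#. It is not Reenskaug's own implementation (his role methods are attached to the roles themselves); here the role logic simply lives in a context class, and all class and member names are illustrative. The context repeatedly selects a current activity whose predecessors are all planned, binds the predecessor roles, and applies the formula: early start is the maximum of the predecessors' early finish.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Data: a plain activity object. It holds state and knows nothing about planning.
    public class Activity
    {
        public string Name;
        public int Duration;                          // in weeks
        public List<Activity> Predecessors = new List<Activity>();
        public int? EarlyStart;                       // null until planned
        public int? EarlyFinish;
    }

    // Context: the front-loading use case.
    public class FrontloadContext
    {
        private readonly List<Activity> activities;

        public FrontloadContext(List<Activity> activities)
        {
            this.activities = activities;
        }

        public void Frontload(int projectStartWeek)
        {
            while (true)
            {
                // Role binding: the current activity is one that is not yet planned
                // and whose predecessors are all planned.
                Activity current = activities.FirstOrDefault(a =>
                    a.EarlyStart == null && a.Predecessors.All(p => p.EarlyFinish != null));
                if (current == null) break;

                // The interaction: ask the predecessor roles for their early finish
                // and derive the current activity's early start from the maximum.
                int earlyStart = current.Predecessors.Any()
                    ? current.Predecessors.Max(p => p.EarlyFinish.Value) + 1
                    : projectStartWeek;

                current.EarlyStart = earlyStart;
                current.EarlyFinish = earlyStart + current.Duration - 1;
            }
        }
    }

With the A, B, C, D example from the slide (A starts in week one and takes six weeks, B and C follow A, D follows B and C), this context plans D to start only once its slowest predecessor has finished, exactly as he walks through it.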
And this way the user can get an impression of what the system actually does, if we program exactly what we find. It's astounding. So what is it we are doing? If we go back to programming: what's the point here? What are we doing? Here is a system, or a universe, of objects. Instances of some classes. One more. Doesn't matter. But we have some objects. Then we have a use case. A system command, if you like. A user gives a command. And that will trigger some method in some object, which in turn will trigger another method in another object, which in turn will trigger another, and so on, like this. But I've dotted the lines, because nowhere in the code can you see those lines. You only see the snippets of behavior in the blue methods. And then of course there are several other things we can do. Like this. Now the red one is interesting, because the next time I run that use case, my method is in this object rather than that object, which could be of a different class. And you see now why code gets very unreadable. Because each class is filled up with bits, I would call them noodles, of different interactions. But we never see the interaction itself. And that's why the Gang of Four could say that the code doesn't tell us what happens at runtime. Of course not, because we only see snippets, and we don't see the whole. So what we do with DCI, which I'm talking about, is that we pull out the pieces of code that have to do with the different use cases, for example. And now the lines are whole, because now they are separated out in contexts, which I talked about for the travel expense example. We see the roles together, and we see the code that drives the roles together, and we see the whole algorithm together. Different algorithms for different use cases. And we package the roles in contexts, one for each use case. That's what we do. And now, okay, and of course the role binding. Now the yellow things, the classes describing the objects, are very, very simple, because there's no interaction code there. Each object is a self-sufficient thing, standing by itself. So we simplify the classes we used to write, and we simplify the interaction. This means that we are now looking at the world of objects from two different perspectives. If you program with classes, you are actually positioning yourself within an object. And you can see everything about that object. What are its attributes? What are its methods? What do the methods do? You can even find out what I do when I receive a certain message, but you cannot even define the idea of why you receive that message or where it came from. You can't see that when you are inside the boundary. And here I have not sent any messages out, because if I send messages out, that means I'm in an interaction, and that doesn't belong here. That belongs in the other view. And the most important view, in my opinion: when I observe the system from within, from between the objects, I can see the messages flowing. I can trigger certain use cases and I can observe what happens. And you see, those are two completely orthogonal ways of observing a system. And with the class, we only have one of them, which I think is a very, very sad restriction. We want both. To sum that up: a class says everything about the inside of an object and nothing about its environment. I say "says" here, but of course you can cheat.
A DCI context says everything about the network of communicating objects and nothing about the inside of the objects. I'm trying to say, when I talk to people, that these are actually two fundamental abstractions on objects, the class and the context, and they are orthogonal. People don't catch on to that, but never mind. I think that's true. Now, what's happening? There is of course quite a lot of technology here. People are doing DCI, the two different points of view on objects, and they're trying to implement that in different languages. C++ was one of the first. My colleague Jim Coplien, together with his wife Gertrud Bjørnvig, has written a book about lean architecture. It is based on the idea that you separate the system: you separate the description of what the system is, that is the domain objects, isolated entities, and what the system does, which of course is different for every use case. And he's very into lean. I think actually we were doing lean in the 60s too, but never mind. Everybody, all together, from early on. You want to have your users with you. We had that at one stage: we wanted to hire a programmer to do something for us, because we didn't have the capacity to do it ourselves. And we had a meeting with him, and we started talking, explaining what we wanted, and he interrupted after, I think, two or three minutes and said, oh yes, I understand. And then he spent ten minutes telling us what we wanted. And when we then said, well, that wasn't what we wanted, he was angry. How could we dare to say that he was wrong? But he was. Get people together, every time. Get the users to understand the algorithms he proposes to make and how they work. That is very, very important; that was, I think, the essence of this morning's keynote too. System architectures that reflect the end user's mental model. So this, I mean, huge reports, much documentation about architecture: Jim Coplien doesn't believe in that at all. And as I said just now, the architecture has two parts: what the system is, its state, and what the system does, its behavior. And as I showed, there are two different observation points when you look at your objects for those two. So they're fundamentally different. No waste. And this looks like a contradiction: get it right the first time, and be prepared for change. Get it right the first time means that when we implement something, it works and we're ready. But we talk to our users all the time, and of course we learn as we go along, and we find out that we want to do it slightly differently, and then we move on and make a different program, and get it right the first time. People's reliance on testing to get quality into a program is surprising. In no industry, no engineering discipline, nowhere, can you ever test quality into a bad product. There's another one. Jim Gay. He's a guy who is writing a book, a guide to doing DCI in Ruby. He calls it Clean Ruby. And I'll read this blurb, because it's also a blurb for DCI. Make your code more obvious and put the logic right where you expect it to be. Read Clean Ruby and enhance your MVC; he has that too, and they work together very nicely. Write code that announces how it works with the techniques in Clean Ruby, and your new developers will not be interrupting you for instructions. You know? The code tells the story. It's so much easier for new people coming into the project. It's so much easier for maintainers coming in, being asked to change or do something to the system.
If the code tells you what it actually does, it's as simple as that, and it's as hard as that. And our experience is that quite a lot of people now are interested in DCI and working with DCI and different things. And the leap of the mind to go from pure class-oriented programming to object-oriented programming is very, very hard. It's always very hard to change one's mindset. And this requires not a change but an addition to the mindset. So it's not easy. So, goodness gracious me. Okay. It's your problem. I'll be finished before time, I'm sure. I'm very, very sorry. DCI, you know, I could talk about a good old database. The DCI paradigm, it works with code in different perspectives. D stands for data: objects that belong to what the system is. They're domain objects, but they're also other objects that we need. And they are what I call restricted; restricted means that the object is not allowed to send a message out to its environment, because then, you know, from the class you go out into space. You don't know what's happening. So it's not allowed to do that. We do that in a second. Oh, goodness gracious me, it's alive. That's where we are, very much. How do I switch that off? I mean, this makes me mad. I'm really mad. I think it's a touchpad that's alive. My sister-in-law is 92, and she was given, bought, a new computer, a new laptop, and the touchpad, I mean, it was really, you couldn't come near it without lots of things happening. She didn't want that. She didn't understand what was happening. I spent an hour trying to find how to switch off the damn thing. Goodness gracious me. Has it progressed at all? Context for each use case. You see, I'm feeling out of time now. Context for each use case. Now we are outside and seeing the objects from the outside. It's what you would call full OO. So the context marshals, when you want to do something, it marshals the objects that are going to play the roles, new ones every time. That's one half, and the other half is: it creates the runtime communication network, and then it kicks off the interaction, and the interaction is described by the methods or the procedures attached to the roles. It's as simple and as complicated as that. Two out of two users, so there's hope yet. Dijkstra, the greatest, I think the greatest thinker in software ever, he observed: testing shows the presence and not the absence of bugs. You can test. We actually left off testing as a main means of quality after we had thought about it for about a week. No way we can test the system. We just said, okay, we'll use testing to confirm that we're doing everything right. And the only thing we will require is that, to have a tested system, every instruction has to be executed at least once. That's all we require, a very, very weak requirement, and people don't even do that. Dijkstra also said, well, debugging: the best way to avoid bugs in the program surely is not to introduce them in the first place. That's quite sensible, isn't it? But then you have to think, which is harder. So we say: use a good design, a separation of concerns, as I showed with these separations, in order to avoid complex faults. Because by this separation, obviously, we can add a new use case and we don't have to touch the old ones. It's separate. So much easier, so much simpler, so much more productive. DCI gives readable code that we can reason about.
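As a structural illustration of the Data, Context, Interaction split he has just described, here is a small sketch that reuses the travel-expense roles from earlier in the talk. All names are mine and nothing here comes from a published DCI implementation; also, C# has no direct language support for injecting role methods into objects, so in this sketch the role logic simply lives inside the context.

    using System;

    // Data: what the system is. Plain, self-sufficient objects with no interaction
    // code and no outgoing messages to their environment.
    public class Person
    {
        public string Name;
        public decimal Balance;
    }

    public class ExpenseReport
    {
        public Person Traveler;
        public decimal Amount;
        public bool Authorized;
    }

    // Context: what the system does for one use case ("pay travel expenses").
    // It marshals the objects that play the roles, then kicks off the interaction.
    public class PayExpensesContext
    {
        private readonly Person authorizer;
        private readonly Person paymaster;
        private readonly Person cashier;

        public PayExpensesContext(Person authorizer, Person paymaster, Person cashier)
        {
            this.authorizer = authorizer;
            this.paymaster = paymaster;
            this.cashier = cashier;
        }

        // Interaction: the whole algorithm in one readable place, in terms of roles.
        public void Execute(ExpenseReport report)
        {
            // Authorizer role signs the report.
            report.Authorized = true;
            Console.WriteLine("{0} authorized {1:C} for {2}",
                authorizer.Name, report.Amount, report.Traveler.Name);

            // Paymaster role checks it before payment.
            if (!report.Authorized)
                throw new InvalidOperationException("Report was not authorized");

            // Cashier role pays the traveler.
            report.Traveler.Balance += report.Amount;
            Console.WriteLine("{0} paid out {1:C}", cashier.Name, report.Amount);
        }
    }

Adding a new use case means adding a new context; the Person and ExpenseReport classes stay untouched, which is the separation he credits for the readability.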
You can give interaction code to a colleague to read, and he'll be very, very happy when he can find your bugs. It's much more fun to find bugs in other people's code than to find them in your own. And DCI gives powerful mental models, because now the DCI model is shared by the user, by the programmer, and by the code. No surprises. And you get to computer augmentation, extension of the human brain. That's what you can achieve. Yeah, use code reading to get this right, and use tests to confirm that you got it right. That's all. This is Alan Kay's children programming, and we saw that this morning too. Using a program that really responds to what you need is fun. So it is. I'm an engineer and not all for all that fun, but it can be fun. Okay, that's all. Thank you. There is room for comments and questions. There is a slight problem with it, because you can ask a question, but I probably won't hear it. I was at a conference where one guy asked a question and the speaker gave a long answer, and the questioner said, I apologize for asking a question so far beside your answer. Anyway, any questions, any comments? Dead. Okay, but then thank you again. And that's all.
Nygaard and Dahl invented Simula’s classes and objects to master complex problems. Alan Kay invented object orientation (OO) with its networks of communicating objects to facilitate simple and powerful mental models. His goal was to make computers and programming comprehensible to children of all ages. Mainstream programmers missed the importance of communication in Kay’s OO and misused the term. A better name for their approach is ‘class orientation’.I have extended Kay’s original OO with explicit concepts and code that specify how objects interact at runtime. With this new paradigm, Data-Context-Interaction (DCI), programmers reason about their code, new team members get quickly up to speed, maintainers lead a better and more productive life. The DCI Context is a new abstraction that supplements the ubiquitous class. While a class says everything about the inside of an object and nothing about the objects surrounding it, a DCI Context says everything about a network of communicating objects and nothing about their insides. The class is great for describing autonomous objects such as the domain objects in the Data part of DCI. The Context is great for describing how use cases are realized by networks of communicating objects. Communication is now a first-class citizen of programming.
10.5446/51128 (DOI)
Come on guys, it's, I know it's after lunch, it's security, but still. Hi. Hi. Thank you. So we only have 60 minutes and this is actually quite a big topic, so let's get started right away. So exactly one year ago I was standing here and basically saying something like, I think in, you know, in one year's time you will all do claims-based identity and claims-based security. And it turns out that this will be true, because the moment you are compiling your application against .NET 4.5 you will be a claims-enabled application. And that is a big, big change that Microsoft did, changing the guts of the .NET framework to new base classes and so on, to give you this modern idea of claims-based security. And my assumption is that everyone in this room does some sort of security work, meaning he writes an application that requires a login, he writes an application where users are authorized after they have authenticated with the application. So this is really, really important for you, because this will change things substantially, how they work in the new version of .NET. Okay. So just, you know, the typical blurb. The only important, or the two important, things on this slide are my email address, so if you have any questions after the talk about that, feel free to write me, I'm happy to help you, and this here, the link to slides and demos and everything I'm showing you during actually all my talks. So you can download that from the goo.gl link, but I will show it again to you at the end of the presentation. Another nice thing is I've been working with Microsoft to produce this book here. It's called A Guide to Claims-Based Identity and Access Control. It's quite nice. It's geared towards beginners. So if you want to start with that stuff and you don't really know where to start and, you know, the design issues around that, it is basically divided into a number of chapters, talks about ASP.NET, WCF, SharePoint, Azure and all these things. And this was only available as a PDF to download from Microsoft. And I asked them and they were kind enough to give us 100 copies for free. Okay. So I have 20 or 30 copies with me here. We'll have 30 copies in my talk tomorrow and in the talk after that 30 copies again. So it's first come, first served. If you want to have that book, just grab it. Okay. Let's get started. So actually, how many of you, by show of hands, have used this library called the Windows Identity Foundation, WIF? Okay. A number of you. So for those, the introduction will be a little bit boring, but we also have people who didn't do that. So I want to give you basically a little introduction: where we are at the moment, where we will be in around three months' time when .NET 4.5 is released, what has changed in between, and what the benefits of that are. Okay. Cool. So basically, since the very first version of .NET in 2002, Microsoft thought, like, let's have a standard interface, or a standard API, in the .NET framework itself for how an application can represent its current client. Okay. That may be the guy sitting in front of the computer, like in a desktop application, or maybe the guy that is initiating a request to a web application, or maybe the guy that is initiating a request to a WCF service, for example. All of these applications have one thing in common. They typically need to know: who is my client, so I can make some decisions based on that identity information I get.
So as you can see, and I guess everybody has seen them already in his life, there are two interfaces in .NET called IIdentity and IPrincipal. And the idea of IIdentity was that it describes the name of the user, so it has really only one very interesting property, called Name. And the idea of IPrincipal was that it can provide APIs, or a single API in that case, to do authorization. And as you can imagine, when .NET was released in 2002, these APIs were created, like, in the late 90s. So the whole thinking around that was about role-based security. So if the user is in the role, whatever, then he is allowed to do this and that. As we all know, roles are not enough to write an authorization system, because typically you want to do more fine-grained stuff, like: what's his purchase limit? How much can he put in his shopping cart? Things like that. And modeling that with roles just doesn't make any sense. These interfaces only give you binary decisions, yes or no, but if you want to do more, you basically had to write your own. And since we had these interfaces, people did that. They derived from these interfaces and added their own domain-specific functionality to them. Sometimes for good, sometimes for bad, because these things were not playing well with other frameworks and so on. So one goal of Microsoft was, after now 10 years of .NET, to re... well, I don't want to use the word re-imagine, but to rethink how these things could work. The other thing is that between 2002 and today, applications have changed. So even simple applications typically have the requirement that you have different types of clients: desktop clients, browser clients, mobile clients, and so on. These clients talk to services or applications, and the way they do that, from a security point of view, totally depends on the client. So the browser is limited to whatever HTTP has to offer. The desktop has much richer, well, at least more options for how to authenticate, by using more advanced stuff. And for all those guys that did mobile development, it's kind of a hack in between, and we're getting there to get a good, solid authentication model for mobile devices as well. The next thing is that these services typically talk to other services. So one very, very typical way of modeling such a system is called the trusted subsystem design, where basically Alice is in front of the browser. She talks to an application. The application talks to a service, and the service does some work. The only thing the service cares about is that it trusts the direct caller. So, if I'm called from this guy, I just do whatever he told me. There is another competing model; competing is the wrong word, complementary model, I should say, which is called the impersonation and delegation design. Here Alice sits in front of the browser. She talks to a server. The server talks to a service, but the service needs to know who this guy in front of the browser is, because it needs to do authorization based on who the ultimate client in the system is. Think of applications like SharePoint. You log in with a browser. You search for something. SharePoint goes to the SharePoint service. This goes to SQL Server. But you should only see the documents in the database which you are authorized to see. So we have to kind of move that identity all the way along, over the various hops in our system.
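For reference, the two legacy interfaces described at the start of this section have exactly this shape (they live in System.Security.Principal); this complete surface is why a yes-or-no role check is the only authorization question the framework could answer out of the box.

    // The full shape of the .NET 1.0-era abstractions, reproduced for reference.
    public interface IIdentity
    {
        string Name { get; }
        string AuthenticationType { get; }
        bool IsAuthenticated { get; }
    }

    public interface IPrincipal
    {
        IIdentity Identity { get; }
        bool IsInRole(string role);   // the only built-in authorization primitive
    }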
And obviously, the big thing is: what do we do with users who are not part of our internal network, of our internal security domain? What if we need to give partners or customers access to that application or to that service here? How can we authenticate them? And we certainly don't want to replicate accounts over and over again, because that creates a big management headache and so on. So how can we do that? And obviously, the big elephant in the room: the cloud. So let's say you're coming to your desk on Monday morning, and your boss was on a business-class trip over the weekend, reading this high-gloss paper in the airplane which said, like, the cloud is going to save costs. And he comes to you first thing in the morning and says, hey, we have to move that service to the cloud because it's cheaper. How do you authenticate your existing users when they are in the cloud? When the service is running in the cloud, which has by definition no connection to your home security system. And obviously, how would we give those partners access to that, and so on and so forth? So applications have become more complex, in the sense that the authentication requirements have changed. Obviously, with these really simple interfaces, where only the .Name property is something defined by Microsoft, this is going to be hard to implement if you have to model everything using custom code and so on. So Microsoft tried to solve that problem. They released something in 2006 called WCF, the Windows Communication Foundation, which basically had an answer to every problem in the universe. It was just hard to implement. And obviously, they created their own little thing. They dug out this theory of claims-based identity. They introduced something called a security token, which is basically just a way to serialize a credential over the wire. And some other things. The problem was that this was totally incompatible with everything we used to know before, like ASP.NET identity, IPrincipal and these things. So this didn't help too much, because if you were in this exotic situation where you had to provide a web application and a service, you had to write the code twice. And obviously, writing code twice with different APIs leads to problems, especially if it's WCF. So in 2009, at PDC actually, Microsoft released a library called the Windows Identity Foundation. And the idea of that was to unify all these security models again into something which can be understood by a normal human being. So they basically took the best ideas from here, from this area, and that is basically around these simple interfaces and this notion of Thread.CurrentPrincipal, which is like a well-defined slot where you can store the current user for the current operation, and these newer concepts of tokens and claims, and combined that into something which, well, combines these things. And they came up with something called IClaimsIdentity and IClaimsPrincipal, and they again use Thread.CurrentPrincipal. So this stuff was automatically compatible with ASP.NET, WCF and all the other frameworks in the .NET framework. So some of you raised their hand when I said, do you use WIF? And yeah, many people have used that. The problem with WIF was a little bit that it was a separate download. You had to deploy that stuff. And sometimes admins, you know, don't like to deploy new stuff, especially if they don't know what it is. So the really, really big thing that happened after the release of WIF is that they are now moving the WIF model into the core framework. And the rest of the talk is basically about how this looks. So let's have a quick look at what a claim is.
Because some of you may be new to that term. So if you look up, well, the first occurrence of that term, claim: I saw it in the WS-Security spec 1.0 from, I think, 2001. And it said something like, a claim is a statement about an entity made by some other entity. Okay, cool. Let's implement it. But one thing is really true: a claim is a statement. Okay, so it turns out you can describe any part of your system, typically users, with statements. Like: Bob is an administrator. Jim's email is jim at foo.com. Alice is allowed to add new customers. These are statements about a user. It turns out that statements you make about yourself are not as trustworthy as statements that others make about you. So typically there's also this notion of an issuer of a statement. Who says that? Who says Bob is an administrator? Do I trust this guy? And for example, Bob is an administrator could come from our Active Directory. Jim's email address is jim at foo.com, says our Exchange server. Alice is allowed to add new customers, says our authorization system. So it's always about who makes that statement, and do you trust the guy making that statement. So for example, if I would stand here and say, hey, by the way, I'm the new CTO of Microsoft, you probably wouldn't believe me. But if you saw Steve Ballmer coming in here, sweating and shouting, saying, hey, Dominick is the new CTO, then you would believe me. So it's all about, I mean, I'm not saying that you should necessarily trust everything that Steve Ballmer says. I'm just saying that in that case you would trust him. So they took this idea of a claim and said, okay, if you strictly think about it, a role is a claim as well. This here is a claim: Bob is an administrator, but the outcome of that question can only result in a true or false value. Whereas this here has a name and a value. And by just doing this simple thing, by allowing you to add values to a statement, you have much, much richer ways of modeling identity in your application. So they basically took their pride as architects, and all these technical fellows and everything, and they sat down in a dark room and said, okay, how can we model that in code? And after a while, they found a way to do that. It looks like this. Okay? It's a class called Claim. It has a type, a value, and an issuer. The type could be role, the value could be administrator, and the issuer could be Active Directory. The type could be email address, the value could be, you know, whatever, and the issuer could be our customer relationship management system, whatever. Something that you trust for that current business case. So the only thing that's really missing between having a class that can hold a statement, or a claim, and coupling that with a user, are new implementations of these IPrincipal and IIdentity interfaces. And what they actually did is, they didn't re-implement them, because, as you know, interfaces are not versionable. But what they did is, they created new implementations of that. And one is a class called ClaimsIdentity. And it derives from IIdentity. And what this basically holds is a little bit more stuff than that, but in essence it holds a collection of claims. Okay? So you can now describe a user not only with that here, but in addition with a collection of statements. Okay? And that is built in. Many people have rolled their own thing, which looks very much like this, but since they were all handwritten, they didn't work together when it comes to, you know, interop with other systems.
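As a small illustration of that type, value, issuer triple: the Claim class in .NET 4.5 has constructor overloads that also take a value type and an issuer. The issuer strings and the custom claim URI below are made up for this sketch.

    using System.Security.Claims;

    static class ClaimExamples
    {
        public static Claim[] Build()
        {
            // type, value
            var name = new Claim(ClaimTypes.Name, "bob");

            // type, value, value type, issuer
            var role = new Claim(ClaimTypes.Role, "administrator",
                                 ClaimValueTypes.String, "ACTIVE DIRECTORY");

            var limit = new Claim("http://myclaims/purchaselimit", "10000",
                                  ClaimValueTypes.Integer, "CRM SYSTEM");

            // Claim.Type, Claim.Value and Claim.Issuer give the triple back.
            return new[] { name, role, limit };
        }
    }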
So now it's built into the framework and we can just use it. Yeah? The same thing happened with ClaimsPrincipal, which is an implementation of IPrincipal and just contains a collection of ClaimsIdentity. So a ClaimsPrincipal is, in theory, just a container for multiple identities. Yeah? Where would we have that? Well, it's rarely happening that this collection contains more than one identity, but there are situations. For example, let's say you're having a web application which uses forms authentication, so the user logs in with his name and password. But in addition, you are securing the application with client certificates. So you have two types of identities: one is the name and one is the client certificate. And just to cater for that, this principal is really a container of identities. So that brings us to the radical change I talked about. It's not just that Microsoft now added these classes to the framework. By the way, they are in mscorlib now, which kind of tells you how serious they are about integrating claims into the system. It's not just about adding the WIF classes to some namespaces, even if it's mscorlib. They did a fundamental change. So this is how it looks from .NET 1.0 to .NET 4.0. We have this interface, IPrincipal. We have RolePrincipal, GenericPrincipal, and WindowsPrincipal. These were the three standard implementations of that interface in .NET. The moment you are going to .NET 4.5, it looks like this. You're seeing that they injected the ClaimsPrincipal class at the top of the inheritance hierarchy. So basically now, every application which uses these standard ones, like WCF or ASP.NET, is now dealing with principals deriving from ClaimsPrincipal, which gives you the claims functionality out of the box. Okay? So there's nothing you have to do extra, you just get it. And that is great news, I think, for everyone who's doing this type of work in .NET. The same happened for IIdentity. So we have GenericIdentity, FormsIdentity for ASP.NET, and WindowsIdentity for Windows authentication. And in .NET 4.5, it looks like this. We have ClaimsIdentity in between, and all these classes now derive from ClaimsIdentity. Again, you get the benefits of this claims-based model by default. Okay? So let's have a quick look, actually, at how that feels. So let's do this. And you can see there's a new namespace. Yay. If you are a security guy, you love that: System.Security.Claims. Okay? So it's really now a first-class citizen in the framework. And you know, a claim has a type and a value. Okay? So this basically says: the name of this user that I'm describing is Dominick. Okay? You can use arbitrary strings here, if you like. But when you think about federation and interoperating with other systems, there is also a set of standard claims, which are defined by OASIS. So there's a class called ClaimTypes. And you can see it has things like, you know, country, date of birth, email, gender, given name, home phone, mobile phone, and so on. Okay? So if you care about interop, or if you want to do SAML, which is a token type which I'll briefly talk about later on, you want to use these standard claims, because in SAML there are certain requirements. And if you look at that, you see it's actually much like an XML namespace URI. It says schemas.xmlsoap.org, identity claims, and name, for example. So claims typically don't come on their own. So let's do: claims equals new List of Claim. And do this: new Claim, ClaimTypes.Email, for example. Like this. Roles.
I mean, I'm not saying roles are useless. I'm just saying they have their place, but not everywhere. So there's also a standard claim type called role. So, something like this. And if you want to have a custom claim, you can do whatever you want, like myclaims/location or so. Yeah? And then you can take these claims and create an identity from that. So it's ClaimsIdentity. You pass in the claims. That is, by the way, an interesting piece here: if you pass in just the claims, you will have an identity that holds these claims, but the IsAuthenticated property will be false. Yeah? I'm just saying that because I ran into that and it took me some time to figure it out. So in other words, this model also supports the idea of attaching statements to anonymous users. Okay? So if you have users which are not authenticated yet in your system, you can add any information about them. Maybe their preferences, favorite color, whatever. And when they authenticate, you can migrate that information to some sort of store. Very much like the anonymous identifier module that they have in ASP.NET. So that would be an anonymous user, and when you add an authentication type, let's call it NDC, whatever, it's up to you, now this is an authenticated user in your system. Okay? And from that, you can create a principal, pass in the identity, and then you can store that principal in .NET's standard place where principals are stored, which is this. Okay? And because that is still working, because ultimately they derive from IPrincipal and this is of type IPrincipal, you can integrate that in existing applications like ASP.NET, WCF, and so on. Old applications will only see the old interfaces, but if you do a downcast to ClaimsPrincipal, you will get the benefits of the claims. Okay? So let's assume that this code here is some sort of plumbing code, you know, maybe our ASP.NET pipeline or our WCF plumbing. And if you have old code, what this code would basically do is something like var p = Thread.CurrentPrincipal, and then it could do something like Console.WriteLine, p.Identity.Name, just the usual stuff. As you can see, this still works. So you don't have to fear that the moment you're compiling your app against 4.5, everything breaks; they've been careful making sure that there's a backwards compatibility layer. So basically, what happens when I do the .Name is that they go through the claims collection and search for a name claim and return that as a string. And if you don't like that, you can also reassign that. So you can say the name claim type should be ClaimTypes.Email, for example, and then that is what you get whenever somebody does .Name on that principal. That has changed from WIF; in WIF you had to specify it here, I think: the name type would be ClaimTypes.Email and the role type would be ClaimTypes.Role. And when I run that, you see now that .Name returns the email address and not the name of the user. That is something that might be useful. Let's remove that. Likewise, you can do things like IsInRole, geek; so that's what your existing code might do, and you see that it is still working. Again, same idea: what they really do under the covers is, they go to the claims collection, search for the role claims and look if something matches the name geek. So it's an easy backwards compatibility layer.
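Since the code being typed in the demo is easy to lose in a transcript, here is an approximate reconstruction. The claim values and the email address are placeholders; the "NDC" authentication type and the remapping of the name and role claim types are as he describes them.

    using System;
    using System.Collections.Generic;
    using System.Security.Claims;
    using System.Threading;

    class ClaimsDemo
    {
        static void Main()
        {
            var claims = new List<Claim>
            {
                new Claim(ClaimTypes.Name, "dominick"),
                new Claim(ClaimTypes.Email, "dominick@example.com"),
                new Claim(ClaimTypes.Role, "geek"),
                new Claim("http://myclaims/location", "oslo")
            };

            // Passing an authentication type makes IsAuthenticated true; the extra
            // parameters remap which claim types back .Name and IsInRole.
            var id = new ClaimsIdentity(claims, "NDC", ClaimTypes.Email, ClaimTypes.Role);
            Thread.CurrentPrincipal = new ClaimsPrincipal(id);

            // Old-style code keeps working against IPrincipal / IIdentity:
            var p = Thread.CurrentPrincipal;
            Console.WriteLine("ID: " + p.Identity.Name);   // now returns the email claim
            Console.WriteLine(p.IsInRole("geek"));         // searches the role claims
        }
    }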
Let's say you're writing new code and you want to take advantage of that new stuff. In that case, what you had to do, or what you still have to do, is something like Thread.CurrentPrincipal, but now you're casting it to ClaimsPrincipal. And now, from that point on, you have all these extra methods they added, like FindAll, which gives you back all claims of a certain type, FindFirst, which gives you back the first claim of a certain type; you can add claims to it, and so on. And because it was so common to write this line of code here, they have basically done something like this: equals ClaimsPrincipal.Current. What that does under the covers is, it basically fetches the principal from Thread.CurrentPrincipal and casts it down to a ClaimsPrincipal. So that is like the shorthand API now. And then we can do something like: email equals p.FindFirst, and which type of claim are we looking for? ClaimTypes.Email, for example; that has a Value, and here we go with the email address. Okay. Also, for those people coming from WIF, they were used to doing all the operations on the identity and not on the principal. This is now discouraged in .NET 4.5. You've seen that these Find and FindAll methods are now on the principal itself, and they will iterate over all the contained identities in the container if you do that. So it is now recommended that you always work on the principal and not on the identity inside the principal, because the ClaimsPrincipal class knows what to do, and does the right thing. That is something that WIF people don't see at first. I didn't see it, but looking at the source code it became clear. Okay. So let's go back to the slides. Now, when we look at the built-in classes, for example, in that case, WindowsIdentity, you'll see that now, in your application, you have the choice of at which level of abstraction you want to work with these classes. So if it's an application that knows nothing about claims and all that stuff, then you can still use the Name property, the AuthenticationType and the IsAuthenticated property. If you downcast to ClaimsIdentity, you get all the goodness of the claims and the FindAll, FindFirst, HasClaim and so on methods. And if you really want to work with the domain-specific implementation, like WindowsIdentity, which gives you things like impersonation, access to the raw Win32 handle that wraps the token and so on, you can downcast to that; but in general, you don't need access to these specific APIs. Well, okay. Good. Enough of that. In general, it's recommended that you program against the ClaimsIdentity or ClaimsPrincipal classes, right? Because the good thing is, now that is totally independent of your authentication method in your application, right? If you're writing code against a Windows-authenticated application and someone chooses to switch that to forms, you don't care, because this class will always stay the same and you have all the information in there, okay? So there's no need to special-case that anymore. Good. So that's where we are very soon. Visual Studio 2012, .NET 4.5. We have all principals derived from ClaimsPrincipal. We have backwards compatibility. We have this new namespace, System.Security.Claims, which is in mscorlib as I said, and we have an assembly called System.IdentityModel, which existed before but is now intended to contain all the structural classes that we had in WIF before. So now that we have this standard claims-based infrastructure in our code, this gives Microsoft the chance to put standard reusable services on top of that, okay?
Same idea as with the original IPrincipal and IIdentity. So, for those who are doing ASP.NET, you know the authorization element in the config: authorization, deny users equals question mark, things like that. What that really did under the covers was calling IsAuthenticated on the identity, or calling the IsInRole method on the principal. And same idea here. We now have a much, much richer base class, yeah? So we can put much, much richer support on top of that. And for me, one of the biggest things is, they now support arbitrary, no, well, arbitrary is wrong, but a good chunk of standard credential types, and know how to turn these types into a claims principal, okay? So if you're writing an application that uses Windows authentication or forms authentication or HTTP basic authentication or SSL or WS-Security or SAML, whatever, you don't care, because in your code you are programming against the collection of claims, and that's it, okay? And if someone decides to change the authentication method, you don't care at all, because you're programming against claims and not against specific, you know, details of these authentication protocols or credential types, okay? This makes it really, really easy to switch the authentication method. Or maybe you are in a situation like an ISV, for example, where you write standard software and you don't know beforehand what type of security infrastructure our customers will use. Some use Novell, some use IBM, some use Microsoft, some use Windows for Workgroups, yeah? Things like that. The good thing is, you don't care anymore, because it's all abstracted away by the idea that a user can be described by claims, and that's all you care about. How is this new credential support implemented? Basically by a class called SecurityTokenHandler. So a security token is just an abstract idea of a credential, yeah, of varying complexity if you like. So we have really, really simple credentials, like a username credential, yeah? We have more complex ones, like for example Kerberos, RSA, SAML 1.1, SAML 2, X.509 and so on, and these token handlers basically provide three essential methods. One is called ReadToken, which turns that credential from the wire into a .NET type of security token. One is WriteToken, for the way out, if you need that. But the really most important one you care about is called ValidateToken, and here you pass in a security token and you get back a claims identity, okay? That's how they implement the support for all of these credential types. So when you have to deal with these guys, there's always a way to turn that into claims, and from that point on your application can program against claims, yeah? Good, let's do this. Let's get rid of that. So what I have here is a standard ASP.NET application that also has a WCF service. It's the standard MVC template, and as you can see in that config, I have configured it for Windows authentication, so the standard stuff really. And I have basically a view which is called Identity, and what I'm doing in this view is just printing out everything we know about the current user: the name of the user, whether he's authenticated, the authentication type; and scrolling down, we look at the .NET types that implement these principals; and we scroll down here, and here we iterate over the claims collection and just show the claims on the screen, so we basically know what we're dealing with.
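Before looking at the demo output: the handlers that ship with 4.5 are more involved than this, but the essential shape he described a moment ago (read the token, validate it, hand back claims) can be sketched roughly like this. The token type identifier and the password check are placeholders; a real handler would validate against a membership store or directory.

    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.IdentityModel.Tokens;
    using System.Security.Claims;

    public class SimpleUserNameTokenHandler : SecurityTokenHandler
    {
        public override Type TokenType
        {
            get { return typeof(UserNameSecurityToken); }
        }

        public override bool CanValidateToken
        {
            get { return true; }
        }

        public override string[] GetTokenTypeIdentifiers()
        {
            // Made-up identifier, just for this sketch.
            return new[] { "urn:example:tokens:username" };
        }

        public override ReadOnlyCollection<ClaimsIdentity> ValidateToken(SecurityToken token)
        {
            var userName = (UserNameSecurityToken)token;

            // Placeholder credential check.
            if (userName.Password != "secret")
            {
                throw new SecurityTokenValidationException("Invalid credentials");
            }

            var identity = new ClaimsIdentity(
                new[] { new Claim(ClaimTypes.Name, userName.UserName) },
                "SimpleUserName");

            return new ReadOnlyCollection<ClaimsIdentity>(
                new List<ClaimsIdentity> { identity });
        }
    }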
Basically that's my username, I'm authenticated, that's the authentication type — that's the domain authentication. You see that the principal type is WindowsPrincipal and the base type of that is now ClaimsPrincipal, and the identity type is WindowsIdentity and the base class is ClaimsIdentity. Here we have the claims. What are these claims? I'm sure you've all seen them before. That is basically what's inside the Windows security token, so when I log on to Active Directory I get a token, and what these are here are the Active Directory groups I'm a member of. Okay, so if I run whoami /groups (formatted as a list), you basically see that whoami is a tool which opens up the Windows token and looks at its contents, and you can see that these are basically the contents of the Windows token. These are the groups that I'm a member of. Let's switch that to forms authentication. Run it again. If I now click here, I get my login dialog, log in, and you now see it's a GenericPrincipal deriving from ClaimsPrincipal, it's a FormsIdentity deriving from ClaimsIdentity, and obviously the only thing the system can know about me is my user name. Okay, so that's the only claim that gets generated here. Now these claims are not really directly useful for your application, right? I mean, who cares that I'm in the group called S-1-5-21-(some long number). So we need a way basically to take these raw claims that resulted from the conversion from the security token to the claims collection and turn them into something which is useful for our application. And for that, again, .NET provides a standard pipeline. So basically we are at this point here where a request came into our application. The token handler deserialized it, validated it, turned it into a claims principal. And now there's a process called claims transformation. And the idea of claims transformation is that you pass in the claims principal that fell out of here and you return another claims principal, which is the one that your application code will see. So you basically take these Windows groups, maybe you do some querying on that, and turn that into your purchase-limit claim or into your email-address claim, things like that. And there's a standard class for that called the ClaimsAuthenticationManager, which is now part of the .NET Framework. And you see the idea is very simple. There's an Authenticate method. You pass in the incoming principal and you return another principal. So that is basically your ASP.NET role manager, if you did that, on steroids, yeah? Because the role manager could only do roles, but now you can do arbitrary information within that piece of code. So how do we enable that? You see that there's also a new configuration section. It's called system.identityModel. And there's an entry called claimsAuthenticationManager, and here's my implementation of that. And what this does is basically it simulates something like a database lookup and fetches some information about the user and turns that into claims. So our application will see these claims now and not the raw claims coming from the token. So let's run that. Go here. Log in. And you now can see that my code under the covers did preserve the login name, but looked up in the database what the email address of this guy is, yeah? And added another claim, which is the current date-time. And I'm only adding that to show you something. When I refresh now, you can see that our claims transformation code basically runs on every request. Okay?
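A minimal sketch of such a claims transformer, assuming the setup just described — the class name, claim types and the fake "database lookup" are purely illustrative:

```csharp
using System;
using System.Security.Claims;

// Illustrative sketch of a claims transformer.
public class MyClaimsTransformer : ClaimsAuthenticationManager
{
    public override ClaimsPrincipal Authenticate(
        string resourceName, ClaimsPrincipal incomingPrincipal)
    {
        if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
        {
            return incomingPrincipal;   // anonymous - nothing to transform
        }

        string name = incomingPrincipal.Identity.Name;

        // Build the identity the *application* will see.
        var appIdentity = new ClaimsIdentity("ApplicationAuthentication");
        appIdentity.AddClaim(new Claim(ClaimTypes.Name, name));
        appIdentity.AddClaim(new Claim(ClaimTypes.Email, LookupEmail(name)));
        appIdentity.AddClaim(new Claim("http://myapp/claims/currenttime",
                                       DateTime.Now.ToString("HH:mm")));

        return new ClaimsPrincipal(appIdentity);
    }

    private static string LookupEmail(string name)
    {
        // stand-in for a real database lookup
        return "someone@example.com";
    }
}
```

That class then gets wired up in web.config, roughly as a claimsAuthenticationManager element with the type name of the transformer inside the system.identityModel / identityConfiguration section.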
That may be exactly what you want, yeah? Or the opposite, depending on your application, yeah? So if you're doing real database lookups in here, then maybe you don't want to make that round trip on every single, you know, request, yeah? If you want to base the claims on the location where the user is going to, then maybe you do want to be called on every single request, yeah? So that's up to you. The good thing is that .NET includes a session system. So we already have this notion of session in .NET — for example, the Session object in ASP.NET, which is just, you know, a general-purpose property bag if you want to store information which is not security related. So Session is used for everything besides security. And if you use it for security, then write me an email. And we had the forms authentication cookie, which could only hold the name of the user, yeah? And we had this role manager thing, which was designed to hold the roles of that user. So we ended up with, you know, two cookies at least. And for all the other information we needed as well, we had to find our own little, you know, state-preserving mechanism, yeah? With the stuff in .NET 4.5, there's a new thing called the session security token. And the idea of a session security token is that it's a serialized claims principal. So what you can do is you can build up your claims principal with all these claims like name, email, you know, roles and so on, and then serialize that to a cookie. And from that point on, this identity round-trips between the client and your server. And you never have to make those database lookups again after you have established that session. Okay? So it's kind of a replacement for all the other mechanisms. Not replacing them, because it's obviously all backwards compatible, but it's a new way of doing it. And in my opinion, a much better way. So basically you see you can create a token here. You pass in the principal you want to cache, if you like. You pass in how long that cookie — or the token, actually — should be valid. And then you just say WriteSessionTokenToCookie. And from that point on, you get this principal. The nice thing is that there are many more features around that. For example, let's say you want to minimize the data on the wire, okay? So maybe that principal has, like, you know, a number of claims and you don't want to pay the tax of round-tripping that principal every time, going back and forth. So what you can do is you can, for example, store the principal on the server side, just issue an ID, and then .NET takes care of taking the ID and coupling it back with the cached principal on your server side. Okay? And the caching mechanism is extensible. So if you have web farm situations, for example, you can store it in a distributed cache. Yeah. So we did that for a customer using Velocity, or what is now called AppFabric Caching, I think. Yeah. And the protection mechanism of that cookie is totally extensible. So by default we are using DPAPI, which is the built-in crypto mechanism in Windows. In 4.5, they support the machine key to protect that. Or you can protect it with RSA keys. Again, that's interesting for web farm scenarios. And it's all built in. You don't have to, you know, come up with your own framework for that. And by the way, another by-product of that is that the same mechanism also works for WCF. So you don't have to write that code twice — once for the web app and once for WCF. So let's go here.
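What that "create a token and write it to a cookie" step looks like in code — a minimal sketch, assuming the session authentication module is registered as an HTTP module in the application; the helper class and the eight-hour lifetime are just illustrative:

```csharp
using System;
using System.IdentityModel.Services;
using System.IdentityModel.Tokens;
using System.Security.Claims;

// Illustrative helper - assumes the SessionAuthenticationModule is
// registered as an HTTP module for this application.
public static class SessionHelper
{
    public static void EstablishSession(ClaimsPrincipal principal)
    {
        // Wrap the (already transformed) principal in a session token.
        var token = new SessionSecurityToken(principal, TimeSpan.FromHours(8));

        // Optional: keep the payload server-side and round-trip only an ID.
        // token.IsReferenceMode = true;

        // Serialize, protect and write it out as a cookie.
        FederatedAuthentication.SessionAuthenticationModule
                               .WriteSessionTokenToCookie(token);
    }
}
```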
Let's go to my claims transformer and just say establish session. And the code for the session is really just what I showed on the slide. It's wrapping the principal in a token and then writing out the token. And if I run that now. And I log in. And you see now it's 15:44. And it just stays the same, because we're now caching this thing. There's another thing I want to make you aware of, and that is around claims authorization, but we don't have the time to go through all the details here. But the whole idea is to also provide a new way to do authorization in your application. So many, many customers I have like to write code like this. Let's go to the controller. Whatever. Like in here, something like if user — what? Let's do this. If Thread.CurrentPrincipal — what's going on with my IntelliSense here? Well, whatever. If User.IsInRole(whatever), do some business logic. And the fundamental problem with that code is that you're tightly coupling security infrastructure with your business logic. And you can be really, really sure that this requirement will change over time. Maybe it's another role that needs access as well. Maybe there's another guy who isn't in a role — he needs access to that as well. And whenever these things occur, you have to do major refactoring of your application, go through all the code, have a look at where are my IsInRole calls and so on and so forth. Make sure you don't miss anything. Retest the whole application, redeploy, restage, and so on and so forth. So this here is not encouraged. It never was, but the framework with its APIs kind of led you to doing that. In .NET 4.5, we have this new class called ClaimsAuthorizationManager. And the idea is that it's a separate class, typically living in a separate assembly, and that there's a method called CheckAccess. And basically you call that method from within your code (both sides of that are sketched after this section). And the way this works is — let me quickly show you that. You do something like ClaimsAuthorizationManager.Check — sorry, ClaimsPrincipalPermission.CheckAccess. Yeah, something is wrong here. And what you provide is not details about who is allowed to do that, but rather details about what you're currently doing. Let's say this method is adding a customer. You would say Add, Customer. And then .NET would call your ClaimsAuthorizationManager, pass in these strings, Add and Customer, and in addition inject the current claims principal of the user who is trying to do that. So your business logic code is free of security decisions. You're only saying, hey, I'm doing this — someone else must decide if that's okay. And then you would implement that security logic in your ClaimsAuthorizationManager class. And if you do it properly, like separating it from your code, you can version and deploy these things independently. You can update your authorization policy independently of your application, and vice versa. It's not that one big chunk that you have to deploy just because someone decided a role name has to change, things like that. The last thing I want to show you is — well, not the last thing, but the last thing in this demo — is WCF. So we have a WCF service here. And let's quickly look at the implementation of that. So all this thing is doing is basically it again goes through the claims collection and sends it back to the client so we can see that. So the same idea as our web page, but now as a WCF service. So if I run the client here, you can see that what WCF sees is exactly the same thing as your web application sees.
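Picking that claims authorization idea up in code — a rough sketch of the two halves. The resource/action names, the department claim and the class names are purely illustrative; CheckAccess throws a SecurityException when the authorization manager returns false:

```csharp
using System.IdentityModel.Services;
using System.Linq;
using System.Security.Claims;

// Business code only announces *what* it is doing...
public class CustomerService
{
    public void AddCustomer(/* Customer customer */)
    {
        // resource = "Customer", action = "Add" - names are illustrative.
        // Throws a SecurityException if the authorization manager says no.
        ClaimsPrincipalPermission.CheckAccess("Customer", "Add");

        // ...business logic...
    }
}

// ...while the decision lives in a separate, independently deployable class.
public class MyAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        string resource = context.Resource.First().Value;
        string action   = context.Action.First().Value;

        if (resource == "Customer" && action == "Add")
        {
            // purely illustrative policy
            return context.Principal.HasClaim("http://myapp/claims/department", "Sales");
        }

        return false;
    }
}
```

The manager is registered in the same system.identityModel / identityConfiguration section, via a claimsAuthorizationManager element, next to the claims authentication manager.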
It's the name of the user, it's his email address, and this date-time claim. So by design and by default, they share that code. So you're writing your claims transformation logic once and you can reuse it between your apps and your services. And that is good, good stuff. Okay, so the last thing, the twelve-minute run-through of external authentication. So what you've seen so far is stuff that everybody can use immediately because it's now baked into the framework. Another thing that the world is moving to is actually removing that authentication code — like my forms login and so on — from the actual application and moving it to a separate piece of infrastructure called a security token service. So the idea is that you don't have to implement authentication in your application anymore. You implement it here, and this thing does only authentication, and that is just the business logic. You can separate authentication from the application, and that has the by-product, obviously, that you can now share that service between multiple applications, giving you patterns like single sign-on, for example. So the whole idea is now that whenever a client tries to access an application, you first get redirected to a token service, you authenticate with this guy, you get back a token, and the token gets sent to the application. And since .NET has the support for all these token types, what do you think you will get in your application? Claims, that's it, right? There's no difference anymore between this kind of more complicated authentication flow versus your built-in forms login page. From your point of view as a developer, you just write against the claims collection, right? Now the nice benefit of that is obviously, since you are bringing your identity with you, with the request, it doesn't matter anymore where this application is hosted. It could be on your intranet, it could be at a hoster, it could be in the cloud — doesn't matter anymore. There's no need for this application to call back from here to there, which obviously wouldn't work because there are firewalls in between and so on. So this enables scenarios like single sign-on from your local desktop to a cloud-hosted application. Again, WIF had that for the last two years; now it's part of .NET 4.5 and you can just use it. The token we typically use for these types of scenarios is called SAML, the Security Assertion Markup Language, which is XML. And you can see basically what this thing contains: it contains information about the user issued by this token service. We have a user ID, we have a name, we have a department and so on. And you can see attribute name, attribute value — this kind of nicely matches to claim type and claim value, obviously. Where's the issuer? I said earlier, we only trust statements if we trust the issuer. The issuer is here, because this thing is digitally signed, which gives the recipient the confidence that it hasn't been changed after the issuer issued it. And you can figure out where it came from, and if you trust that issuer, you trust the information that's inside here and you're good to go. So there are two main mechanisms these days to do these types of authentication flows. One is called WS-Federation; that is made for web applications. So the idea is someone tries to access your application. This guy redirects you back to the security token service, and this here is basically part of the WS-Federation protocol. It says, hey, this guy wants to sign in, and the token you emit is destined for this application.
That's typically the physical address of this guy here, okay? Why do we do that? Well, first of all, the security token service may need that information because it wants to emit different claims for different applications. So one application might only care about the email address; the other application might care about the manager of this guy. So it can distinguish, based on that address here, what type of information to pull. And the other thing is the nice trick of how this guy here can send that token back to the application. And the way they're doing that — it's quite funny — they have a form with a POST, and the action of that POST is this address here, okay? And inside of that form, here's the SAML token I just showed you, as a form field, and this piece of JavaScript here triggers the submit. So basically this form travels back to here, yeah? Here, .NET 4.5 — or WIF — is running; it extracts the SAML token from the POST and turns it into a claims principal. Okay? So let me quickly show you that. So let's close that down and open this. So this application is just the same, but now we are not doing our own authentication anymore. We are using a security token service to do the authentication. In that case, we are using a product from Microsoft called Active Directory Federation Services, ADFS version 2, which is part of the Windows Server license. So it comes with this, you know, admin MMC console thing. Here you can configure the applications that you want to supply with tokens. It should be somewhere here. Yeah. So this is my ADFS demo app. So that basically means now that this application can ask for tokens to be issued. Again, we don't have the time to go into much detail here, but just to show you how it basically works: there's another new configuration section called system.identityModel.services. And here you can specify where on the network the authentication server lives (the sketch below shows roughly what that section looks like). And that is kind of the iteration of the forms authentication configuration element where you specified the location of the logon page — you know, like here in web.config, you said: here's my logon page. But this was always restricted; that page had to be application-local, okay? So you couldn't put an absolute URL in here. With this new system in 4.5, you can see the logon page can live wherever you want, okay? And that is the redirect being done to the logon page. This is the name of the application that gets sent to the token service. And after that, we come back with a token and we get the information that the token service has put into this token. Yeah. We can actually watch it at the same time, so you basically see what's going on. So if we run that. And I go to the identity link. Now something will happen, hopefully very quickly. Maybe not. Yeah. Because the VM is freshly booted. But what happens is we go to ADFS. ADFS does Windows integrated authentication in that case. And then we go back to the application. So I often call it the click protocol, because you see the clicks in the browser. And whenever you did Live ID authentication, you were seeing exactly the same thing, right? It does some redirects back and forth in the browser. Yeah. That was pretty fast. And now you see that we now get the claims that were emitted by ADFS. In that case, it's just my user name. And ADFS gave me some information about how the user authenticated and when the user authenticated as well. So now, for example, you can go to ADFS and say, hey, you know what?
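For reference, the shape of that configuration section — a hedged sketch with placeholder URLs; the section itself also has to be declared under configSections, and the WS-Federation and session HTTP modules have to be registered for the redirects to actually happen:

```xml
<!-- Sketch only: the issuer and realm URLs are placeholders. -->
<system.identityModel.services>
  <federationConfiguration>
    <wsFederation passiveRedirectEnabled="true"
                  issuer="https://adfs.example.com/adfs/ls/"
                  realm="https://myapp.example.com/"
                  requireHttps="true" />
    <cookieHandler requireSsl="true" />
  </federationConfiguration>
</system.identityModel.services>
```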
This application, it needs some information from our Active Directory, for example. So we can say, you know what? Can you please pull the email address from AD, turn it into a claim and send it along. And maybe you want to do something like "good morning, Dominick" or so. So we need the first name of the user as well. Again, that is stored in Active Directory. So we can say, give me the given name and turn that into a given-name claim. Here it is. Okay? So the next time the user logs on to the application, the application will see these new claims. Yep. Here's my email address. Here's my first name. No need to ever, ever, ever use LDAP to connect back to AD. I hope that some people can appreciate that. Oh, and by the way, same idea again. We have the same WCF service — this time the federated WCF service. It's exactly the same service. We just use a different binding in WCF called the WS2007FederationHttpBinding. Besides that, everything stayed the same. So when I run the client, again, you can see we get the same claims in WCF: my Windows account name, my email address, my given name, the authentication method and the authentication time. Okay? Again, they are sharing the code. They are sharing the logic, because they are in the same application. No need to duplicate that stuff anymore. Cool. So tomorrow I'm doing a talk about Web API, which is Microsoft's new REST/HTTP kind of framework. We can do the same stuff in there as well. So if you're interested in that, come to my talk tomorrow. I also have more books then. The last thing to mention, this security token service thing: there are a number of products which work directly with .NET 4.5. Obviously Microsoft's one; there's IBM Tivoli Federation Manager, Oracle Identity Manager, there's PingFederate. They all implement the same set of protocols. So they are completely compatible with .NET, which is great news. Basically you can write, let's say, a .NET application which can be accessed by Java clients, because they support WS-Trust or WS-Federation. Or you can write your own token service — and again, all the classes are inside of .NET 4.5. Actually, ADFS is written using WIF. It's not trivial. Be aware that you're building critical security infrastructure if you're doing that. I have an open-source token service on CodePlex, which is kind of popular. So that's a good starting point if you want to go down that route. And the last thing: some things I found when migrating my code from 4.0 to 4.5. So if you're doing plain .NET today, you actually don't have to do anything. Things just continue to work, and then you start downcasting wherever it makes sense for you to get access to the claims collection and can do extra stuff. Unfortunately, for those people that use WIF, they have to pay the early adopter tax. So there are lots of breaking changes between WIF and 4.5. Nothing fundamental, but many, many small changes. So basically you will be frustrated the first time you go to your project properties, switch to 4.5 and compile. You will have a screen full of red exclamation marks. The thing is, nothing has changed from the philosophical point of view. It's just different namespaces, different class names, different names of configuration sections and so on. So if you don't feel like making the move right now: WIF is now part of Windows, meaning on Windows 7 and 2008 you can just install it, and on Windows 8 and Windows Server 2012 it's part of the Add/Remove Windows Features dialog. So you can just click that and you have WIF.
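As a rough orientation — written from memory and certainly not exhaustive — these are the namespace moves you typically hit during that port:

```csharp
// Rough "before / after" of the most common namespace moves when porting WIF
// (Microsoft.IdentityModel.dll) code to .NET 4.5 - not an exhaustive list.

// WIF 1.0 on .NET 4.0:
// using Microsoft.IdentityModel.Claims;    // IClaimsPrincipal, IClaimsIdentity, ...
// using Microsoft.IdentityModel.Tokens;    // token handlers
// using Microsoft.IdentityModel.Web;       // WS-Federation / session modules

// .NET 4.5, built in:
using System.Security.Claims;          // ClaimsPrincipal, ClaimsIdentity, Claim
using System.IdentityModel.Tokens;     // SecurityTokenHandler and friends
using System.IdentityModel.Services;   // WSFederationAuthenticationModule, SessionAuthenticationModule
```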
So you can run that side by side and still use your old WIF code, even moving forward to new operating systems and newer frameworks. So that's up to you. I would recommend making the move. For my token service, which is a fairly substantial piece of code in the sense of using WIF APIs, it took three hours to port the thing to 4.5, so not really catastrophic. Okay, so that's it, basically. Obviously, I could talk all day about that, or multiple days, maybe. But I guess the main points are clear: claims-based identity is now part of .NET 4.5. It's rolled directly into the new operating systems like Windows 8 and 2012. And you have full access to that — turning tokens into claims, claims transformation, and claims-based authorization. So with that, I'm done. So if you have any questions, I'll be around. If you want a book, take one. And enjoy the rest of the conference. Thank you.
Ten years after the release of .NET 1.0 Microsoft decided to revamp the built-in infrastructure for authentication & authorization. All identities in .NET are now modeled using the claims-based paradigm, and token based authentication (which is also the basis for federation) is now a first class citizen in the framework. This has been achieved by tightly integrating the Windows Identity Foundation (WIF) into the core class library. Since these changes have been made in the base classes, all application level frameworks like ASP.NET, WCF and WPF inherit these new features. Learn what these new mechanisms have to offer, what that means to existing applications and how you migrate either from stock .NET or WIF enabled applications.
10.5446/51130 (DOI)
Pretty good. 20 past four in the afternoon, you're still going. Well done everybody. Anyone feeling tired yet? Anyone's head hurt yet? You know, it's probably all right. We're on... Okay, that sounded good. Was that me? We're on day one, so this is probably quite good. Okay, it's here, isn't it? Yeah, it's right there. Okay, all right, not in that spot. Yeah, so we're gonna... We're good to go? All righty then. Yeah, hello everyone. Yay, very good. I love that bit we have. This is not gonna work, is it? What am I doing wrong, by the way? I'll keep going over here. I will try not to dance around. So the bit of the stage that I can walk on is getting smaller and smaller by the minute. In fact, I'm going out the door now. Right, so here we are. Today I'm going to be talking to you about mind control, which just gives me a thrill to say those words. If I'd known when I was a kid in school, sitting in class, that I was going to grow up to say the words "today I'd like to talk to you about mind control", then I really would have paid attention. That would have been a great idea. So I'm gonna spend the next hour talking about this. This is an Emotiv EPOC neuro headset. It is going to read my brain waves and I'm gonna control my computer with it, and we are gonna have fun doing it. That's it, basically. I'm supposed to, like, give a synopsis of my presentation. That was it. That's the synopsis. I put on a headset and I read my brain waves and then you can go in an hour's time. You will be free to go. So yeah, this is me. I didn't walk across the stage here. Here we go. No, yeah. So that's me there, Guy Smith-Ferrier. You've got my email address, guy@guysmithferrier.com. There's my Twitter name, @GuySmithFerrier. I've got a website, guysmithferrier.com. There's a clue. There's really, there's a kind of a theme I've got going here with the Guy Smith-Ferrier bit. So all you pretty much need from this presentation is my website, guysmithferrier.com. If you can get a Wi-Fi connection right now, which I'm sure you can, you can download the slides and the source code for this presentation right now. And that's pretty much it. I need to do a disclaimer before we start. I do not work for Emotiv. I do not own shares in Emotiv. I do not sell products related to Emotiv. I stand to gain nothing by this presentation. I was told that this headset existed and I thought, I've got to buy one. I must own one of these devices. I must see: is this actually true? Can we do this? Like, today? You're welcome. Come in. Come in. Come in. And that's why I bought it. I honestly am not trying to sell this to you at all, but for the next hour, I'm going to be talking about the Emotiv EPOC neuro headset. I'm sorry if you feel that's a sales pitch. Basically, this is the device I've got. There are other devices out there. There's a MindPlay. There's a NeuroSky. There's plenty of them out there. I don't have those devices. This one's a good one. So that's the one I'm using. Yep. Coming to that. Great question. How much is it? It's a tease. I'm making sure that you stay for the next three minutes, so we get to that point. So I'll take questions as we go. You need to kind of butt in, because I talk quickly and I go quickly and I've got a whole bunch of stuff to cover in the next hour. So you need to, like, butt in and shout out. So here's what we're going to do. We're going to use the... I'm going to put the Emotiv EPOC neuro headset on my head and you're going to see how it works.
That's like, that's the first demo that I'm going to do. Next, we're going to get it to read my facial expressions. So this thing reads brain waves, EEG data and EMG data. That's electromyography data. And it's going to use that to read my facial expressions. Then I'm going to take it off. We're going to have a look at what the headset actually is. We're going to have a look at how much it costs. Thank you for coming along today. And then we're going to use it to read my emotions. Now, this is the only demo that I can't do live. This is a canned demo. I hope you're going to appreciate here: I am not such a Jedi mind master that I can go, right, now watch me being happy. Now watch me being sad, happy, sad, happy, sad, happy, sad. I can't do that. So I recorded one earlier when I was experiencing different emotions and you're going to have to watch the recorded version. This is a demo I'm not doing. This is the one which talks about headset movement. In the back of this headset, there is a gyro. And you can use that as you move your head around like this — up, down, left, right — it tracks the movement of your head. It's just a gyro. There's no super mind-power stuff going on here. You've probably seen gyros before. So I'm not doing that demo. The point of it would be you could have a mouse cursor on the screen and you can control the mouse cursor moving around by moving your head. There you go. That's the end of that subject. I'm not doing it anymore. The last demo that I'll do — actually no, it's not the last demo. The last subject that I'll cover is something called the Cognitiv suite, which reads conscious thought. So my intention to perform an action is read by this headset. And we will see later how successful or otherwise this might be. I find that demo particularly difficult to do when I'm standing in front of 100-odd people here. When I'm at home in my study, when I'm by myself, that's a pretty easy demo to do. But when you guys are watching me — in fact, when we get to that demo, if you could just all turn around, that would make my life so much better. Then we'll look at the expressive API. This thing has an API. It's got a managed API and an unmanaged API. So we can read the data coming out of this thing using C#. And that's what I'm going to do. And then lastly — no, not quite lastly — there's something called the EmoComposer and the EmoKey. We'll take a look at that. And then if we have time, I'll look at some of the applications which are available for this headset. Good. Okay. That's it. Let's get going. Let's start off by putting this thing on. Now, the people at the front of the room now get an excellent view of the bit where I'm going really bald. Thank you. Right. So this is the EPOC neuro headset. Let's just take that one there. What I'm loading up here is something called the control panel. I'm going to log in as me. This is a preset version of me. And at the back of this headset, there's a little button; I can turn it on. And this should start to send data out — this should start sending data back any moment now. Go on. Okay. This worked like five minutes ago. Okay. That's... I had to see... I knew — there was no doubt in my mind that this would work. So this is picking up — you can see here this thing is green and yellow and the yellow ones are turning green. This is showing me the quality of the reception of each one of these sensors.
So on my head, I don't know how much you can see here, but on my head, I have this headset, which has got lots of different sensors on it. It's got 16 different sensors scattered across my head. Now, let me — I don't ask many questions of the audience because I don't find that a particularly useful thing to do. But I think this is my one question. Is anyone here either a neuroscientist or has a background in neuroscience or brain-computer interfaces? That was no one. Okay. So for the record, everything that I say on the subject of neuroscience is exactly true. I'm not making it up as I go along. It's exactly true. So there are 16 biopotential sensors on my head here. These are not just randomly placed. They're placed at specific places on my head. So if I touch — although you can't see it — if I touch the two at the back, these ones... in fact, if I gently go across here, that's an O1. That's an O2. These are placed on my occipital lobe, which is used for processing visual information. These ones at the front here, that's an AF3 and an AF4. So AF3, AF4; that's a P7 and a P8. So the AF3 and AF4 are for my frontal lobe, the P7, P8 the parietal lobe, T7 here, T8 here the temporal lobe, and so on and so on. So if we talk to a neuroscientist and say the data coming out of the O1 spiked at this point or did something or other at this point, a neuroscientist goes, oh, I know what O1 is. It's not an Emotiv designation. This is, in the world of neuroscience, a known location, and so are the T7 and the T8. So these are known places. I was deliberately talking there because I wanted to wait for all of these to go green. The longer you wear this headset, the better the contacts get. So I know now that, apart from the ones going orange, I've got a reasonable contact on my head here. Okay, so let's go to the first thing I'm going to look at here, which is called the Expressiv suite. Now, this is reading — this is still reading my brain waves. There's no camera on my face here, apart from the thing recording us today. This is reading electrical signals coming off my head. As I said earlier, it reads EEG and EMG. The EMG is electromyography information, and that's what you generate when you use the muscles on your face. That's the data that you are generating. So it's still brain waves being picked up by this headset here. Now, as I move the muscles on my face, this is an accurate representation of me here, as you can clearly see. This is how I look in the evening when there's no one around, it's just me by myself. So what I'm going to try and do is I'm going to try and get this thing to react to exactly what I'm doing. So you will have to simultaneously look at my face and also look at this face over here and see, is it matching what I am doing? I will say up front, I am not very good at smiling on demand. When someone says something funny, I smile, but to do that, go on smile kind of thing, I can't do it. So it comes out as a real kind of nasty smile. So let's try and make this — we'll start off, it's already smiling. So let's not do the smile. Let's start off with the clenched teeth and the growl. See? See it growl. See? It's growling. Let's go back to the smile. Okay, let's do the blinking thing. Now, for the blinking thing, watch the top line at the top there. That would be the top line at the top. I probably didn't need to say that. Okay, so I'm going to try and make this thing blink. Yeah, I'm deliberately blinking all the time.
So hopefully, if you were able to see my eyes and this thing at the same time, hopefully you could see there was a fairly clear correlation between me blinking and this thing registering that I was blinking. What else can we do here? Let's do look left, look right. Okay, so let's try: I'm looking left. Let's try look right. Now, this is a demo where I can't see what it's doing. I'm looking right. I can't look right and then go, where are you looking? It doesn't work that way. So that was a look right. That's a look right. I don't know, you'll have to tell me: was this person looking left or right when I was doing that? All right, I can normally get this to work. So let's try it again. One more time. Okay, yeah, see, I did manage to get it to work that time. So this thing's reading my facial expressions. Before I explain why that's interesting and useful, let me just do one more demo. So I'm going to do a similar kind of thing again. But this time — there we go. Where did you... there you go. So this thing comes in different versions. The version that you just saw a moment ago was the research headset. I'm going to use the cheap version. And as you can clearly see, this one's a lot cheaper. This is me during the daytime — look at that strong jawline. That's clearly me. So I've got this guy here. I'm going to make a couple of changes here. Let me just bring up Word. There's Microsoft Word. And so you can see this, let's bump up the font a bit. So what can I do with this? Given this thing can recognize my facial expressions, what can I do? Let's take this smile thing over here. I can pump keystrokes into the keyboard buffer when it detects me doing something. So you can see here it says send specific keystrokes. I could pump a keystroke into the keyboard buffer. I could also apply some hotkeys, so Ctrl+Right Arrow, something like that. Or I could send a mouse click. I could say when I blink, left mouse click on something. Okay, so let's send some specific keystrokes. I'll do colon, closed round parenthesis. So let's try that now. Every time I smile, this thing sends a colon and a round parenthesis to Word, which Word interprets as a smiley. Now the harder demo to do is this one where I do the clenched teeth. Now this is actually harder because when I smile now, it pumps a smile into the keyboard buffer. Normally I don't smile this much. Right, fine. I'm going with that. Now I've got to, I've got to try and clench my teeth and growl. Don't make me smile, don't make me laugh. Okay. Now I can't, I can't, I can't grimace. I can only laugh now. Now just quiet down for a moment. Oh, no. Okay, we're going to abandon this demo. You're far too happy. Right, the hardest part of this demo is turning it off. Yes. Now I've just got to do this other one. Nope, nope, nope. Stop, stop. There. Right, so why do we care? We can see that this is particularly difficult for me to do on stage, but we care because there are two types of what is called locked-in syndrome. This headset is aimed at the gaming market. So you would use it to control a game. That's what it's aimed at. Actually, I don't buy into that idea. I don't find that especially useful if I want to play a game. In today's world, I would use a console. So some gamepad of some kind, use my thumbs and wear them out. That's what I'd use. Maybe in the future, I might use one of these. Absolutely. But in today's world, I probably wouldn't. I feel that I would get a much faster response by using some kind of console.
This is, for me — the point of this is aimed at the disability market. So people who are in some way impaired. And specifically in the area of reading facial expressions, we'd be looking at people with locked-in syndrome. So there are two types of locked-in syndrome. There's locked-in syndrome where typically your whole body is paralyzed. In that scenario, often it is that your body is paralyzed, but you still have some control over some part of your facial muscles. Total locked-in syndrome is where you have no control over even your face anymore. But if you just simply have locked-in syndrome, then you could use your eyes to perform an action. Or if you still have use of the rest of your facial muscles, then you could smile or grimace, or look left to say arrow left, look right to say arrow right, blink to say left click. You could control your computer. And that would be a huge deal for someone who can't use a computer. Okay, so let's go back to the slides. This is where we were. We did that. Right, so here's what this neuro headset looks like. Obviously, I'm wearing it at the moment. This is looking down on top of it. So you can just about see that it's got lots of different electrodes on it called biopotential sensors. So these are the recesses into which I screw — I was doing this earlier, down here — these little pieces here. These pieces here, the sensors themselves, that little grey fuzzy bit, that's felt. So it is felt on the bit that touches my skin. It's a felt sensor. Behind that, it's a gold-plated back. And we screw each one of these 16 different sensors in. That's the back end of the sensor. It's a gold-plated sensor. It screws into the little recess inside the pad here. Then, having screwed it in, I take a saline solution and put a few drops of this saline solution onto the felt pad. And then I'm done. It's fully assembled. This is a semi-wet solution. It's non-intrusive, so I can take the headset off. There's no surgery involved in using it. The next version of this will be a semi-dry version where there's no need to apply saline solution. Incidentally, this saline solution — this is a contact lens solution. There's nothing special about it. It's not contact lens cleaner. That would ruin the headset. Don't do that. This is just the solution. So it's a saline solution. That's it. Any questions before I move on to the next bit? It would help me because I need to take a drink now. So any questions? Oh, yeah. Good. If you could pan out your question a bit. Just one second. So the question was, what if you have a lot of hair? Does this thing still work? The sensor needs to get reasonably close to your skin — preferably touching your skin. It doesn't actually have to touch it. And yes, it does make a lot of difference. If you've got a whole mass of hair — I tried this on my son and he does have a whole load of hair. A lot more than me. In fact, everybody in the room has got a lot more than me. And it didn't work as well. You don't get quite as good a connection. But if you move the hair out of the way and part it, then it's much better. Question down here? Okay. So the question there is, how long does this headset actually last? What about these pieces here? Yeah. So eventually you will get a little green deposit on the back of these things, which you can carefully wipe off with a cotton bud. There's advice on the Emotiv website for how to clean these sensors.
I'm on my second set of sensors after a year and a half. But that's because I wanted a second set of sensors. I probably could have gotten away with it. However, I'm not using it like someone who has locked-in syndrome is using it. I'm not sitting here all day every day. I'm using it probably two or three times a month, which is low usage. Okay. I'm going to move on then. Yes. Do you have to assemble it each time? Yeah. Absolutely. You stick these little sensors back in a little pack and it tries to keep them sort of wet-ish. So it's a perfect seal around the edge of the thing. Okay. I'm going to move on then. So there are different versions of this device out there. Right at the top, there is the consumer version, 300 US dollars. No, I swear I'm not selling anything here. You can't buy this thing from me. The consumer headset used to be only available in the US. It's now available worldwide. It is locked to applications that you can buy from the Emotiv store. It will not work with any other applications. So if your friend writes an application, the only way that you could use a consumer headset with his application is to get them to upload it to the Emotiv store. Next down from that, we have the developer headset, 500 US dollars. That's the one that you can write code against. You get an SDK for that, both managed and unmanaged, so C#, VB.NET, whatever. The one up from that is the research edition. That's the one I'm wearing at the moment. Physically, you cannot tell the difference between these. This one's got different software that comes with it. Obviously, a much better-looking avatar turns up. That's what you get for your 250 US dollars. Actually, a bit more than that. A few more things. One of the things we'll look at later. There are a few more versions here which are aimed at people doing research, so 2500, 7500, et cetera, et cetera. Way down the bottom — for the people at the back who can never, ever see this — is the most important piece, way down the bottom. So I'll read it out. I'm not that cruel. Way down the bottom there, it says SDK Lite, which is free. And obviously, you don't get a headset when it's free. What you get is you get the software. So you can download the software and you get something called EmoComposer. EmoComposer is a fake headset. It's a software fake headset. And if I have time later, I'll show you that. What happens is you say, now I'm sending the blink message to this program, and it receives a blink. So if you want to do this today, then you can go and download that stuff today and you can start playing with it. Bear in mind, it's 100% successful. When you send a blink, your software listening for the blink receives the blink. Now, as you saw earlier when I was doing that facial expression thing, there is some element of doubt as to whether this is actually going to happen or not. But then again, that's also pretty good, because if you're testing — I have been testing my software — when you're testing software and you're trying to go blink, blink, blink, is that the software not working or is that my inability to blink? So it's actually quite handy. Okay, let's move on then. Hardware and software requirements. So it works on Windows 7, Windows Vista, Windows XP Service Pack 2. That's what it works on. It also, as of late last year, works on Mac OS X, and the Linux version, I think, is out or is out imminently. So it works on three operating systems. You need one USB port — one USB 2.0 port. That's what I've got here.
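(Going back to the SDK Lite point for a moment, since this is where you would start writing code today: below is a hedged sketch of what reading the headset — or the EmoComposer fake headset — from C# might look like. The class and method names are written from memory of the Emotiv managed wrapper and may differ between SDK versions, so treat them as placeholders and check the samples that ship with the SDK.)

```csharp
using System;
using Emotiv;   // the managed wrapper that ships with the SDK

// Hedged sketch only: names below are approximate and may not match
// your SDK version exactly.
class HeadsetSketch
{
    static void Main()
    {
        EmoEngine engine = EmoEngine.Instance;

        engine.EmoStateUpdated += (sender, e) =>
        {
            EmoState es = e.emoState;

            if (es.ExpressivIsBlink())
            {
                Console.WriteLine("blink detected");
            }
        };

        // Real headset:           engine.Connect();
        // SDK Lite / EmoComposer: connect to the software "fake headset" instead
        // (1726 is EmoComposer's default port).
        engine.RemoteConnect("127.0.0.1", 1726);

        while (!Console.KeyAvailable)
        {
            engine.ProcessEvents(100);   // pump events so EmoStateUpdated fires
        }

        engine.Disconnect();
    }
}
```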
So this thing is sending a signal out to this USB dongle here, which sends it off to what's called the EmoEngine, which is doing all the processing. You need one of these USB ports per headset. Now, the EmoEngine understands up to two headsets. But as developers, I hope you can appreciate that making your software understand more than one headset is a whole lot different from making it understand one headset. So even though this understands up to two headsets, you would have thought that now they've done the hard work of getting it up to two, some future version could probably do three headsets or four headsets. So using these two headsets, I could plug these two things into this laptop and I could play Pong, for example — mind Pong. I know that's an incredible waste of this amazing technology, but seriously, you have two people playing Pong against each other on the same laptop. I'm not impressed either. And you need one gig of RAM. Frankly, if you haven't got one gig of RAM, then you've got way bigger problems than worrying about whether your Emotiv headset works. So a quick word on safety here. Is this thing actually safe to wear? I mean, let's hope so, because I'm currently wearing it. Is it safe? Well, actually, I don't know, but I will tell you what I am told. So this thing gives off a signal in the same range as Wi-Fi and Bluetooth. So it's 2.4 gigahertz. Now, there's an antenna. It says it's located at the back. Actually, it's located on the right-hand side on this node here. And it's about 20 millimetres off your skin. Now, if this is a subject which you're interested in, you'll know that the closer to your skin the radio signal gets, the more damage it will do to your skin. So putting it 20 millimetres off your skin is relatively safe. The analogy that I would use is if you've got a Bluetooth headset and you clip it on your ear here, that's actually touching your ear. Whereas this thing's not touching your ear. So according to Emotiv, they say a one-minute phone call with a mobile phone will give your ear about the same dose as six months' projected use of this headset. Now, I have no idea whether that's true or not, but the implication is it's quite safe. You can read it as well as I can. Okay, then, before I move on, any questions? Okay, good. That's a great question. Given the false positives and the limited success that I had with the first demo, can I actually use this to control my computer? A couple of things there. Firstly, my brain state when I am standing in front of 100 people is not the same as my brain state when I'm at home. I can really notice quite a significant difference, apart from the fact that you guys are making me laugh when I'm not supposed to be laughing. So I do notice a real difference there. However, it is not 100 percent accurate. It's not 100 percent accurate. There are various dials there that you can change to change sensitivity. I don't know if you saw that on the first screen. So you can change the sensitivity there, or you can say, I am now trying to do a smile — you need to recognize that as a smile. So you can adjust it. There is also the practice element here, in that I am fully enabled. I can do everything that I need to do. So I do not give this the amount of time that someone who is impaired would give it. And they would give it a lot more time. They may give it eight hours a day for a week, which I have never done. Total training time on this computer, maybe two hours. This is, for me, a new computer.
Previous laptop, total training time, eight to ten hours maybe. And I got a better result on the previous laptop. So I am not putting in the effort that other people would be putting in. Okay, let's read some emotions then. So this is the... let me just turn this off for a second. So this thing is not painful, but I don't like wearing anything on my head. So after a while I like to take it off. It's not painful though. So this is the demo that I cannot do live. Now you are going to have to — I am subjecting you to my personal musical taste here. What you are listening to is a slow blues by Patrick Smett. Now I tried listening to different types of music and recording my emotions whilst I was listening to that music. And I did this with CDs and found that canned music doesn't do it for me. I don't get a sufficient emotional response out of just listening to a CD in my house. So when I was at a concert, I sat in the concert with my laptop on my lap and my headset on my head listening to this guy, Patrick Smett. And no, nobody looked at me funny at all. So what have we got here? We've got a black line at the top which is what's called instantaneous excitement. That is: something happens and you go, oh, that's quite exciting. Sort of a shock value. It's quite instantaneous. The red line here is engagement. How engaged are we in the current activity? And the converse to that would be boredom. How bored are you? When it drops low, how bored are you in this current activity? The blue line is my frustration level. Now, I don't follow this line too well because, to me, it doesn't make a whole load of sense. It's not doing what I thought it was going to do. So you can see the frustration level plummeting down there. The green line here is meditation. What's my meditative state at this point? I personally can't affect the green line deliberately. The green line is — if you could meditate, you could make this peak. It's also for your deep sleep states. So you can go onto YouTube and type in Emotiv. You'll find various videos of people — of one guy in particular — recording his state whilst he's sleeping. Now, this thing's about to finish. I want you to notice the red line here is round about 0.5, 0.6. It is: how engaged am I in this activity? Also, notice the black line. The black line's going up, down, up, down, up, down, but it's around 0.4, 0.5, 0.6. Now, I like slow blues, okay? I like this music. I am enjoying it, but what I want you to take out of this piece of music is it's not exciting. It is a calming, rewarding piece of music if you like slow blues. Okay. So again, all these lines converging around that area. Let's take a second piece of music. This piece of music is by a guy called Big John Carter. He's playing a fast boogie. Again, I apologize if you don't like the music. It's a scientific experiment. Okay? So just go with it. Now, some things that we can point out here. This red line — as the music goes on further and further, you'll start to see the red line go up and up. So right now it's about 0.5, 0.8 there, but generally 0.6, something like that. It will start to go up throughout the piece of music because I like this a lot better than the last piece of music. Fast boogie. This works for me. Now this black line here — like I said, the blue line, I don't get it. I'm clearly not frustrated. So there's something there I don't understand. The black line is much more interesting.
This is used for — if you were writing a game, you're writing some sort of game, and you just see the black line going up there. Actually, this is not wholly a scientific experiment. If you notice there, the black line went up and came down. If you were listening to the music at that particular point, you'd have wondered, what was going on in the music just then? This is not really scientific because we're equating the music to my emotional state. What we're not accounting for here is what I could see — or, more precisely, who I could see — and what happened when she just left the room at that particular moment. Don't tell my wife that. Brainwaves don't lie. So this red line here is up at 0.8, and as the piece of music goes on, you see the black line spiking at 1.0. The black line — you might use this if you were writing a game and someone, you're exploring some sort of maze, for example. And as you open the door and you see a big monster about to attack you, you would expect the black line to peak at that moment, because it is instantaneously exciting. But you can't maintain that. It's instantaneous. As soon as you've opened the door and you've seen the monster — you can see the monster — it's over at that point. You may carry on fighting, but the instantaneous part has gone. So it would then plummet. You'd use this red line going up 0.8, 0.9 — you'd use the red line to indicate, is this person interested in my game? And if it was quite low, and that would show boredom, you would say, okay, we need to throw more aliens at this person, more monsters. It's not exciting enough. And the program would adapt in real time according to their emotional state. You just throw more things at him. What I like about this is — I don't know if you can hear the very high notes of him doing almost rock and roll — it peaks at that moment and goes up and down, but starts peaking. Yeah, this is the point at which the song ends. What are the levels as the song ends? See that black line plummeting there? Okay, I'm not very interested anymore. Show's over. Song's coming to an end. Black line down at 0.2. So that's what you get. But the engagement's still very high up here. So that's what you get out of the — what's called — sorry, the Affectiv suite. Okay, no more videos, no more music. You're safe. Questions on that? You can just watch me drink water, or you could ask a question. It's a great time now. Yeah, absolutely. So the question is, does it adapt to different people? That's a really important point. So everybody's brain — the parts of your brain are exactly the same as everybody else's parts of their brain, providing you are not impaired in some way. So your occipital lobe is in exactly the same place as everybody else's. Your temporal lobe, parietal lobe, your frontal lobe, they're all in exactly the same place. However, your brain is folded in on itself. If you've ever looked at a picture of a brain, there are lots of different grooves and avenues; that's because it gets folded back in on itself. The way it is folded in on itself is unique to you. It's like a fingerprint. Nobody else's brain is folded the same way, which means when the signals are coming out, they're coming out slightly differently to anyone else's. Now, the next bit that we're going to look at here — no, we're not going to look at that — is the bit on cognitive thought.
And the way they've set this up previously is they've taken a sample of 50 people and worked out, when 50 people are trying to perform this action, that's what their brainwaves look like, and they've taken an amalgamation of that. However, when you do it, it will not be quite right. So you train it. You go ahead and train it, and for this next part, it requires you to train it. So you try to perform an action, and it goes: I'm just going to record what your brain looks like when you're trying to perform that action, and we'll see. So yes, not everybody's brain is exactly — excuse me — exactly the same. Okay, let's try this one. This is reading conscious thought. Yeah. So let's get rid of that one. And start that one up there. Right, let's put this back on again. Okay, good. Okay, excellent. So this thing — yeah, as I wear it a bit longer, the greens will stay green a little bit longer. So this thing has something called the Cognitiv suite here, where it attempts to read my conscious thought, my intention to perform an action. Now, we do this with a training program, which has a cube in it, and I attempt to manipulate the cube with my mind. I know how that sounds. But that is the goal here. Now, I do find this particularly difficult, because when I give a presentation, I know, emotionally, I know I am in a completely different state to when I am not giving a presentation. I've done this for decades, but I'm still in a different state. So I'm going to try and move this cube. Firstly, I'm going to push it and pull it, and then we'll try moving it left and right. Now, you do not need to perform physical actions to move the cube. So if I want the cube to move away from me, I do not have to go back, like King Canute or something like that. Go back, come to me. I don't have to do any arm actions. But I am going to do that. I am going to do that, because if the cube just moves around, I can go, yeah, I meant to do that. You don't know. You don't know. So I will do some arm actions and try and push the cube away or pull it towards me, so that you can see whether it is actually doing what it's supposed to be doing. Now, the hardest thing of this is I've got to clear my mind, which I can't do. I can't do it with 100 people staring at me. Now, clear mind. Don't worry about the 100 people staring at you. Okay. For some reason, I can't pull it towards me, but I can successfully push it away from me. I don't know if you could see at the beginning of that demonstration, I was attempting to push it, and it was being pushed away from me, and it was staying at its extent. Let's try it with left and right and see if I can get a slightly better result with that. Let's remove that one. And then you can see I can add in different types of cognitive thought here. So I'm going to add in the thought of left. Let's take out pull, and let's add in right as well. Now, you can just see here it says difficulty level. It says moderate. As I add in right — as I add in right, and then one more, it will change to hard. The more actions I add, the harder this thing gets. So let's try left and right. Okay, obviously not left. Well, it should work. If you move your left arm, actually that should work, because you are using a left kind of thing. It's being incredibly stubborn at the moment. And what I had hoped to do here was — I saved left and right as the second demo because I felt reasonably sure that I would be able to control this quite clearly. If you notice here, there's... it says overall skill rating.
And in the first demo that I did, if you were watching, you said overall skill rating was 14% but I worked very hard on training this up. You may look at 39% here and go, you weakling, 39%. I promise you, 39% makes me a Jedi mind master. So I have spent quite a long time training this thing to get 39% because each time you go through the training process, it kind of overwrites your previous result with what you got this time. So you can keep training it and training it and your score can go down. So I'll show you how this works. I felt that with my level of 39% where I swear I can move this cube left and right reasonably accurately, I thought that would be quite good. So this works by starting off with a neutral action and you click on start training, which I'm not going to do right now. And you clear your mind and it records what your mind looks like when it is just neutral doing nothing. That is surprisingly difficult. I don't know if you've ever tried to completely clear your mind, but unless you've done meditation, I'll bet you haven't cleared your mind because you go, think of nothing. You know when you're trying to go to sleep and you can't go to sleep and you go, think of nothing, think of nothing, think of nothing, think, what about the cat? Did I put the cat out? Oh no, what about that bug that I wrote today? Am I going to think of nothing? Think of nothing. You go, I didn't put the cat out. And your brain just fires off and goes off in different directions. You try thinking of nothing for eight seconds. I mean truly nothing. It's actually very difficult to do. So you clear your mind for eight seconds and then you pick one of these. You pick, uh-oh, no, you pick, let's pick in rotate left. You can see here I have never done rotate left on this machine. Let's take left out. Let's take right out. Okay, now let's do the training for rotate left. Oh, I don't want to do that. No, no, no. Okay, let's do abort, abort. All right, let's do that one, rotate left. Okay, sorry, that is a complete failure. I'm trying to rotate this thing left. And I would have hoped that you would have seen it move left at least once. And I didn't mean to do that. So except this training, I'll say no. And I'm not going to do the training live here because it's not going to work. Oh, right. So that's a great point. What about this animate cube? So I can click on that and it will rotate left. So let's start again. But it's rotating left, not because I'm making it rotate left, because it is visualizing this is what rotating left looks like if you were to succeed. Now, it's on there so that it would help you to visualize it. I actually find that doesn't help me at all. And it really puts me off because I don't know, am I moving it or is it just rotating because you told it to. So I turn that off. But some people find it very helpful. Okay. Yes, I am in my mind, I am thinking of the concept of rotate left. It doesn't actually have to be you trying to rotate it left. You could be thinking at that point, lift up. And that's what you mean by rotate left. Or you could be thinking of drive car. It's a pattern. Now, the reason why, as I added these in, this starts at easy and it starts getting more and more difficult is the way that this thing, the emo engine is doing the detection. It looks at your brain waves coming out of your head and it says, I know what your neutral state should be. Now, you've recorded what your pattern looks like for left. 
All the emo engine has to do is say, what does your brain state currently look like? Does it look more like neutral or does it look more like the pattern for left? If it looks more like the pattern for left, then you're trying to think of the concept of left. So that's quite, that's an on off, black, white kind of thing. If you add in right, it goes, well, now I've got three different brain states to try and work out. Which one of those is it most like? And as you add in each different conscious thought, so it gets harder and harder for it to try and make a decision. But also it gets harder and harder for you to make, what is actually the difference between thinking of the concept of left and the concept of right and maintaining that for several seconds? Because this thing has a buffer. You can't say, think left and it goes left. It needs to register that thought and make a decision. There is a delay in that process. Sorry, say again. Okay, so what if we exaggerate this? We picked something much more magnified. So the concept of stabbing someone to death would be right. And the concept of gently stroking a cat would be left. That's the idea. Smiling, not stabbing someone at all. That means nothing about me personally. That's between me and my analyst. So absolutely, magnifying emotions to something a little bit more stronger, it does not actually matter that it is left and right. So yes. So what do people use this for? They use it for all kinds of things, but it's the easiest way to picture this. Take anything that you have used a remote control for. So people do this with UFOs, for example. You have a remote control with some joystick, you have a UFO and you move up, down, left, right, around and everything. Take the joystick and throw it away. And then attach a headset and then you can control the UFO. So people have done this. Anything that you could control with a joystick, you could throw the joystick away and replace it with a headset. So this will be a wheelchair. Wheelchairs are controlled with a joystick. You wouldn't have to use, and people have done this. So people have replaced the joystick on the wheelchair. Can I get the, whoever is room monitor, can I get them to fix the problem outside with the people? Thanks. So other things will be cars. So there is a project in Germany called Brain Driver, where they have wired up this headset to a car. There's a guy sitting in a car, he thinks of the concept of left and the car, he doesn't use his hands, the car goes left or right. Or he thinks of forwards, which will be the push motion, and the car goes faster. And he thinks of pull towards him and the car breaks. I hasten to add, they've done these experiments in an air field. So they've got plenty enough space to get it wrong, but it does work. Other people have hooked it up to a skateboard. Perhaps more usefully, though, it's been hooked up to robot arms. So if you are in your bed and you are immobilized, there are various arms that you can get, robot arms, which will reach out to the side of the bed, pick something up and bring it to you so that you could drink. It's also been hooked up to house robots. There's a house robot called a robo dance 5. So that's normally used with a joystick. The house robot goes about the house performing tasks on your behalf. They've hooked one of these up to one of these headsets. So on and on and on. It's basically down to your imagination at that point. So let's go back to the sides and see what we've got. 
Yeah, actually, so here's the next demo, which is using the API. I said we could do this stuff in C sharp. So let's do it in C sharp. So here's a program. Question there. Yeah. Right, like in Firefox in the movie, that kind of thing. Yeah, so the question is, can I use my mind faster than I can use my hands? At this moment in time, no. No. The recognition of the conscious thought has a delay built into it so that it can actually work out the difference between neutral and some other states. So there is a delay and I cannot currently think faster than I could use a joystick or a game pad of some kind. However, my facial muscles do not suffer that delay pattern. So I can blink and it can recognize it almost instantaneously. That's a much faster processing time. Also note that this is the headset that we have today. The world of neuroscience is, depending upon how you count it, in the order of 30, 40 years old or longer than that. The world of the commercial headset, so this commercial headset is something like five years old. This headset, I think, is three years old. But this science has just taken a quantum leap forwards with these becoming sub-$1,000. So there will be another version. There's always another version. And they always get better. So no is the answer today, but I believe in the future. So we are looking at the world's worst user interface. Really, this is not my forte. I make stuff work. I never make it pretty. So this is my fairly nasty user interface here. And I've got it hooked up to the C sharp API. So as I use this headset, you can see there are checkboxes here that do various different things. As I blink, for example, we will see the checkbox for blink get checked. So I need to click on this start reading button and it will read my facial expressions for 10 seconds. So let's try it. I was supposed to blink. I don't think it's actually blinking. Oh, there you go. It did blink. Yeah, that's because it stopped reading. Okay, let's try it again. I can do the smile thing. Yeah, I know that I'm not going to be able to do the clenched teeth thing. Let's try the other bit with the eyebrows. I can do the, yeah, there you go. There's my eyebrows going and it stopped. So let's look at the API. Let's see how I'm actually reading this. This is straight C sharp stuff here. Let me just fix that so that you can actually see it. And there you go. You should be able to see that. Okay, so this is a Windows Forms application, not that that's particularly important. We have a private field here called engine, which is of type EmoEngine. So I'm using the C sharp managed API here. And in my form's constructor, there's InitializeComponent, there are a few more bits and pieces which I'm going to ignore for the moment, and I create an instance of the EmoEngine. So I've basically got a connection to it. And then the EmoEngine has an event called ExpressiveEmoStateUpdated. So it has the expressive API, it has the affective API, it has the cognitive API. I hook into whichever API I'm interested in and I say, call me back when you detect a change in one of the states. So I've said here, there's my emo state, call me back on engine_ExpressiveEmoStateUpdated, which is this method here. So this method will be called when the facial expression changes. And I grab hold of the state here. So we're passed in an EventArgs, which is an EmoStateUpdatedEventArgs, and it has in it a state.
So we get given the state that it currently is at the moment. And then what I do is I say, take the state and call the expressive is left wink method, which you probably don't need me to explain what that method does. It's a really nice name there. And I set my checkbox to be true or false, according to whether the person is left winking and expressive is right wink is blink is eyes open is looking up is looking down is looking left is looking right and so on and so on. You get told all this stuff. This one here is slightly different. This is get smile extent, which is a number between zero and 1.0. And it says it's not just is this person smiling. It's how much is this person smiling a lot or just a little bit. And you may have noticed on the avatar on the first demo that the avatar either had a great big smile or just a slight smile or no smile at all. That's how they were doing that. And then we've got get clench extent, which sounds really wrong, to be honest, get eyebrows extent and so on and so on. So that's the API. I've probably got three minutes left. So what could we do with this? We could say show avatar. And I could create an avatar based upon my facial expression. So this thing should be reacting to what I'm currently doing. And you can see the it's blinking its eyes. So what have I done here? I've taken lots of different pictures. I've cut them up into little different pieces. And I've said if you can see that the avatar is now looking left, then show the picture for his eyes looking left at this point on I know that was incredibly I swear you could do a better job than I did, right? But you get the idea not particularly difficult to create an avatar. Okay, last thing I want to do, you know, I got three minutes, I want to do two things if I can, but we'll see how much time I've got. I wanted to show something called test bench. Right, here's test bench. What we're looking at here is the feed coming out of this headset to each one of these sensors. Now, you can see on the left hand side here, there's these different check boxes. There's 01 and 02. That corresponds to the sensors at the back of my head, which are reading my visual information, P7, T7, P8, T8. And you can see all the activity going on in my head right now. I truly hope you don't know what that means. But so you can get this feed, you can get this raw data in your program. This is where you can work out the code that you need to write, but rather you can write your code, but you don't know what this stuff means. You need a neuroscientist to tell you, yeah, what we're looking for is the spike in the AF3. We're looking for activity in 02, at the same time as activity in P7. So for example, if I'm reading a map, let's say I need to go and read a map, that would activate signals in my 01 and 02 because I'm looking at a map in front of me. Then I might do some spatial, I may perceive the map in my mind. So I may look at the map and visualize it. To visualize it, I need my parietal lobe. So that would be activating P7 and P8. Then I may do some route planning and may work out, here I am at this location right here. I need to get to this location over here. I need to think about what route would I take to get there. That's decision making. That's in your frontal lobe. So we'd be looking at AF3 and AF4. 
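A minimal sketch of the wiring described above, for reference. It follows the spoken method names; in the Emotiv managed SDK the suite names are spelled without the trailing "e" (Expressiv, Affectiv, Cognitiv), and the singleton access, the ProcessEvents pump, and the form control names (chkBlink, chkLeftWink, chkRightWink, smileBar, clenchBar) are assumptions of this sketch rather than anything shown on screen, so exact identifiers may differ:

```csharp
// Hedged sketch of the WinForms wiring the talk walks through (Emotiv managed SDK).
using System.Windows.Forms;
using Emotiv;   // managed wrapper shipped with the Emotiv SDK (assumed namespace)

public partial class HeadsetForm : Form
{
    private readonly EmoEngine engine;

    public HeadsetForm()
    {
        InitializeComponent();

        engine = EmoEngine.Instance;                                   // connection to the engine
        engine.ExpressivEmoStateUpdated += OnExpressivEmoStateUpdated; // "call me back on changes"
        engine.Connect();

        // Assumption: the engine only raises events while you pump it periodically.
        var timer = new Timer { Interval = 100 };
        timer.Tick += (s, e) => engine.ProcessEvents(100);
        timer.Start();
    }

    private void OnExpressivEmoStateUpdated(object sender, EmoStateUpdatedEventArgs e)
    {
        EmoState state = e.emoState;

        // Boolean facial expressions.
        chkBlink.Checked = state.ExpressivIsBlink();
        chkLeftWink.Checked = state.ExpressivIsLeftWink();
        chkRightWink.Checked = state.ExpressivIsRightWink();

        // Scaled expressions, 0.0 .. 1.0: how much is the person smiling or clenching?
        smileBar.Value = (int)(state.ExpressivGetSmileExtent() * 100);
        clenchBar.Value = (int)(state.ExpressivGetClenchExtent() * 100);
    }
}
```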
Now, a neuroscientist could tell you that stuff, but then you need to work out how to code it based upon the raw data coming in here, which is no mean feat, which is why the EMO engine is pretty cool because it comes back and it says, this guy is winking or he's looking left or he's feeling unhappy or bored. It's much easier when you get the decoded signal. Okay, I am pretty much done here. So let me just finish off with a few things. I didn't show you the EMO Composer, which is faking the headset, but you can imagine there is a piece of software out there which fakes the headset, and you can download that for free. There are some applications out there which do various different things. This is a brain activity map if you know how to understand that kind of thing. Here's another program for doing typing. The EMO key I didn't show you is a program that comes with the emotive headset for free, which will do typing. There are lots of typing programs out there, brain speller, et cetera, et cetera. This is one for hooking into Unity 3D programs and games, and adding headset support to the games, emo lens for rating pictures, et cetera, et cetera, et cetera. So to finish off, there are two key differences in the world of neuroscience in the last five years. Firstly, these headsets, this is not the only one, these headsets became affordable. They are sub $1,000, which puts it in your hands. They also came out with an API that you can program to. Gone are the days when these things cost $20,000, $30,000, $40,000. You can buy one right now, and you can write C-sharp code against it, and you can help people around you, and you can discover the next wave. So I see us at the beginning of a new wave of neuroscience, where it's now put in the hands of the masses. So this is affordable. The facial reading stuff is fairly accurate, even though it wasn't enormously accurate when I did it, but I hope you'll forgive me because doing this presentation whilst using one of these is actually quite different. So I hope you'll forgive me for that. The conscious thought requires you to train it, but if you were wholly immobilized, you would invest the time in this, and you would put in eight hours a day for five days to train this. It's limited by the number of simultaneous thoughts that you could work on, and typically that's four. You're not going to get more than four, and have the computer recognize those different thoughts. And the potential is limited pretty much by you only. I hope you found that useful. If there is a question I didn't answer, then email me or get me on Twitter, or catch me. I'll be standing up around here or catch me throughout the conference. Hope you found this useful, and we'll see you again next year. Thank you.
The world of neuroscience changed over the last 5 years. Neuroheadsets (headsets that read brainwaves) became affordable and accessible. Neuroscience is no longer confined to the realm of huge research budgets and professors in lab coats. It is accessible to regular developers. This session illustrates a neuroheadset that reads brain waves and uses a C# API to allow developers to recognise facial expressions, emotions and cognitive thought – that is, the headset can read basic, deliberate conscious thought. Although the headset is aimed at the gaming market, the potential for the physically impaired is considerable. Come and see the potential of today’s Brain Computer Interfaces.
10.5446/51131 (DOI)
Hello everyone. Do we start the talks on time in Norway? We do. All right. So it's 4:20 sharp, so we're going to get started. Welcome everyone. My name is Peli de Halleux. I'm a research, what is it? Senior research software design engineer at Microsoft Research in Redmond. And I work on a project. One of my projects is Fakes, which is a new unit testing isolation framework that will be available in Visual Studio 2012. And I guess you're all interested in unit testing. I'm just going to do a quick poll here. Who has written unit tests in the past? Awesome. So I got a room of experts, and I expect people watching the video to know something about unit testing, so we're not going to go through the basics; we're just going to focus on Fakes and leveraging Fakes. In fact, I'm totally going to skip any kind of discussion of whether it's a good idea or a bad idea to use this. I'll just show you how to use the hammer and, well, you nail whatever you want with it. All right. So, why is a research guy presenting a tool about unit test isolation? So, a bunch of years ago, maybe five years now, Nikolai Tillmann, another RSDE at Microsoft Research, and I built something called Pex. And Pex is a code exploration engine and it uses precise data flow and control flow analysis and also constraint solving. Pex needs this super precise analysis of your code. It basically analyzes every instruction running on the machine. So, when we built that, we went to all the, you know, many product groups, many devs, and we'd say, write code that never talks to the environment. Write code that never, you know, connects to any database or anything like that, because then our tool doesn't work or it's hard to use. Write this superb isolated code always. Now, reality settled in and we realized that we actually needed to provide a mock framework or something like that to help people deal with the isolation problem. So, there are great frameworks out there for .NET to do mocking. But the problem with them, for Pex, was that they had a lot of overhead. So, I'm talking about this to explain the rationale about how we built Fakes and all the decisions we made in there. We made all the design decisions first for Pex, first for an automated tool that spits out test cases for your code. So, it was about being as efficient as possible when we would actually generate the code for all these objects that we create. So, it is designed for that tool and a lot of decisions were made around that. So, we built a type safe, statically compiled detour framework with clear semantics. All right, we'll see what that means. And then finally, Moles was picked up by Visual Studio, and we spent the last years working together with Visual Studio, integrating the framework into Visual Studio 2012, which you can download and you can play with. I'm going to show you the RC right now and show you what Fakes is all about. So, that's kind of the background and the history of the project. All right, so, in a nutshell, if you have to remember anything about Fakes, it's on the slide right now. The main idea behind Fakes was to allow you to replace any .NET method with a delegate. Now, since JavaScript is mainstream, this is just business as usual in dynamic languages. In .NET, not so much. You can't just replace a static method with another method body. But basically, that's what we wanted to give and nothing more. That's all we needed for Pex, and people could build on top of that. So, we tried to keep it as simple as possible.
So, we'll talk about this sentence. And I have a demo. So, what does this mean? I have this little blinking demo here. Okay, demo time. So, we're going to switch into Visual Studio 2012, which has been stripped down for the demo. All right, here's my menu. I'm going to create a new project. Let's see if I can create it. There we go. New, new project. All right. And I'll create a little class library. Okay. So, here's a little piece of code that's really hard to test. And it, you know, something famous we'll remember was those crazy times of the millennium where we had, you know, these nasty bugs in our code base. It's not happy for some reason. It's happy. All right. Let's remove this. All right. So, now let's say we want to test this, this two-liner program. It's extremely hard to get that right for a couple reasons. So, daytime now, that's a static method that ships with the framework. You have no control over that. That thing goes very quickly into the machine clock and gets you the current time. You have no control over that, too. You can change the machine time, but by the time you read it, change it again. So, it would be extremely hard to match this equality here. All right. If you really want to hit and go into this branch, you need to tweak the machine time very, very, very precisely. It's almost impossible to get right. And this is symptomatic from big systems which haven't been well-componentized or maybe you're dealing with third-party code, legacy code that wasn't written in a testable way. You have this big chunk of code. It's all intertwined. It's really hard to work with. It's pretty hard to test. You need multiple machines. You need servers, all this stuff. So, this example looks funny, but actually, anytime you have something in cold context, right, which is a static method, you might have the problem. So, we're going to work a lot on this year 2K bug. And right now, I want to write a test case that hits that. So, I'm going to add a new project. And if you're familiar with Visual Studio 2012, there's a new project template. I'm going to hear in test and click on the unit test project, which is leveraging the MS test engine. They rewrote the entire UI. We'll see that. The entire UI for writing tests and running tests. Let's split this. Split this. All right. Let's go. All right. So, I got my code. I got my test here. And let's add a reference. All right. There we go. And here, I'm just going to call it. Right. Of course, it's going to be happy. So, what I'm going to do here is there's this new thing called the test explorer in 2012. And it has this cool feature where you can run your test on build. So, whenever I'm going to build now, I'm going to run the test. If you want to see me doing run tests, I'm just going to checkmark this. And then I'm going to build. And then once the build is done, I'm going to do the discovery and run the test. This is a very cool new test runner that comes in Visual Studio 2012. All right. And Fakes will just work with it. It just works. All right. So, it's happy and we didn't hit the bug. Now, how do we do it? So, remember the idea was in Fakes that we wanted to be able to replace a method with the delegate. So, in C sharp, the wish was to do this. The wish was to be able to write something like this. Everybody's familiar with the, so this is a lambda expression, the little notation here. So, if I was in JavaScript, this would probably work. Unfortunately, right now, C sharp is not happy because you can't do that in.NET. 
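For reference, the two-liner under test is only ever shown on screen; a minimal sketch of the kind of code being described might look like the following (the names Class1 and Year2K follow how the code is referred to later in the demo; the exact body is an assumption):

```csharp
using System;

public class Class1
{
    public void Year2K()
    {
        // Hard to test: DateTime.Now reaches straight into the machine clock,
        // so a test can essentially never force execution into this branch.
        if (DateTime.Now == new DateTime(2000, 1, 1))
            throw new ApplicationException("boom");
    }
}
```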
But the core idea behind Fakes and Moles was to be able to do that in a type safe way and in a T2 way. If you are able to replace the implementation of DateTime.Now, then for sure, you're going to hit the bug. Right. Now, how do we do this? So, I'm going to show you how to enable Fakes and we'll talk about the mechanics. There's one step you need to do in Fakes: you need to create a fakes assembly in which we're going to create all kinds of types that allow you to replace methods. So, the first step is you go to the references and I'm going to right click on System and I'm going to say Add Fakes Assembly. Now, DateTime really lives in mscorlib, but mscorlib doesn't show in the reference list. So, we bundled it with System. So, if you do Add Fakes Assembly on the System assembly, it's going to go ahead, it added two files to your project and it's compiling right now. It's compiling new assemblies and it's going to add them to your project. So, you have new types to play with. And this is a great machine I got on loan and they forgot to tell me it was a bit slow. All right, here we go. So, we got this guy which is a new assembly that was created by us. We will recreate it on the team build server on demand. You never check that in, it's cached and so forth. And this assembly contains what we call shims. Shims are strongly typed wrappers that allow you to replace methods. So, I'm going to show you an example of a shim. I'm going to show you the DateTime shim. Let's go back here. And the idea with shims is you can use naming conventions. So, I'm going to write ShimDateTime. So, it's Shim plus the type I want to isolate from. And you can see the little IntelliSense menu is there. So, it knows about it. So, I can press Control-dot and it's going to add my using System namespace, using System.Fakes namespace. We generate types and put them in a sub-namespace, dot Fakes. DateTime lives in System, so we have a System.Fakes namespace. All right. So, I'm going to do that. Great. It gets a nice color here. So, it knows about it. And now, same idea with properties. I'm going to use the naming convention. First, I'm going to type the property name I want, Now. And then you see there are two: there's NowGet, the getter of Now, and UtcNowGet. So, I'm interested in NowGet. And in fact, the help here says, let me get out of this mode, help says: sets the shim of DateTime.get_Now. Right. So, this allows you to register a new implementation of get_Now. Pretty cool. All right. So, let's do it. So, in fact, there are no more squigglies. And when I press build, it should now start running. So, it's going to replace that method. And now, it's going to run, DateTime is always going to return the year 2000 in that app domain for the scope of the test. Now, it's going to fail. Let's take a look at the error message. Right here. So, the error message says, shims context must be in scope to register shims. Now, you don't want to leak those shims. You don't want to leak those redefinitions of very important methods in your system. Otherwise, your entire app domain is impacted. So whenever you use shims, you have to first create a context. And we will clean up everything when the context is disposed. Right. And if there's no context, we won't allow you to run. So what this means is I'm going to go here. I'm going to say ShimsContext. Control-dot. ShimsContext. Sorry. Oh, wrong file.
I've trained and they always went on the left, and I guess now it went on the right. All right. Thanks. I was on the wrong file. ShimsContext. Control-dot. Create. Cool. Curly braces. Boom. Okay. So I got a context. And I'm building. So once it's done building, it's going to start running. And in fact, you can see that we got the boom exception. We hit the bug. Cool. We got rid of the entire non-determinism of the time on the machine. Now we have a test that always fails. Always goes to the same place. Right. In fact, we can take a look at that in the debugger. What I'm going to do is I'm going to put a little breakpoint here. I'm going to tell the runner here to, I guess I can do it here too, and say debug test. Now because we use C sharp, C sharp generates beautiful debuggable code. So all the delegates you register are registered in the symbol file. You can actually step through that. You can step through your mock test code. Super easy. So I'm here in my test case. I'm going to press F11. We're going into Year2K. I'm going to press F11 again. Usually you step over that because you're going into the BCL and there are no symbols for you. And what happens is we're now in our little detour here. And if we look at the stack trace, take a look at this part. You see the first frame is that anonymous delegate that C sharp created. The second frame is deep down in mscorlib. You basically entered mscorlib and then you took a detour. And then the third line is the test case and then the project and then the test case. So what Fakes does at runtime is it injects this little branch that says, hey, is there a delegate that I should run instead of this method? And if there's one, run this. It's a detour. It's as simple as that. The cool part is you can debug this. So we can actually step, step out, look at the value. Look again at the value. Let's try a 04. Yeah, year 2000. Now, there's something funny happening when, then boom, okay, we hit the bug. Great. There's something funny happening actually: the serialization code that sends all the messages from the debuggee to Visual Studio uses remoting, which somewhere uses DateTime.Now, so Visual Studio is receiving messages from the year 2000, from the past, from its debuggee. It doesn't care much. All right, so, but there's a lesson here. When you replace the method, you replace it for the entire app domain during the scope of that test case, right, during the scope of that shims context. So for all threads. And basically one app domain, right? So you might have side effects: if you start running shims on a big system, you might have things that are a bit, you know, crazy, you don't understand what's happening, because maybe you've registered a delegate that gets called and you didn't expect that API to be called, right? Okay, so that's the basic mechanics. You want to replace any .NET method with a delegate. Now we've seen shims. Shims is the heavy duty tool that lets you deal with all the bad stuff, which is static methods, private methods, sealed types, basically all the non-overridable methods. We have another kind of code that we generate that's called stubs. And stubs are really implementations of non-sealed types, implementations of interfaces, implementations of abstract classes, non-sealed classes, anything that would use virtual method dispatch. Stubs are much more lightweight, just using overriding. Shims is the ultimate weapon. It's using this detouring, this runtime instrumentation that rewrites your code at runtime.
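Before moving on, here is roughly what the complete shimmed test built up above looks like. ShimsContext.Create and ShimDateTime.NowGet are the actual Fakes APIs; the test class, test name and the Class1/Year2K names are assumptions carried over from the earlier sketch:

```csharp
using System;
using System.Fakes;                          // generated by "Add Fakes Assembly" on System
using Microsoft.QualityTools.Testing.Fakes;  // ShimsContext
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class Year2KTests
{
    [TestMethod]
    [ExpectedException(typeof(ApplicationException))]
    public void Year2K_ThrowsOnJanuaryFirst2000()
    {
        // Shims must be registered inside a context; everything is undone for
        // the whole AppDomain when the context is disposed.
        using (ShimsContext.Create())
        {
            // Detour the static DateTime.Now getter with a delegate.
            ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);

            new Class1().Year2K();   // now deterministically hits the bug
        }
    }
}
```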
Right, so it's good to know the differences and the trade-offs. And you start using both. All right, so shims. So we've talked about static methods, and just to hand out another great demo. So this demo is why I have this dedicated computer. Just to keep people excited. We're going to do, you know, it's like a circus here. So who thinks this is a good idea at a conference? All right. Okay. Okay, if I hit build now, I'm hosed. All right, so how does it work? I write Shim. I need to get to the Directory type. So I'm going to do ShimDirectory, Control-dot. And then I need Delete. So I'm going to do Delete. And then I'm stuck with four overloads of Delete. I'll try that again. Delete. Right. There's the, and you can see here the second part of the naming convention, which is how do you decide which overload you're going to replace? We basically mangle all the parameter types together into that property name, which can be very long. And you find your overload like that. So we're looking at DeleteStringBoolean. It's the Delete that takes a string and a Boolean. That's the one we want to replace. It takes a path and whether it's recursive. And we're going to say, don't do that. Right. Okay. So let's review the code. I'm running something that will most likely nuke my machine or launch a rocket or whatever. And on the other side, I'm using Fakes so that I'm actually not nuking the machine. It's not happy for some reason. Okay. Come on. Happy. Okay. Let's build. And maybe this session finishes early. Maybe not. So don't do that. Right. Same idea. We trapped the delete call, replaced it by something benign, something that didn't hit the file system. And then we can move on. Now I'm going to delete that code because it's very dangerous. All right. So these were static methods. Now, the next slide is called C sharp goodies. So we use delegates. So we use anonymous methods. We use lambdas and you can do whatever you want in those. You can declare variables. You can capture variables from an outer scope. You can call assertions. You can call frameworks. Whatever you want. This is just C sharp. It's just C sharp. It's just VB. Whatever your language is. Okay. Let me show you that. So you can leverage all the power of the language. For example, if I wanted to count how many calls we do, I would declare a variable count. And then I just increment it here. Right. Now I could just display it. Right. What's going to be the output here? Or maybe even better, I could say that count should be equal to one. I'm going to just display it just for fun. Right. So what's happening here is C sharp is doing, you know, creating a closure. Count is now something on the heap. All the changes happening in the delegate, you can observe them. So it's not really local anymore. So when we build, it's going to be happy now. And in fact, you can see that the output was one. Pretty cool. So we can leverage all the language we know. All our knowledge about C sharp. Even better, we can say that I only want this to be called once. Okay. So I shouldn't do this plus plus thingy. But so now I'm asserting, really, that it's only called once. And I can just put an assert right in there and just use the C sharp language or the VB language. And when I build, again, it's happy and I'm going to call it twice just for fun. So now I'm calling the code twice. So this assertion is triggered. And just for fun, we're going to probably debug this. Put a break point here. All right. Let's go into the debugger again.
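For reference, the pattern being typed here, with the closure-captured counter and the assert, is roughly the following (DangerousCleanup stands in for whatever code under test calls Directory.Delete; it is not from the talk):

```csharp
using (ShimsContext.Create())
{
    int count = 0;   // captured by the closure below

    // Naming convention: method name + parameter types, so Delete(string, bool)
    // becomes DeleteStringBoolean. Doing nothing here keeps the file system safe.
    System.IO.Fakes.ShimDirectory.DeleteStringBoolean = (path, recursive) =>
    {
        count++;   // plain C#: observe how often the detour is hit
    };

    DangerousCleanup();          // hypothetical code that calls Directory.Delete(..., true)

    Assert.AreEqual(1, count);   // or put the Assert inside the delegate itself
}
```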
It's starting up. So we're registering all our delegate and we're getting to the delete code. Count is count is zero. And then we go back and count is one and boom. Right. Again, fully debugable. And you can just leverage the knowledge of C sharp that all the devs have. Right. All right. So that's that's for the C sharp goodies and the VB goodies. All right. So the next challenge you'll face in code bases when you do new object, new something. This is pretty hard to deal. If you had a nice componentized code base, you'd use a dependency injection layer or some kind of service provider and you wouldn't have a new. Anytime you have new, it's hard to test. So how do you deal with new? So we're going to change our example here. I'm going to do a stream reader, rooting some fancy file on my machine. All right. This is most likely not going to work. I can't type that fast. So, all right. Just for kicks. Clean up this thing here. I'm going to run this. So what I'm expecting to see is when it's done basically telling me, you know, the path doesn't exist. Right. Now I can look at the stack trace and I can see that this is probably the place. Whoops. This is probably the place where I want to start isolating maybe from the stream reader. Stream reader is actually a very nice abstraction over stream. So this might not be the best example, but I have to deal with the constructor here. So how do we do that? So in fakes, you can actually replace constructors. What are constructors? Contractors at the I a level are methods. They receive an uninitialized runtime instance freshly created by the by the runtime and they call the constructor on it. So you get a fresh piece of memory and then you call the constructor on it. So we're going to look at shim stream reader and they're a constructor. There are static methods called constructor. And again, you have this overloading. We're looking for the one that has a stream. Now, as I mentioned, you receive two things. You receive the object and then whatever argument that was passed to the constructor. Right. So this is basically a null pointer. It's just zeros. It's freshly created by the runtime. You're supposed to populate it with your constructor. But we trap it. We get it. And now what we can do is just not run the constructor at all. Just wipe it out. Turns out that the stream reader is pretty happy about it. Let me put a breakpoint here. Let's debug this test just to see what's happening. So instead of running the actual constructor, we run that method. Those are simply methods and we can replace them. So I'm going to step into this. Ooh. What just happened? Sorry. It's a wrong constructor. Oh, I took a stream. Right. I guess it was a good idea to change example. All right. In fact, you can be more aggressive and spell out the type if you want to make sure you have the right signature. All right. Looks happy. I hit build, I guess, so it's going to run the test. So I want to debug this. And now it passed. Let's go and debug it. All right. So we're going to get to the constructor and just to show debugger is a great way to see what's happening under the hood. Boom. You're in the empty constructor here. Right. If you look at the stack trace, you see that you are actually in the constructor. Right. You can get into pretty much anything using fakes and shims. Right. And sometimes you have legacy code that requires that. Then we continue and then this goes on. So we've covered constructors. Instance methods. 
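The constructor detour from the StreamReader demo, sketched out. Which Constructor overload you shim has to match the constructor the code under test actually calls; the string/path overload is shown here, and the file path is made up:

```csharp
using (ShimsContext.Create())
{
    // Constructors are shimmed as "Constructor" + parameter types; the delegate
    // receives the freshly allocated, uninitialized instance plus the constructor
    // arguments. Doing nothing simply skips the real constructor.
    System.IO.Fakes.ShimStreamReader.ConstructorString = (@this, path) =>
    {
        // leave the StreamReader uninitialized on purpose
    };

    var reader = new System.IO.StreamReader(@"c:\some\fancy\file.txt");  // no file access happens
}
```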
So instance methods are static methods that takes the this pointer. So we nicely hide it, but that's basically it. So there's one way to replace them by just replacing all the implementation for all instances. So you can do that. Or you can actually replace instance method per runtime instance. You can have multiple runtime instance running different piece of code. Yeah. That can be fun. That can be cool. So let's do that. So let's say I want to read a stream reader. So I'm going to read that thing. Store that. I'm probably displayed. All right. So I'm reading it. So read to end is an instance method. The first way to do this is to replace for all stream reader. So I can do stream reader all instances. It's a nested type. And I can look for read to end. And that thing takes the this pointer. And I guess there's no order agreements. And I have to return something like that. All right. So when I do that, anytime you use a stream reader and does read to end, you'll get, hey, that might be disastrous if many piece of your system are using the same API. That might be a bit dangerous. In fact, I don't think I ever tried this demo. We can just try it. Looks happy. It's not so running. All right. So that's the first way. There's a nested type of all instances. You can treat instance method as static methods. Right. And we see that. Whoo. It worked. Yeah. And we trapped the read to end method. Cool. A more precise way is to actually register it on that particular instance. So what we're going to do first is we're going to wrap that instance into a shim. And you can see that as basically taking over, take control of that. And then inside we can do read to end. Now in this case, because we know the receiver, we don't give you the this argument. And you can just say something like that. So that's the second way of doing it. We will dispatch to the particular runtime instance. We'll let all the other ones unchanged. Let me run this. So I would say that's a preferred way. It's less impactful. You get less side effect that are unexpected. More precise. And again, we get hey again. Yay. All right. So that covers instance methods. Right. So we can register in two ways. All implementation or for single runtime methods. Now here's a little trick when it's a you only want to trap the first frame reader. You can actually dynamically change those. So if you want to unregister a shim dynamically just assign null to it. We clear the entry in the dictionary and then you're back with the real thing. So if you want to trap an object and then let go of the detour, you can do that. Or you can add more stuff dynamically. You can really have the same fun that you have in JavaScript but in your.NET code. And there might be a situation where this is useful. And again, think about replacing.NET method with delegates anytime you want. And if I run this twice, what's going to happen is the second time I'm going to get the exception. So I'm going to run this in a debugger. Let's just show how this works. So we're going to run the debugger. And I know people have been tempted at doing logging with this. Remember, shims is for testing. Never ever use it in production. Don't use this in production. Bad idea. All right. So I'm going in here. I'm registering my thing. I'm clearing the constructor. I'm doing the call. And then, boom. No more shim. You're back into the real thing. You blow up. Right? So you can play all kind of cool tricks with the framework. All right. Collections. And that's an important one. 
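Before the collections example, here is roughly what the two instance-method detours just shown look like; wrapping an existing instance in the shim type is how the talk does the per-instance case, and GetReader is a hypothetical helper, not from the demo:

```csharp
using (ShimsContext.Create())
{
    // 1) Replace ReadToEnd for every StreamReader in the AppDomain.
    //    Instance methods live under AllInstances and take the receiver first.
    System.IO.Fakes.ShimStreamReader.AllInstances.ReadToEnd = (sr) => "hey";

    // 2) Or detour only one particular runtime instance by wrapping it in a shim;
    //    no receiver argument is needed on the per-instance delegate.
    System.IO.StreamReader reader = GetReader();   // hypothetical helper returning an existing instance
    var shim = new System.IO.Fakes.ShimStreamReader(reader);
    shim.ReadToEnd = () => "hey";

    // 3) Assigning null clears the entry, so the next call falls back to the
    //    real implementation.
    System.IO.Fakes.ShimStreamReader.AllInstances.ReadToEnd = null;
}
```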
You'll find frameworks with their own custom collection. They were maybe built before generics or they really felt they needed a collection. The problem with collection is that they have those private implementations of interface types. Like get enumerator. I think these are usually hidden. So the way we deal with that is basically duck typing interfaces into other objects. So we'll allow you to replace all the interface implementation from one type into another type. That means I can take an array and shovel it into any collection. Right? I'm just going to do that. You'll see. So it's clear all this mess. So here we have my super custom collection, which implements I collection of T. Why not? Collection. There we go. Happy. And I have the default implementation that throws all the time. Okay. So if I call any of the methods of my custom collection, it blows up. Might be that I'm missing a server or something like that. Now I want to test a code that let's have a collection here. I need to be able to access it. Okay. So I don't want to do the constructor right now. Some collection of int. So we got it here and then your 2k is just going to add something to it. All right. So when I call this, it's going to blow up because ad is always throwing. So it's the need all this stuff. Now, before I get on, I need to tell Fakes to generate shims and stuffs for my custom assembly now. So far we only did a miscorded. So I'm going to go again in the references, right click here, say add Fakes assembly. There's a fake file being added here and then there's a file being added here. All right. Okay. So we're in the shim context and now what I'm going to do is I'm going to grab that instance. So I'm going to wrap it up into custom collection. It knows it. Cool. And I can do class one call. All right. So I got my little wrapping here. I'm going to be able to replace instance methods in that type. Now, I see that I have at t0. So I have a delegate that allows me to replace the admin as expected. But I don't have get enumerator. I have a get enumerator but I don't have the untyped one. So if I would do a for each and use the untyped one, I would actually hit the real thing and then fail. So what we're going to do is we're going to actually create an array of integers. There we go. All right. Get an array. And here I got my shim. And I'm going to use bind. So think bind as dynamic duck typing. Take all the methods that implement iCollection in the array and put them into my custom collection at once. So when I do a for each, element in class one, I can afford my example here. Hit year 2k. This is the example. All right. So if I enumerate the collection right now, if I wouldn't have my shim, it would fail. So let me just remove that. Show what happens. So when I run this unchanged, it fails because it's not implemented as expected. Now, when I do my binding, what happens is we replace the generator from custom collection to the ones from array so we get those nice numbers we got back. And in fact, it displays 1234. That's a very flexible way for you to deal with collections and in fact, types that implement certain interfaces. You can just bind them. But collection is definitely a prime example of that. And then you can use this nice array. Nice arrays syntax. Very nice. All right. So far, we've seen shims, which shims are made to deal with hard to test code. Now, what about stuff? So stuff are the brothers of shims, the older brother and they're made for interfaces and more testable code. Right. 
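The binding trick just described, sketched in code. The CustomCollection and ClassLibrary1.Fakes names are assumptions, and the exact Bind overload Fakes generates depends on the interfaces your type implements, so the cast below is a guess rather than a guaranteed signature:

```csharp
using (ShimsContext.Create())
{
    var target = new CustomCollection();   // the custom ICollection<int> whose members all throw

    // Wrap it in its shim (generated into a .Fakes sub-namespace of your own assembly)
    // and duck-type the interface implementation of a plain array onto it.
    var shim = new ClassLibrary1.Fakes.ShimCustomCollection(target);
    shim.Bind((System.Collections.Generic.ICollection<int>)new[] { 1, 2, 3, 4 });

    foreach (var item in target)
        System.Console.WriteLine(item);    // 1 2 3 4 -- the array's members run, not the throwing ones
}
```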
You can see stuff that's really thin dispatcher. We'll look at the source code of those guys. All right. So going to here and let's do another example. So we have eye account, classic boring example and then has a balance balance should be negative. Let's have this code here that checks that. So I receive some account here. And if the balance is negative, I throw right. No money. All right. So price straightforward. I'm going to build so that the stubs and shims are regenerated. They're regenerated on build. And now there should be a stub I account. So let's see. It's called this accounts. And then in fact, I don't even need the shim context right now. That's something. Okay. And then it's going to be stub I account. All right. Now it's the same idea. I want to replace balance. So we look for balance, get. Balance, get. Same idea. I can say minus one. So I'm testing the case where we blow up and run this. So I'm going to call class one, your 2K account. Everybody's happy. And then boom, we got the exception here. No money. So we have the same experience, the same look and feel from shims. But for interfaces as well. And also these debugable code. In fact, very, very efficient. The cool part with stubs is you can sneak into the code and look at how they're implemented. So what happens is when you build, we go and we look at your assembly and we generate a big C sharp file. Sometimes we compile it and add the reference. But the file stays on disk. You can actually go and take a peek. If you go into obj, debug, fakes, you'll see there's those short names. Whoops. That wasn't expected. There's those short names to keep the path kind of short to avoid this little bug there. And I'm looking for class library. Refresh this. I know why M is there. And then there's this f.cs. And that's basically the whole source code of the stubs. Right there. So I can look for stub I accounts. And if we squeeze into balance. Okay. So let's see. So this is the one we've used. It's a field. Look at here. It's a field that's funk event. So that's the field you set. Now the implementation of balance is mostly interesting. It starts here. I travel here. There you go. No. Control only. All right. Starts here. This is an interesting bid. We use fully qualified namespace. So it gets hard to get on the same screen. There we go. So you grab the field value. If that delegate is not null, you call it and you return that. That's a detour. That's what's happening in the shims as well. But it's hard to look at it because it's all happening at runtime. There's no source code that's dumped. You'd have to go into debugger and windbag and look at this. But that's the core idea under stubs. So stubs are fun. You can actually look into them. All right. And in my file go. Squeeze there. All right. So we've shown stubs in action, which are thin dispatchers. So there's been a lot of requests for people to be able to build more frameworks on top of stubs. One of them was to be able to record the calls that were made on the stubs. So we've added that in the RC time frame. So now what you can do on your stub is to create an observer. And you can create your own observer. We ship one observer. That thing is going to record every call and every argument passed in the stub. We'll see why this is useful. So I'm going to say that the observer is this guy. Right. And then when the test is done, I can actually get all the calls. This is an array of calls. I'm just going to grab the first one. And we're going to put a break point here. 
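Roughly, the stub test plus the observer hook being set up here look like this. The ClassLibrary1.Fakes namespace, the Year2KAccount method name and the exception type are assumptions, and the observer members shown (InstanceObserver, GetCalls, StubbedMethod) are recalled from the RC-era documentation and may differ slightly:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes.Stubs;   // StubObserver
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AccountTests
{
    [TestMethod]
    public void NegativeBalanceThrows()
    {
        // Stub of the IAccount interface: BalanceGet is just a delegate you assign.
        var account = new ClassLibrary1.Fakes.StubIAccount
        {
            BalanceGet = () => -1
        };

        // Optional: attach the shipped observer so every call on the stub is recorded.
        var observer = new StubObserver();
        account.InstanceObserver = observer;

        bool threw = false;
        try { new Class1().Year2KAccount(account); }   // method under test, name assumed
        catch (Exception) { threw = true; }            // the demo just throws "no money"
        Assert.IsTrue(threw, "expected the 'no money' exception");

        // Each recorded call exposes the stubbed method and its arguments.
        var call = observer.GetCalls()[0];
        Assert.AreEqual("get_Balance", call.StubbedMethod.Name);
    }
}
```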
So you have the ability to record every method call being made on your services. And people want to use that to build verification frameworks where you can, you know, verify that this thing was called and so forth. So it's a hook for mock framework writers. And if I run this, it should go on. And I guess maybe I should make it pass. We'll never hit that place. When the debugger is done, we're going to give some money to that user. All right. It's building now. Should have started debugging. All right. Let's go. Debug. Debug. And it's going to break on the call. We're going to look at the call. So what you should see is basically one call. And that call gives you the stub, so the actual stub instance; it gives you what was stubbed, so get_Balance; it gives you the type; and it would give you the arguments if there were any arguments. So you have full logging there. You can do all kinds of cool stuff there. It's all efficiently stored for you. You can write your own observer if you want to. If you're into writing mock frameworks, this might interest you. All right. So I'm actually early. So we might have the time to actually dig deeper if you have questions. Do some more cool demos. So I want to recap what we've seen. We have a framework that allows you to replace any .NET method with a delegate. It comes with two flavors. Stubs. Stubs are very lightweight. You should use them to test your code. It's a good smell. If you're using stubs, that means you have testable code. You have interfaces. You have nice componentization. It's a good sign. Shims. Shims is to deal with the third party mess, the legacy code. The mess that the previous guy left or, you know, the framework you're using, whatever. It's basically considered as a smell. The more shims you have in your test cases, the harder they are going to be to maintain and so forth. We've seen examples. And this is funny. We've seen an example of people back in the Moles time frame saying, well, since we've been using Moles, our code coverage went down to zero. And when you look at the test cases, they're actually isolated from everything. And they weren't even calling their code base anymore. So, of course, code coverage was going down. So, this is kind of the rule of thumb if you're looking for a decision point between stubs and shims. Stubs should be for your own code base. You know, write testable code from the start. Make sure it's, you know, it can scale. If you have to deal with legacy code that you can't touch, and everybody has that, you have a great tool that's shims. If you have a framework that's really hard to work with, and we ship a bunch of them, then shims can help you too. Right. So, at this point, I think I have questions. I can take questions from the audience. I don't know if you have to take a mic. So, I don't know, everybody, I will repeat the question. Yeah. All right. So, the question is, in which SKU will you get Fakes? Right. Today, I was told that it will come in Ultimate and only Ultimate. If you look at the feature matrix in the Visual Studio 2012 RC, Fakes is in Ultimate. Right. Great question. Forgot to mention that little detail. Any other question out there? All right. We have 10 minutes to dig deep and maybe show you the guts of shims and see how those guys work. Right. So, what we've shown you is a nice typed layer. Now, that typed layer goes into an untyped layer, a runtime, that basically takes MethodInfos and delegates.
And then that goes into an instrumented layer, basically the instrumentation that's implemented as a profiler. Now, it's pretty cool to look at the guts of shims. So, if we come back to our example where you're using DateTime. So we had something like this. Whoops. Stop this. So, we had ShimDateTime.NowGet equals something. Right. So, what's happening there? So, what we can do is actually take a look at NowGet and see what this stuff is doing. So, I'm going to go into my file here and look at the m one, which is mscorlib. The mscorlib one is a bigger file. All right. And I'm looking for ShimDateTime. Whoops. And now, NowGet. Okay. So, this is the setter. So, it receives a delegate and it passes all kinds of information to this runtime. And it does a "set shim public static", so we know it's a public static method, and then it gives basically the whole signature so that we can do reflection to recover it. Right. And then we can look at what this runtime does. So, think about this runtime as an untyped layer. An undocumented layer, I must say. That's basically your dictionary: it's receiving all these pairs of instance and method info and it's going to do the dispatching. Right. So, we can actually take a look at this guy. Where is it? Reference. So, it lives in this assembly. And if I look into, squeeze that up, Shims, ShimRuntime. Right. So, it has all kinds of good stuff in there. It has all these methods that are used internally. These are the methods that, whenever you do a call into the typed layer of the shim, you're going to end up calling. Right. Now, there are other methods. These are the methods that create uninitialized instances. In fact, we use a method from the remoting services to do that. And then most importantly, where is it? There's the binding. And then this is the lookup. Given an object and a method base, give me the shim or not. So, ultimately this is just a big lookup table, you know, that's thread safe and so forth. But it's untyped. And then it goes into the unmanaged layer and so forth. So, Fakes is just a layer on top of that. There's no magic in there. Any other question? All right. Another goodness. So, when I was working with the Directory type, I had to shim one method. Maybe I got the overload wrong. So, what I can do is I can say, all the methods from Directory, treat them as not implemented. So, I can say at once this entire type is off limits. Remove all the methods. Now, I haven't prepared this demo, so we'll see whether the feature works, I guess. So, what's going to happen now is instead of my machine stopping to work, it should actually throw a NotImplementedException because I told the system Directory is not okay. It's still running. Is it running? I didn't call it, cheating. All right. Yep. I need a context. All right. And here we go. Boom. Machine is still alive. Right? So, you have this concept of behavior. The default behavior is to go through the original method. But you can actually change that. So, what happens whenever we enter a method, we ask two questions. First, is there a delegate explicitly given by the user? And if there's one, we call that. But if there's not, we ask an interface called the behavior. It says, hey, give us a delegate for this method. And what you can do there is you can actually code generate stuff on the fly and put it into your system. So, what we do is we code generate a method that throws an exception and put it in there on the fly when it's demanded. We built that for Pex. This was, of course, built for Pex.
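The "entire type off limits" trick shown a moment ago, sketched out. The generated shim types expose this through their behavior hook; the BehaveAsNotImplemented call below reflects the RC-era API as I recall it, so the exact member name may differ:

```csharp
using (ShimsContext.Create())
{
    // Declare the whole Directory type off limits: any member that has not been
    // explicitly shimmed now throws NotImplementedException instead of running.
    System.IO.Fakes.ShimDirectory.BehaveAsNotImplemented();

    System.IO.Directory.Delete(@"c:\", true);   // -> NotImplementedException; the machine survives
}
```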
So, you would call a method. We just replace it with something that returns random values or, you know, values with our generation tool. But it's very useful. You can actually forbid an entire type. So, for example, if you're working with SharePoint and you want to trap SP context, you're just going to say that SP context is not implemented. And deep down in the code where somebody is using it, boom, you'll know where to start to isolate your code. Right? Now, I'm almost done. So, I'm going to wrap this up. If you have questions about Fakes, you can reach me. Reach also Peter Provost, who's a, and the team, they're very responsive, you know, very interested about the community answer, coming response to that. There's an MSDN page which explains all about the Fakes. And I'm basically done. If anybody has a question, go ahead. Otherwise, you're relieved from your functions and can go ahead to the next talk. And thanks for attending. Yeah, and don't forget. Yeah. You know, there's this evaluation when you leave the room, right? Thank you very much. And I'll be around until Friday if you have questions. Feel free to come by. I have a talk tomorrow on gamifying testing, how we turn testing into an education tool that's going to change the world. And I have another talk tomorrow which talks about how you're going to program apps on your phone for your phone. Come to this talk with your phone, leave your laptop at home, and write apps together. I guarantee it.
Replace any .NET method with your own delegate! “Fakes” is a new framework in Visual Studio 11 for test stubs and detours in .NET. Fakes may be used to detour any .NET method, even in tough situations like static methods, non-virtual methods, private methods, constructors, etc… If you’ve been a user of Moles, Fakes is a lot like Moles but different so come and learn about the changes coming ahead too.
10.5446/51132 (DOI)
All right, I guess we are going to get started. So, yeah, okay. Hello everyone. I'm very happy to be here to talk to you about design. And for those of you who saw some of my previous talks, you might be surprised that we are not going to write a single line of code today. We are going to write a lot of code on Friday. But today we really just talk about design, more from a history perspective and from an art perspective. And I think it's interesting to know where the design comes from. And Metro is taking on such a big relevance in Microsoft nowadays, in multiple properties, that it's probably a good thing that we understand where it comes from and what is the thought behind it. So my name is Laurent Bugnion. I work for IdentityMine. We are a user experience firm located in Seattle. Myself, I'm in Zurich in Switzerland. And we work on a number of platforms, from WPF, with which we started, Silverlight, of course, Windows Phone 7, Windows 8, Xbox. And we do a lot of applications for the Kinect nowadays. So it's quite exciting. We of course also do some work for other platforms such as iOS and Android. However, I would say that our heart lies in XAML, mostly because it has enabled a much closer workflow with designers than anything we had before. And that's really what we love to do. We have designers in-house. Myself, I'm not a designer. I'm what we call an integrator or an interaction developer or whatever you want to call that. And with Metro, it's an interesting trend because on one hand, it's a little bit of a simplified design. So it is a little bit stricter in terms of guidelines than what we did before. However, there is also an encouragement more and more to use Metro as a basis but to go a little bit wilder, to go a little bit further and to push it to the limits. And this is what you do when you work with professional designers. So in fact, I think what we can say is that the trend is really to give more and more importance to professional designers in UI development. And if we start, maybe it's interesting to define the terms. Microsoft talks a lot about Metro nowadays, and especially for two very distinct and different properties, I would say. The first one is the Metro design language. This is what we are going to talk about today. And this is really a design language which can be applied to any platform, basically. There is no technology behind that. For example, they use it in the Zune player. We are going to see some screenshots later. This is where it actually originated. Windows Phone is using it. Xbox is using it as well as Windows 8. Now of course, Microsoft likes to confuse things sometimes a little bit, especially with names. And they talk about Metro style applications. And Metro style applications are a special type of application running on Windows 8. And you are going to hear, or you probably heard already, quite a lot about Windows 8 applications during this session. And unfortunately, there is a confusion because a Metro style application doesn't have to be developed with the Metro design language. And similarly, the Metro design language doesn't have to be used in a Metro style application. It can be used in a Silverlight application or whatever application. I mean, you can do an iOS application with Metro if you want. I probably wouldn't recommend it because this is not what the user is expecting, right? However, be aware of this confusion. This is why I don't really like to talk so much about Metro style applications.
I prefer to talk about immersive applications or tailored applications, which are two terms that you find also in the literature. That being said, let's talk a little bit about history and where does Metro come from. There are a few main inspirations for Metro. And one very big inspiration is the Bauhaus movement. Bauhaus is a design movement that originated in Germany in the 1920s. And you have to understand what kind of mood the people were in that time. You might think that it was a very sad mood because of everything that had just happened. But actually, it's exactly the contrary, right? We talk about the crazy 20s. I'm not sure if anybody is following a TV show called Boardwalk Empire. It's a great show. I can really recommend it if you like this kind of historic show. It happens in the 1920s in Atlantic City. And you can exactly see that, right? The war was ended and people were sure that it was the last of the last. Everybody knows that it was not the case, but they didn't, right? Most of the young people had died either from the war or from influenza, and that was over. So there was this feeling of renewal, right? And also interestingly, the 1920s were also the time where industry was really taking over handcraft. Mostly because of the war as well, right? Because the war is a great way to boost industry. You have to invent new things. It's a terrible way to boost industry, but the fact is, right, that most inventions, I mean, many inventions are developed during that time. So suddenly, handcraft was taking less importance, and industry was taking more importance, and you can find all those trends into the Bauhaus movement. This is a movement which proclaims that you should strip everything which is not necessary. So if you have a chair, this is something that you sit on, but you don't have to have all the frills and all the decorations around the chair. Keep the chair simple. Make it do one job, but do it well, okay? Bauhaus was applied to architecture. In fact, the founder was an architect. The architecture came a little bit later in the picture. He really started with industrial design and graphic design. So those are the three areas that Bauhaus was concentrating on. And this is an example of a Bauhaus architecture. It used to be a factory. This is now the seat of the Bauhaus Foundation in Germany in Dessau. And you can see that it is very functional, right? So no decorations. The building is doing what it is supposed to do. Now this one is interesting because it's from 1923, and you can find in that ad quite a lot of principles that Metro is recommending nowadays, okay? So you have a plain color, but vibrant. You don't have any gradients, any shadows, any icons. You have just typography, okay? And you have really the minimum information that you need in such an ad. You have Bauhaus, what it is, right? What it does. You have Ausstellung, which in German means exposition, and you have the date, and that's it. So this is really stripped to the minimum, okay? In terms of industrial design, again, the same principle apply. This typewriter is not a coffee machine. You cannot mistake it, okay? It doesn't do everything. It just types. It just allows you to type. However, it has this kind of beauty in the simplicity of the design. Another example of typography here, and again, the same principles, right? Some very vibrant colors, this orange and this blue, which contradict each other, okay? And again, right? Very minimal decoration. There is just this lamp in the middle. 
I'm not quite sure why, but it's here, and nothing else. The rest is just text and information. This one is one that I like a lot because, again, right, they are very efficient. You can stack them. You can put stuff on it. You can probably sit on them. But they don't do anything else in what they advertise. Now, in terms of design inspiration, another big design inspiration is actually Swiss, and you can attend because you probably know the answer already, but anybody has an idea. Why is Switzerland known for design? Nobody? Yeah? Exactly. Swiss typography, right? So Swiss typography was developed around the 1950s, and we all heard about Helvetica, which is kind of the most known figure of this particular movement. But basically, this is interesting because 30 years later, after Bauhaus, the same principles were taken again into the Swiss typography movement. So you have this idea of form follows function, meaning that if you have to fulfill a function with a letter, with a character, with a type, you don't add any decoration. You just do what it does, right? And a lot of research is put into these things. So designing a form takes a lot of time. The readability is very important, and it's, in fact, even more important for us who do applications which run on screens. We all know that when you go out with a phone or with any backlit screen, really, it's very difficult to read, right? So the readability of the font is extremely important. And the choice of the font is really very decisive in that. There are studies which show that if you choose the right type of font, you can speed up the readability of a page by up to 40%, depending what you choose, right? So that's really very relevant. Swiss typography is using some serif fonts. I'm going to show you in just a moment what some serif means. Serifs are small decorations in the letter, and here we strip them, we remove them. And also they use a lot of grids, a lot of rules, a lot of conventions. It's a very balanced movement. So there is this ID here of simplicity, harmony, and objectivity. Here is an example of a Swiss typography letter. And again, right, you see the grid. Grids are very important in metro, in design in general, but in metro in particular. And there is this balance and this research. Here those ads are taken from a Helvetica website, and they are all made with Helvetica. And again, you can see that the same principle supply. Now this one is from 59, so it's 36 years after the ads that I showed you before, and yet the same principles are used, right? You have this vibrant color. It's actually by chance here the same color. You have the simple text and no icons. It's just information. This one I like very much because without decorating too much, they still manage to give this impression of movement. And you really have just text and those white bars which are giving, right, this impression of movement. So even without decorating too much, without putting icons or shadows or whatever, you can already entice the reader, okay? This one is funny because it's an ad for a detergent or rather an adity for a detergent. I'm not sure exactly what it does. I'm not sure nowadays you could really sell anything like that because people are too lazy to read nowadays. So they expect to see a big picture showing what is a product, right? Here you don't have any picture showing you what the product is. The whole message is into the typography. 
Again, I'm not sure it would work so well today, but I thought it was interesting to show that just to show you what was done. Now in Helvetica, you can also play or actually with any font, right? You can play with the font weight to say, I love you or to say, I hate you. And what I want to say with that is that pay attention to those details when you design, especially when if you have text which is quite small, if you put too much decoration on it, it can make the text more difficult to read, especially underlines are very difficult nowadays and we try to shy away from underlines more and more. Instead, if you want to express something particular, you can just make, for example, the font bolder or maybe a little bit bigger depending where you are or maybe you can even play with colors but being reasonable about it and maybe not wanting to go overboard. Again, think that the readability is the most important character, is a very important character here. So I talked to you about the Sonserif and here is an example of a Serif font and the Serif are those small lines that you can see at the end of the bars. And they used to have a function when the characters were made in lead, right, when they were really physical because they were helping with the structure of the character. But of course, now that we have everything on the computer, we don't actually physically need them. So you could say that they are a decoration which is not absolutely necessary to the intent of the letter. So now we have the Sonserif fonts which strip those, which remove them, okay? Sego UI is, or the Sego font family in general, is a font which was developed by Microsoft and developing a font is quite a big work. There is actually an interview of the guy who worked on the Sego font and interestingly, he's also the guy who designed the droid font for Android later. So he's an independent font developer and the interview is very interesting. So if you manage to find it, I will see if I can blog about that later. And Sego has multiple declination. So in that case, we use Sego UI which is a font for Windows 8. You also have a special version of Sego for the Windows phone. It's called Sego WP, Sego Windows phone. And there are a few differences because you don't read a phone screen the same than you read a large Windows 8 tablet screen, right? So there are a few constraints that you need to take in considerations. There is also a specific version of that for Xbox. It's called Sego Xbox, no surprise. There is one, for example, for the Media Center because Media Center runs on TV and TV, well, it's a little bit better now with HD, but standard definition TV have a very crappy resolution. It's like 640 times 480 if I'm not mistaken. So you have, of course, some specific challenges that you need to take in account. So the Sego font is quite well adapted to computer screens. And Sego font comes in multiple font weights. So you have bold and light, and you even have semi-bold and semi-light, which gives you quite a range of feelings that you can express thanks to that. And that can make quite a big difference into the character of your application, into what you try to express there. Another inspiration for the Metro Design Language is the motion design. And we talk also of kinetic typography sometimes. I'm going to show you two small videos which illustrate that. The first video is a video by Saul Bass. 
Saul Bass was a designer in the 1960s who worked on many intro titles for movies, and especially for Alfred Hitchcock movies. And he used motion design and kinetic typography quite a lot. So we're going to see an example. And the other one, I'm going to let you guess what it is. I'm sure you already heard about it. Actually, it's in the title of my bar, so nobody looked at it. Anyway, I'm going to show the first one. So let me go full screen. Here we go. OK. So this one is for the movie North by Northwest. And there are a few characteristics which are interesting. The first thing is there is this green background. It's quite an unusual color, I would say, for a movie. And then you have this grid and the text which keeps moving. There is no text which stays still for more than just a few seconds. Now here they play with the typography a little bit because there is those arrows which suggest right directions. It's called North by Northwest, so it illustrates. And here you have the transition into the picture, and now we understand what the grid actually was. It was an array of windows. And that particular technique of having text shown in front of a video or of a picture is something that we do quite a lot in metro applications because it can help you give a very interesting feeling to your application if you do these kind of things. But of course, it means that readability has to be taken in account because if you put a video in the background and text in front, it can be very hard to read. So usually what we do is that we dim the video, the opacity of the video to something like 30, 40%. We put a black background behind it, and then we have a good readability even though you have a video and movement. Now the next one is much more recent, but same principle supply. Yeah, I tried to filter the first few sentences. I didn't quite manage this time. Anyway, that's of course the intro to Pulp Fiction. And again, you have the same kind of things, right? You have this motion. It keeps moving. And the typography is very important. They really use typography a lot here. You have those simple colors. They don't use pictures. And when they have some pictures, they transform the picture into drawing. We'll see an example of that in just a moment. That part is funny because you don't see the name anymore. You kind of guess it, which I think is a very, very cool effect. The title Pulp Fiction comes from the 1930s when they were selling those magazines which were printed on very bad paper and because they were cheap, right? So when you were reading them, they were turning into pulp after a while. So that's why it's called Pulp Fiction. And here they use the typography to kind of suggest that. You have this bleeding of the orange, right? Which is kind of suggesting that it is a bad print. Again some bold colors, right? And here what I was saying before, even when they put pictures, they transform them. So it's not really a picture. It's just a drawing, right? And notice that the text moves all the time. So really no text stays still for more than just a few seconds. And they play with typography quite a lot, right? Here you have a font which is used. But then suddenly they come with Uma Turman with a different font because they want to signify that she's very important to the movie which she actually is. So this is an interesting thing. There are a few more examples of this kind of motion design and kinetic typography. 
For example, another one which is quite well known is the intro to Catch Me If You Can, which is even more recent and which is a small masterwork. So if you have the occasion to find it on YouTube, it's really nice to see. So all that is quite important into Metro. And in fact, motion for us is also very important because it's not a prerecording motion, but it is a motion which is provoked by the user. So if you use motion and animations wisely, you can actually help your user to understand what the application is doing, right? So an example of that is I'm going to lock my screen. Here we go. And here, there is, those are my daughters, by the way. Here there is, you know, in every presentation you do, you should show a picture of your children. I'm not sure who said that, but I heard that in the presentation once. I think it's true. Anyway, if you do that on a slate and you touch the screen, this is not a touch screen, unfortunately, but if you touch the screen, the screen is going to move immediately a little bit up because of the movement of your finger. So basically by this simple animation, you're helping your users to understand what he needs to do. He needs to move up, okay? If he tries to move sideways, nothing is going to happen. On the phone, it's the same. If you take, this is a Windows phone that the Nokia Lumia 900, and if I switch it off and on again, if I try to press here, you see there is this small animation. It's very, very maybe difficult to see in the back, but there is this small bounce of the protection here. And by this bounce, what they say is that they tell me what I need to do. I need to move it up, and then I understand, okay? So animations are very important in an application to help your users to understand what they need to do. And in fact, you should really spend a lot of your design budget on animations because animations are really hard to get right. It's very easy to spend a lot, lot, lot of hours on a 300 millisecond animation just to make it right. Now in terms of Metro, coming back to that, Metro, of course, like the name shows, has also a lot of inspiration in the transportation signage. And here we have something which every traveler knows, right? It's what you get in an airport, a railway station, or if you go to an underground station. Even if you don't know the language of the place where you go, you immediately find your way because this is a convention. People know how to use those simple signage, simple icons. The icons are very simplified. They don't actually represent real life luggage, but everybody knows what they mean anyway. And if you're running in an airport, it can be very useful to have this recognition immediately. This is universal language. So this is here an example in Japan. And this is an example in Wall Street in New York. And yet you have this similar principles used. So the colors are very important, but colors are used to express an intent. They are really useful. There is no decoration in the way. Here the color signifies the number of the line. So it's very easy to find your way and which line you take, even if you don't speak Japanese. The five design principles are something that the various teams, various designer teams at Microsoft have put together to kind of give a guideline to developers and to designers who work with Metro. Those are principles. They are not rules. So you are free to use them or not. 
I recommend using them because if you use them, if you use everything that we talk about now, you kind of give to your users this feeling of, okay, I'm at home and I know what I'm going to do. If I'm using a Windows 8 slate, I know what I expect. If I'm using an iOS slate, I know what I expect. And this is something else. The first one is a pride in craftsmanship. And this one means basically be proud of what you do. Don't hesitate to push your application to the limit of what you do. Don't hesitate to spend many hours on it. And I think that I don't have to explain that to people who go to conferences. I have to explain that rather to people who don't go to conferences because, you know, when you go to a conference, it means that you're pushing yourself a little bit further already. So you have this pride into what you do. Think of quality always. And remember that if you have a very good design, but you don't have performance, it's going to be awful, right? And when you design for mobile, it's even more true. Mobile platforms can have some design, some performance challenges sometimes depending what you do. So maybe it happened to us already before that we had maybe to tone down the design a little bit and maybe be a little bit more reasonable in order to have a good performance. Performance is nothing without features. So of course, if you have a blank application, an application which does nothing, the perf is great, but it's not very nice to use. And think of the balance of the symmetry, right? All those principles that I mentioned coming from Barhouse from the Swiss typography apply in Metro as well. So don't hesitate to spend time on alignments. Differences of, you know, two, three pixels on the side can make a big difference into the perception of the user if they see that your texts are not aligned. It's really worth spending time on that and giving your users a very polished application. And fluid is the sentence that you always hear when you hear a Windows 8 presentation. So I try to avoid using it, but here I have to because it's really the name of the principle. And here we really talk about motion. Remember that designing for touch is different. So it's really a different experience, right? A mouse is very precise when you point with it. A finger is not, however. So when you design for touch, you should think of those things. Do your areas big enough so that the finger can reach it, right? Motion should be fast and should be responsive immediately because, again, when you use motion, when you use animations, you are helping your users to understand what your application is doing. So if the user touches the screen and your application takes even just a few milliseconds to start, it's already too much because he doesn't understand that there is this one-to-one relationship between the gesture and the motion, right? So pay attention to those things. Nobody likes to wait. So when you start your application, please be fast until you present the user with something useful. And then after that, you can do the rest of your preloading, of your image loading, of your image rendering. If you check many Windows phone applications, for example, when they have pictures, they load the picture in the background thread so that when you see, for example, if we take Twitter as an example, you will see the tweet. So this useful information is there already. And then the picture is loaded once the tweet is rendered because rendering the picture takes time. 
But while this is done, the user can do something and actually read the tweet. This one is an image I found which is illustrating something that we need to think about. So when you design for Windows 8, you design for a number of platforms. And it's not just laptops anymore, standard PCs, but also things that you hold in your hands like slates, okay? And when you do that, you have to think of the people who are using your slate. Now this here is taken, there was a research by the US Army of all places because they have a wide range of different people with different stature, with different body shapes and all that. And here you see the statistic representation of the length of sums. So when you think of a slate that people use normally with two hands like that and then they touch the controls, right? They need to be able to reach those controls. So it would be a bad idea to put those controls in the red area because most people won't be able to reach there while the green area is reasonably safe. So those things are important, right? If you don't think of those things, it can be quite hard. And nowadays we develop for a number of input devices. If I take the Xbox application as an example, we need to develop there for voice. So you can voice control your Kinect for the Kinect. So by waving at your Kinect, you can control your app. You have also remote controls. You have game pads and all that needs to be taken into account when you do your applications. So you will spend more time on each screen because of that. It's actually a good thing because it makes you think about your application. And you know there is a very famous design book called Don't Make Me Think, but that's true for your users. It's not true for you. As developers, you have to think of that, okay? So we need to think more so that our users don't have to think. Now authentically digital is an interesting principle. And I'm not at all wanting to do any judgment here, right? So don't take that as a judgment. But Apple is doing a lot of what is called skeuomorphism, which is basically trying to represent the reality into the computer. And they do that, for example, here in iBooks with wood, a lot of wood, a lot of those shelves, which don't actually physically exist, right? If you touch your computer screen, it's still flat. And they do also a lot of that with faux leather, you know, stitched leather or metal or things like that. Skeuomorphism is, you know, it's all good. I mean, some people like that. It's not a problem. But Metro is going to a totally other direction. And this is called what they call authentically digital. So basically, don't try to reproduce nature on your computer because your computer is not nature. Here we have the Kindle app. And we see immediately the contrast, right? The background is quite plain. You have a lot of typography. So you don't have buttons. For example, if you click on cloud or downloaded, you're going to switch. Those are kind of tabs if you want. However, they don't look like buttons. It's just typography. The color of the text is playing a role. So you see the selected one is one which is orange. And of course, the content is king. So the content, in that case, the books are what the user wants to use. So this is a very different approach to design. However, there are a few compromises. Like for example, the books have some shadows. And the shadows are not really Metro, but it's fine. Because the designer thought it would add something to the image. 
It would make it maybe a little bit more interesting or maybe a little bit more attractive or whatever. The shadow doesn't have a function. But again, what I'm telling you, those are principles and guidelines. They are not rules. So you can really deviate from that. It's not a problem. No application is ever going to be rejected because it doesn't follow the Metro design guidelines. Another example, the icons in iOS. And again, we see something which is quite lifelike, like the compass is a good example. This is a real analog compass. But in fact, most people don't use that. I didn't see one of those analog compass in many years. I used to use that when I was a kid and I was going camping. My kids probably never saw one in their life. They also compass, but they also GPS and digital stuff. On the other hand, the icons on Metro. Now those ones are definitely not my favorite. I think they went a little bit overboard with the simplification here. But it's just to show the contrast. And here, for example, the headset, it's clear that it's not a real headset. However, it cannot be mistaken. It is a headset. So it has something to do with sound, obviously. Spend time on icons. Icon are really important. It is again, helping your users to know what they need to do without you having to translate in different languages. However, things that you have to localize your icons anyway, because some things in our culture can be offensive in other cultures, et cetera, et cetera. The fourth design principle is called do more with less. And here, the idea is don't try to solve all the words problem into your application. And it's kind of interesting coming from Microsoft because it kind of goes against quite a lot of things they have done in the past. If you take Microsoft Word, the only thing it doesn't do is coffee. But it does really everything else. And here, we have really something which is different. We try to shy away from that. Instead, the guideline with those new Windows 8 applications is rather if you want to solve multiple problems, then develop multiple applications and have each application solve one problem. In the previous session, Billy Hollis was talking about the contracts a bit. And I think that Jonas is going to talk about that in this room after my session too. Contracts are a new way to share information between applications. And it's really very easy to share that information now. So you can publish one application saying, hey, I can do something with a specific type of content. And the other application says, hey, I have this particular type of content to offer. Cool, let's collaborate. So doing that is an interesting principle because it kind of tells you to collaborate with your competition sometimes and not to reinvent the wheel, which are actually good things but which are sometimes difficult for an enterprise manager to accept. But here, we are really at a point in time where Windows 8 is starting. We need a lot of applications to start the platform to make it popular, et cetera. And this is a good time to actually collaborate with people. And maybe if you find that there is an application on the market which is very good at sharing with Facebook, for example, well, maybe it means that you don't have to implement that particular feature in your application, but you can collaborate with that one. Windows 1 is a little bit bound with the previous one, again, collaborate. There are those contracts. There are those things which help you. And try to be consistent. 
It's really recommended to use things that people know because like this, they will feel at home when they use your app. And they won't have to go through a learning curve again. Good, so now I would like to show you a little bit some applications of Metro in the world because Metro is not new with Windows 8. It certainly became more popular now that Windows 8 is using it. But we already used Metro a lot in the past. And this one is the Zune music player and video player on PC. And Zune is where Metro was initially developed. And here you see all the principles that I mentioned before, right? You have a few icons, but they are very simple. They don't come in the way. You have a lot of usage of typography. This is another example of the Zune media player. And you have the content in the center which is king. In fact, you can see that the content is more or less the only source of color here. So everything is drawing your eyes to the content. I use the Zune media player quite a lot. And I have to say I really like it in terms of user experience. It has been sold very well. So I can only recommend to try it out. This is another view of Zune here. Another place where Metro is used nowadays is the new Xbox dashboard and the new Xbox applications which was released at the end of last year in December. And here again we have the Metro look and feel. There are a few differences here because this is typically running on the TV. So one big difference that we see is that there is really a lot of space. And space is critical because TV screens tend to do some weird stuff to your apps like cutting the corners and making it smaller or bigger for some reasons and all that. Also another thing which is important here is that this is typically used with a Kinect. So Kinect needs a lot of space as a hit target because it's a little bit difficult sometimes to reach a very precise place. And also you have to wait for a moment to activate the surface. So while you are waiting your hand tends to move a little bit. So you have to have big surfaces. And Metro in fact fits quite well with that. Because of the size of the phones, because of the size of the spaces and all that, it actually works well with Kinect. Motion is very important also in Xbox. I'm going to show you a video and for that video I don't need any sound. So just to show you here, for those of you who never saw the new dashboard, so you see the animations here tend to be faster than for example on Windows Phone or on Windows 8. And part of the reason is that the user here is using a Kinect to control that. And because the Kinect is a little bit less precise, you need to immediately show that you have feedback. Because if you don't do that, then the user is going to be frustrated and won't understand why is my hand not actually doing something. So again, I'm talking again about animations here, but by thinking of that, by timing your animations right and by using the correct accelerations, you can actually give a very different mood, a very different feeling to your users. So spend time on animations, it's really worth it. Now of course, another place where Metro became quite popular is Windows Phone. And here I'm going to show you a few examples. Those are all applications that I didn't see my develop. This one is called SBB Mobile and it was developed for the Swiss public transportation system. It's an app that you can use to see connections and buy your tickets. 
And here we see the connection screens and there is quite a lot of information on those screens. Also, there is a problem of the sun, right? Very often when you use that app, you're outside in the street and maybe even running. So you need to pay a lot of attention to presenting the content without too many decorations, which is why here we stripped most icons. We use... Yeah, there was an interesting discussion around the fonts. You might notice that this is actually not totally Metro here. The font of the title is actually Arial and it's not Segoe. And it's fine. It's just because it is SBB's corporate font. So they wanted to use that font, which we said was fine, no problem at all. However, we didn't use Arial for the small text. For the small text, we use Segoe because Segoe is more readable for small font sizes than Arial is. So those are things that we need to pay attention to. Another thing which is not Metro is the capital C in the title and the period at the end of the title. I'm not sure how many hours exactly we talked about that, but we spent quite some time on that. But anyway, if the client wants that, in the end, it's his app. It's his corporate identity. And this is fine. Metro is flexible enough to actually... I mean, this is unmistakably a Metro app, but there is a corporate identity of SBB in that app. Those are just other examples. When you work with maps, readability can be quite hard because you have a lot of information in the map, especially when you use a satellite view. So the Bing maps in Windows Phone 7 don't use a representation of a pin, which is skeuomorphic. They use a flat representation, which is color-coded. And this way, it's a little bit easier to see. So those are some small details which can make your users a little bit happier. So here, this is the ticket that you show to the conductor in the train. And here, the efficiency is very important. The conductor doesn't have to look for information because he has to spend as little time as possible with you because he has a whole train of people to check. And Switzerland is small. The time between two stations is typically, I don't know, half an hour maximum, right? So they don't have a lot of time. So the design here took us quite a lot of time to get right. Another example is the official Twitter application that we made. And here, it was interesting because it was one of the very, very first Windows Phone applications. So basically, we had no idea how it was working in terms of design. And there were quite a lot of things that were in the beginning when Windows Phone was released, which were a little bit controversial, like the fact that we cut the titles, the fact that we indicate to the user, by showing here a little bit of the next screen, that there is something more, or here, right? This is a horizontal... this is called a panorama, a horizontal list box, or a pivot item also, like here, pivot. And that was a little bit controversial. I think people tend to like it nowadays because it really helps them to understand what they need to do without having an icon like an arrow or whatever, but just by showing part of the next screen, you're telling the user, hey, there is more. He should do something. Those are some other views. Another example is IMDb, that's the Internet Movie Database. And here, we went for something completely different and we used what I showed you before in North by Northwest: there is this image as a background and then text in the foreground.
And by using different images taken from different movies, you can also give a very different spirit to your app. However, remember that the readability is very important, so make sure to tone those images down. The user is not here to see those images, he's here to read the text, okay? But if you add the image, it gives some spirit to your app, so there is always this compromise that you need to take into account. Another example here with the panorama: by playing with typography, you can do some interesting things. The detail screen has less text, more titles, so that the user can find his way quite fast, but it's a transition screen. He's not spending a lot of time here. The next screen, the one with the review, is something where he spends a lot more time, so you can go with a small font, it's fine. Foodspotting is an interesting app. For those of you who don't know Foodspotting, it's a food website where you can go register yourself and then you can rate food. So you take the picture and then you say, hey, I found that this dish here was really great. I use it a lot because I travel a lot and on my phone, I can easily find the next good restaurant. So here what we did is, by playing here with those pictures and changing the font color, you see that you can also change the mood of the app quite a lot. And it's still Metro, it's still using all those principles that we mentioned, but just by changing pictures and changing some simple details, you can really change the mood of your app. So again, Metro, when you read the guidelines, can seem a little bit naked and maybe a little bit unfriendly, but those are just guidelines, so don't hesitate to push it further, and especially by using pictures and videos, maybe, to enhance your app, it can be very interesting. Windows 8 uses Metro quite a lot. I'm sure that you already either played with Windows 8 yourself or at least saw it a few times. I just want to illustrate a few interesting points. The start screen is using what we call live tiles. And if I show you my real start screen, you see that those live tiles are even animated. Like for example here, the weather is actually changing. Or here, the social application is changing based on my Facebook feed, which is always a big bet when you show your live Facebook feed to an audience like that, but I hope that my friends keep it safe. Live tiles are an interesting addition to Windows Phone 7, which now is coming into Windows 8 as well, and that's very, very useful. So really take time to think of your live tiles. This is the entrance door to your application. This is what makes your users want to use your app. I love live tiles on the phone because they allow me, at a glance, to see what's going on. And I have one that I especially like: I'm using the service Tripit when I travel. It's a service where you can forward your email from your travel agent, and then they're going to parse it and they're going to make you a summary. And there is an app for Windows Phone 7 which is showing you your next flight as a live tile. And the next flight is great. The status of the next flight is even more important. Is it okay or is it bad? And if it's bad, it means that immediately, when you're still in the plane, you can check what's going on, you can call your travel agent and change your booking. And when you have something like that and your next plane is canceled, for example, if you're faster than your neighbor, you will get the seat and he won't, right? So that's critical.
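As a rough sketch of how an app keeps a tile like that fresh on Windows 8 (this is not the Tripit code, and the template choice and the flight text are just placeholders), the WinRT notification API lets you push new content onto your tile from code:

```csharp
using Windows.UI.Notifications;   // TileUpdateManager, TileNotification, TileTemplateType

public static class FlightTile
{
    // Pushes one line of text onto the app's wide tile.
    public static void ShowStatus(string status)
    {
        // Grab the XML skeleton for a simple wide text template.
        var tileXml = TileUpdateManager.GetTemplateContent(TileTemplateType.TileWideText03);

        // Fill in the single text field with our placeholder flight status.
        tileXml.GetElementsByTagName("text")[0].InnerText = status;

        // Send it to the start screen tile.
        TileUpdateManager.CreateTileUpdaterForApplication()
                         .Update(new TileNotification(tileXml));
    }
}
```

Calling something like FlightTile.ShowStatus("Next flight: on time, gate B4") from a background task is what makes the tile worth glancing at.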
All the things are quite nice, like for example, the weather. Immediately you have the weather information without even starting the app. And here by planning your lifetiles carefully, you can give to your users the need to press to see more information and entice him to use it like that. The start screen also has what is called charms. So the charms are those five icons here on the side. And the charms are a common way to execute some functionality in your app. For example, the search charm is interesting because it means that your app can implement a search dialogue at a location which every user will immediately know. They won't have to look for it because they know, oh, okay, I used search into Windows 8. I used search into USA Today. I used search into another place. And now I know that if I press on the search button, I'm going to be able to search. By the way, all that, there is a lot of, you know, comments, let's say like that, around the start screen of Windows 8, mostly because it's moving the cheese of the people. It's changing their habits. It's really not bad. If you use it with a mouse and a keyboard, it's actually quite cool, but there are a few shortcuts that are interesting to know, like Win and then the key C is opening the charms bar or Win I brings you immediately to the settings. When you search the apps, this is what it looks like. And you see that you have a lot. I'm going to show you the actual search. So I'm going here to my settings and I'm going to press here the letter Q to start the apps. And now I can filter. I don't even have to go into that screen. I can just type on the Windows button and then start typing, like for example, Kindle and then I find the Kindle application. So it's actually quite fast, quite convenient to use with your keyboard. But the interesting thing here is that it's not only searching the apps. It's also proposing you to search into installed applications, what you have here. Those applications implement what we call the search contract. So they say, hey, I can be searched. Okay. I'm not going to show you code for that, but it's really literally like five lines of code and a change in the manifest of your application to support the search. What it means is that by doing that immediately, you tell your users, okay, you don't have to search, you know, to look into, is it under edit, find, or is it into tool search or whatever. Immediately, they know where to go. So there is this kind of common knowledge. Settings, the same things, right? If I show the charms here, I have the settings. And then if I press on the settings, this is a common UI which all applications can implement. Again, it's not a must, but it's recommended because it gives your user this feeling of familiarity. And then you can here have your setting shortcut. And then after that, jump into your app. I think Jonas is going to show that in the next session. Oh, one thing I wanted to show you since I'm here. So here you see you have a lot of apps, right? It's kind of hard to find your way. And of course, if you have a mouse with a wheel, it's, you know, you can use the scroll wheel to scroll up and down, but still it's kind of hard. So there is something called the semantic zoom, which is pretty cool. It's a way to zoom out of your information and to change the view. And here, if you have a touch screen, you can of course just use a pinch gesture. Here I'm going to click here in the corner. And now I have another view of my apps. They are sorted by letter. 
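Coming back to the search contract for a second, the "five lines of code" mentioned a moment ago look roughly like this sketch: after adding the Search declaration to Package.appxmanifest, the app overrides the search activation handler and routes the query somewhere useful. MainPage stands in here for whatever results page the app actually has; args.QueryText carries what the user typed into the charm.

```csharp
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

sealed partial class App : Application
{
    // Called when the user targets this app from the Search charm.
    protected override void OnSearchActivated(SearchActivatedEventArgs args)
    {
        // Reuse the existing frame if the app is already running.
        var rootFrame = Window.Current.Content as Frame ?? new Frame();

        // Navigate to a page that can show results, passing along the query text.
        rootFrame.Navigate(typeof(MainPage), args.QueryText);

        Window.Current.Content = rootFrame;
        Window.Current.Activate();
    }
}
```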
The semantic zoom itself is super easy to implement. It's just a matter of using the so-called jump viewer control and implementing the templates. It's very easy to do. And it helps your user to dig into the application. For example, here I go to the letter R and immediately I find my apps. There are other applications that you can see using the jump viewer, using the so-called semantic zoom. So I think it's an interesting way to do things here. Where is it here? Okay. When you develop your apps, you have to think. Billy already talked about that in the previous session, but it's important. You have to think that it's not just for one size of screen and it's not just for one way of using your app. For example, here we have the so-called snapped view. The snapped view is here. This is a USA Today application on the side. And here you have the video which is opened in what we call the filled view. So for each application, you can have full screen, snapped or filled. And you need to design for those. If you use the same design for the snapped view, it's going to look weird because the snapped view is a vertical type of scrolling while the full screen is rather a horizontal type of scrolling. So you need to think of that. Also remember that if you design for a slate, users can turn the slate around, right? And then you have a portrait or landscape type of experience. Now this is the same application, but this time the USA Today is filled and the video is snapped. So you see that the template is different. You see the same type of information, but with a very different form. Try to implement a consistent experience, consistent throughout your app. Use the same gestures to do the same things, but also be consistent with other apps. Okay, a pinch gesture is used to zoom. I think everybody knows that now, not even just on that platform, but even on iOS which introduced it. Okay, so think of those things. Try to be consistent because every time you're not consistent, you're forcing your users to think and they don't want that. Okay. For example, a common location for settings, that's a good example. If I search for settings in any Metro app, I'm going to go to the charms and press on the settings button. That makes sense. Try to work with other applications. Try to use contracts. Okay. And think of the live tiles. They're quite important because that's the entry point into your app. How do we do that? And what kind of help do we get from Microsoft to, you know, follow those guidelines, follow those principles? Well, the first thing is a grid. So the grid in Metro is using 20 pixels by 20 pixels. So typically when you align your items, you should align them at 20 or 40, but not at 30 pixels. Okay. In Windows Phone, the magic number is 24 for some reason. So when you align on the side, it should be aligned at 24 pixels from the left margin if you want to be consistent, which is kind of interesting. Those grids all come as soon as you start a new Windows 8 application with a new template; you will get those things immediately. So it's very easy to actually apply that in your app. The title, the start of the content, should be at seven units, 140 pixels, if you want to be consistent. Again, this is a value which is used everywhere in Windows 8, and the baseline for the title is at five units or 100 pixels.
So again, by using the grid in XAML, for example, it's very easy to make your rows with 100 pixels, 40 pixels additional for the beginning of your content and then to fill with the rest. Of course, your applications don't have to look like grid. Any content should start approximately at 140 pixels. On the side, we typically leave six units or 120 pixels. You can have, of course, lists. So by using lists, you can make them dense where here you have just one unit of margin between the items or here you have 1.5 units, which is maybe a little bit more unusual or here 2 units to give, again, a different feeling into your app by changing the density of your lists. So when you start with those things, try to use the template or at least if you don't want to use them, it's fine, but you can always take a look at them so that you see how the Metro design team implemented that. And I'm going to show you that in just a second. And there are those new controls which help you a lot, like the grid view. The grid view is this horizontal scrolling control that you have, like, for example, here. This is a grid view. Grid view, for those of you who come from Civilite or from WPF, it's just like a list box except that they change the templates and the item template. List view is the vertical scrolling control. And the jump viewer is the one that I showed you where you can semantically zoom out to get less information on your data, to get a better view of your data. Those controls are there and they do most of the heavy lifting for you, so it makes sense to use them. And by the way, they exist in HTML as well, so you know that you can develop Windows 8 applications in HTML and JavaScript, and those controls are also available there. Expression blend is really a great tool. I love expression blend, and now when you install Visual Studio, you get expression blend for free. So it really makes sense to use it. There is no excuse anymore. And blend is really helping you to do a lot of this work visually. In my talk on Friday, I'm going to show that, like, how to use blend to its full potential to design Windows 1 applications and also Windows 8 applications with design time data, et cetera. And really my recommendation is ask professionals, right? Professional designers cannot be replaced by a developer who thinks that he knows design a little bit. Okay? I'm not a designer. I don't do design. I talk about it sometimes. But I really go to a professional designer when I want the job to be done right. When you create a new app, this is, for example, what you get. There is this split application. You have two pages, so you can see how the navigation is done and all that. You don't really have to use those templates, but it's a good idea to see how they do that. You also have a grid application with three pages with different level of information. This is blend used here with Windows 8 app. So you see that you can easily design those units that I was mentioning, the 140 on top, the 120 on the side, these kind of things. It's very much easier to do that in a visual manner, especially when you work with text, because text has a baseline issue, so it can be difficult to implement. Blend is also helping you with the... Here you see you can change the view of your app directly in blend without having to run your app. So you can change blend to run the snapped view or the field view or even the portrait orientation. So that's very useful, because otherwise you have to run your app and try it. 
By the way, there is also a simulator, which comes when you install Visual Studio, so you can run your app not directly on your machine, but in the simulator, which allows you to do things like turning, which here doesn't work. If I turn, there is no sensor here which is going to detect that I'm turning the screen. So you can use the simulator to test these kinds of things, or for example, to set your geolocation to a different place to test what the application is doing with it. Blend also allows you to simulate different resolutions, which can also be useful if you don't have the actual physical device to test on. You can test directly in Blend. Font sizes: the whole Windows 8 experience is built with four font sizes. Those are the font sizes. So don't go overboard. Don't use too many font sizes. It's not a good idea because it's confusing the user. Okay? Be consistent with your font sizes. And you can add additional differentiation between your text, for example, by using a different font weight, or like here, by changing the color or the opacity of the text block to make it a little bit less in the face, if I can say it like that. Colors in Metro are bold, right? We talked about that before. They are plain but bold. So it's courageous to make an application with magenta fonts, but it's paying off. Nokia came out with those cyan and magenta phones, which are quite successful. So I think it's paying off. Those are, again, right, those are the official Metro colors. You don't have to use them, but if you do, they give a sense of familiarity to your users, which is a good idea. And the icons: I already mentioned that those are two places where you can download large sets of icons, which are quite metrified. And I'm sure we'll see more very soon. So this is another place where you can use icons for your application bar. The application bar: at this location, you have more than 150 different icons for the application bar, which allow you to make sure that you're using something that your users know. Quick question. Why is the button here on the left and on the right, and there is no button in the middle on the application bar? It's because I use my thumbs sometimes with the application, and I need to be able to reach those buttons, right? So remember that. Voila! That's the gist of the talk. So I think what I would love you to take home is maybe just one thing: how important it is to involve designers in your application, not as an afterthought, but really from the start, because design is taking a place which is more and more relevant in our applications. Especially now with the challenges that we have, the multiple screen sizes, the backlight and sun don't play well together, all these things play a big role. So work with professionals, try to pay attention to that. And if you have questions, I will be happy to answer those. If you don't, I wish you a very good day, and thank you for your attention. Thank you.
The Metro design language is becoming an intrinsic part of the Microsoft user experience. Already, we can find it on Zune (where it all started), Windows Phone 7 (where it was refined and perfected), Xbox, Windows 8, as well as on the Microsoft.com website. In this session, we will dive into the history of this design language, study its characteristics and show tips and tricks to implement Metro user interfaces in Windows Phone and Windows 8.
10.5446/51134 (DOI)
Go ahead and turn it on. Okay, thanks. Did anybody else travel from another country to get here? Other than Scandinavia. Where? Where? From Belgium. Okay, so still from Europe. Nobody else from the US except me? So in Belgium right now, do you have like a real day and a real night? There's like no gray line in between, right? Everybody else is from Norway? This is ridiculous. I experienced this last year and it drove me crazy. I showed this to my crew yesterday at my class. I think I had it here. Yeah, so I took this picture two nights ago. This is 11 o'clock PM. This is wrong on so many levels. I can't even begin to tell you. And remember, I'm on seven hour or six hour jet lag. So on top of jet lag, you throw this shit into the mix. No, but this is not the funniest thing. I want you to take a good look at this picture, okay? Because I'm now going to show you another picture. Same picture, three o'clock in the morning. There's the proof. See that big white thing there? Yeah, that's the moon. Three o'clock in the morning at 11 o'clock at night and they're almost identical pictures. This is insane. No, the alcohol is what helped me last year. Last year, I just stayed drunk the entire time, which is not easy to do considering that a bottle of Sam Adams has 12 freaking dollars in this place. Do you guys are the second most expensive city in the world? The first is Tokyo, second is Oslo. At least that's what I've read. Wow. I'm telling you, we really, in America, we have it so easy, we really don't realize it until we leave the country, but things are so much more expensive in the rest of Europe. It's unbelievable. When to Switzerland, in Switzerland it's just as bad as Norway as far as expense is concerned. A quarter pounder in McDonald's over there costs like $11. It's ridiculous. And their speeding tickets are a percentage of your net worth. Seriously, they want the speeding tickets to hurt, which they really, really do. Who attended my WCF class yesterday? You guys didn't get enough abuse? Wow, it's a pretty good crowd. This thing's pretty full. Thanks for coming in. I guess you want to learn how to take WCF to the next level. Hopefully that's what I'm going to show you. We're going to talk about extensibility today. We're going to talk about a bit of extensibility because the truth is, you'll learn in the first few slides, and I got very few slides, and then I'm going to jump right to the code and stay in the code for the rest of the talk. WCF is extremely extensible. They've done a really, really good job with the architecture on it. So then, if you don't like certain implementations of that architecture, you can plug in your own. I'm going to focus on one area, which I think really opens up the possibility for doing a whole lot. And that is tapping into the calls, meaning I'm going to extend WCF in a specific point where you can monitor exactly what operation was just called upon. Because if you know at the service layer, if you know the operation that has been called, there's a lot that you can actually do. And I'm going to describe more about what it is that I'm going to do and how I'm going to do it in a few minutes. For those of you that don't know me, my name is Miguel Castro. You like my new conference picture? That's cool, huh? Where's Johnny? That's Johnny. Johnny is responsible for giving me that hat. So I got a digress for another two minutes and tell you what happened. Last year I stood on stage and I made the comment that I wanted to get a Viking hat while I was here. 
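To give a rough idea of where that is heading (this is a sketch, not the demo code built later in the talk), WCF's dispatcher will happily hand every call to a parameter inspector, which sees the operation name and arguments on the way in and on the way out:

```csharp
using System;
using System.ServiceModel.Dispatcher;

// Sees every service call at the dispatch side: operation name, inputs and outputs.
public class CallMonitorInspector : IParameterInspector
{
    // Runs just before the service operation executes.
    public object BeforeCall(string operationName, object[] inputs)
    {
        Console.WriteLine("Calling {0} with {1} argument(s)", operationName, inputs.Length);
        return DateTime.UtcNow;   // handed back to AfterCall as the correlation state
    }

    // Runs after the operation returns.
    public void AfterCall(string operationName, object[] outputs,
                          object returnValue, object correlationState)
    {
        var elapsed = DateTime.UtcNow - (DateTime)correlationState;
        Console.WriteLine("{0} completed in {1} ms", operationName, elapsed.TotalMilliseconds);
    }
}
```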
I wanted to get one of those paper party ones that I can just get on stage and do a little dance or something and look like a berserker, right? A little paper one is what I said, something that costs $3. Johnny shows up the next day. This thing weighs 20 pounds. It almost broke the bridge of my nose. It's metal. It's got real wooden horns. And I was planning on doing a carry-on home and that didn't work out because you can break through the cockpit of an airplane with that thing. So I put it in my bag and I got it home and it still hurts when I put it on because it hits your nose really hard. But it was a great gift. Thank you once again. But I couldn't resist. I had to make that my official NDC photo. So I hope you guys appreciate the extent that I go to for my attendees. So let's talk a little about WCF's architecture. For those of you that saw me yesterday, some of this is going to be repeat just for the first few minutes. But in order to understand how to extend WCF, you got to understand a little bit of what's behind the scenes. And then we're going to jump into behaviors. What we're going to be doing today is we're going to be extending WCF using something called behaviors. And we're going to be using three different types of behaviors that I'm going to teach you to use. Actually, two behaviors and one other thing that kind of works like a behavior. And we're going to be using this technology to inject and intercept these calls. Because like I said, the example that I have is very specific, but I'm going to show you how you can tap into the call, into knowing exactly what operation was called. The user knows what operation was called. The service knows what operation was called because it's in that operation. But if you can put some kind of blanket around it that can monitor what operation was called, the sky's the limit as to what you can do. You can accomplish authorization solutions and you can accomplish auditing and logging and monitoring solutions, which is what I got for you today. It is a small part of the capabilities of WCF extensibility. I'm only beginning to scratch that surface myself. Even in the behavior technology that I'm going to teach you today, there's a lot that I still haven't done with it. You're going to see that there's a lot of methods that are going to be unimplemented. I'm going to describe what they do, but we're not going to mess with them because it would just take too long. I can spend an entire day on this topic. Then, time permitting, and I think time is going to be permitting, I'm just going to give you a quick little demo of something that I wasn't even going to bring into the talk, but I figured it would be kind of cool because what I did is take the call monitoring solution that I'm going to write for you. That's almost all written, actually. I took it to another level. At the last minute on the airplane, I threw that code in to just include it for you to give you just a little more code and some more ideas. If you're like me, you get a little bit out of a one-hour session, but then you go home and you start reading all the code, and that's when you really start absorbing a lot of this stuff and when you get your own ideas. My goal is always to just give you as much code as I possibly can, even if I don't have time to explain it all in detail. WCF's architecture is a pipeline architecture. It's what we call interception-based. If there are any ASP.NET developers in here, you may be familiar with the term pipeline.
Interception-based means that I use, I'm going to repeat myself from yesterday, I use the kindergarten analogy of a train track with train stations on either end. What happens with WCF is you got a request that starts here and it gets sent to a service. So you got the beginning and the end of the train track. Well, in between those two stops, there's a bunch of different train stations and the message is on that train. And every station is responsible for one thing. It's responsible for examining that message and testing it for something, looking for something on which to act upon. And it may find it, or it may not. An example of this is, for example, transactions. WCF is compatible with transactions because at one point in the interception pipeline, it stops the process, looks at that message, and sees if it's decorated with certain transactional attributes. And if it is, it acts upon it by either starting a transaction or adding the message to an existing transaction or whatever. If it doesn't see that, it just sends the message on its way. And every point is responsible for this. So it's a really segmented architecture designed to be pluggable, meaning you can remove one thing and put in another. That's kind of what we're going to do today. WCF is written in a way where, in the States we have the saying, eat your own dog food, meaning they wrote this architecture and then they created the product itself by building on the architecture. And that's how WCF was written. In fact, if you don't like the way security works in WCF, you can unplug the security module and put your own in place. I mean, some of these things are obviously not recommended, but it is definitely possible. So here's an example of what that train track, for lack of a better term, looks like. We got the client on one side, we got the service all the way on the other. Here's the machine boundary, and there's a bunch of channels on the client side. There's a bunch of channels on the service side, and these are all of the train station stops. This is all the stops that the message is going to make to be tested for different things. Transactions, security, both ends of security, authorization and authentication. And way over here, just before the dispatcher takes over, it figures out what kind of instantiation we're going to use for that service. Those of you that use WCF, I'm hoping most people in this room use WCF. Raise your hand if you do. Okay, good, because this is an extensibility talk. So you're familiar with instance context mode, where it figures out do we want to instantiate the service in per call, per session, or singleton. Well, that's the job of a behavior that's sitting way over here, and all the way at the end, it looks at that attribute and says, okay, now that I'm ready to give you this instance, let me figure out how to give it to you. So it's very pluggable in just about every point. One way of pluggability is through something called behaviors. Who's written a custom behavior before? Okay, good. Well, by the end of the day, the rest of you are going to know how to write custom behaviors. Who's seen a behavior? Everybody. Okay, if you've instantiated a service, you've definitely seen one. If you haven't seen a behavior, then all of your services have been per session, single concurrency, because that's the default. And it's not the end-all solution for everything. So a WCF behavior is... by the way, I want to find out, I think we got an overflow room too, right?
On the other side, people watching our video. You think anybody's watching? If you're watching in the overflow room, just stand up, put your hands in the air and yell, yo. They're shy. Last year we had a whole group of people that were yelling. Remember that? Or nobody's watching what I'm saying, right? Alright, so a behavior is code that acts upon a service or an operation. There are two different types of behaviors: service and operation behaviors. Service behaviors are written by implementing an interface called IServiceBehavior. And then they can be attached to the class in a number of ways. I'm going to walk you through the different ways here, and we're going to step it up on every example. The easiest way is to just implement the behavior from the service class itself, but obviously you're limiting the behavior to that class itself. You may want to attach the behavior to the service class at host load time. Perhaps you want to do it dynamically so that all of your services receive the same behavior. Now you have to do it through code in your console host or whatever host solution you're using. There's ways of doing it with an attribute. I'm going to go through all of that. There's many reasons for writing behaviors. One of the most common reasons for writing behaviors is to attach error handlers. If you've done fault handling in WCF, you're familiar with the idea of having to throw fault exceptions, something that I taught yesterday. There's a way to centralize error handling in WCF where instead of having to catch every error in every operation and then throw the fault exception, you can automatically let a behavior pick that up and centralize all your error processing so it's all in one place. So that's a great reason for having behaviors. It's not the one that I'm going to show today. That one is very heavily documented on the web. So that's service behaviors. But you may have the need to do something on a specific operation or maybe on all operations. You can do that with something called an operation behavior. An operation behavior is a class that implements IOperationBehavior. So pretty simple. This behavior can act upon a specific operation that you happen to decorate with this behavior. And this is one of the ones that we're going to use. What this behavior does not give us yet is the actual name of the operation and a way to inject execution code before and after an operation is actually called. And this is something that we're going to need for our solution because you're going to see what the solution is in a couple of minutes. For this necessity, we need to recruit the aid of something else referred to as a parameter inspector. So these are the three things we're going to use today: the service behavior, the operation behavior, and this thing called a parameter inspector. Parameter inspector gives us even more details of an operation because it lets us write code that can get called either before and/or after an operation gets executed. So we got really fine control. And if we want, even though we're not going to use it today, we actually have access to all of the incoming arguments to that operation as well as the outgoing return value. And what we do with this is entirely up to us. I'm not going to do anything with the arguments from my example, but you can do whatever you want with that.
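A rough sketch of the three extension points just described (the class name is a placeholder, but the members are the standard WCF ones you implement):

```csharp
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Do-nothing skeleton showing what each extension point asks you to implement.
public class SkeletonExtensions : IServiceBehavior, IOperationBehavior, IParameterInspector
{
    // IServiceBehavior: runs once when the host is being set up.
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }
    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
        Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { }
    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }

    // IOperationBehavior: runs once per operation, typically used to hook up inspectors.
    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation) { }
    public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { }
    public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { }
    public void Validate(OperationDescription operationDescription) { }

    // IParameterInspector: runs around every single call, with the operation name,
    // the incoming arguments, and (in AfterCall) the return value.
    public object BeforeCall(string operationName, object[] inputs) { return null; }
    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState) { }
}
```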
I can't think of anything off the top of my head, but just think about it. You have the ability to insert code. This is almost like aspect-oriented programming. You're able to insert code before or after a service operation executes and you're able to examine everything about that operation that is about to be executed. Perhaps you don't like the arguments that are coming in for whatever reason. You want to filter them against something. Maybe you have a profanity filter in place. I don't know why on earth you would have that, right? But you may want to do that and then cancel that operation. Or perhaps you want to do some kind of logging or timing. How long did this operation take to execute? This would be a great technique for doing some kind of load testing where you want to log the time that an operation was about to execute, then the time when it finished executing. Maybe take an average, run it through a thousand concurrent calls and see what the average is and see if your service is performing well. There's tools out there to do this. These tools work with these kinds of technologies. This is the way they wrote this stuff. I'm going to skip this because I'm going to show you how to install all this stuff from the code. This is the solution that I want to do. This talk came out of the necessity to do this. I did this for a client. A client said that they wanted this. They liked the idea of doing this and they asked me if it's possible. So I came up with the solution and then in conversation with my friend Carl Franklin, we decided to do a little show on the basics of this on DNR TV. It got a lot of hits. We received some really good responses. I thought it would be cool to try it out at a talk. That's how I wound up here today doing this. What I want to do is this: I need the ability to run a service with two simple operations. This one does a greeting and a farewell. For any client that calls into this service, I need to have the service know that this operation was called and at what time and date. Simple as that. I just kept it really simple. You can take this to another level and do whatever you want with this information. I want to start simple by reporting this information every time a call comes in. This is not as easy as you think without behaviors, because if any of you are thinking, well, this sounds pretty trivial, just put the code inside the operation, obviously, you run into reusability issues there. I can definitely put console write lines in every single one of my service operations. What am I gaining? You'll know when the operation was called, but now if I write new ones, I have to remember to put the code in there. If I write more services, I have to remember to put the code in there. We don't want that kind of reusability headache, so we want to be able to hook this up in a very automatic fashion. My first solution is going to be to report this right to the console. The truth is that my host may not always be a console. I tend to use console hosts for demos and for development quite often, but when I deploy, it's never going to be a console. I want to take the solution up to the next level by being able to report this information to anywhere I want. In other words, let the host be the deciding factor of, when I receive this information, what am I going to do with it? I may want to report it to the console. I may want to log it to a file.
I may want to send an email message, which would be very inefficient, but I want to do whatever I want with it. I also want the ability to make this reusable, meaning I don't want to write this code more than once. If new services come into the picture, I want to be able to do something like just decorate the services and not worry about putting code anywhere in my operations. Then I'll show you a call monitor application that I wrote at the end. That's what that final solution is. Any questions? Because the rest is going to be all code. Nobody ever wants me to stay in PowerPoint for whatever reason. Here's the service that I have. Very, very simple. This service does a greeting and a farewell. You send a language and it checks for Spanish or English, hola or hello. I mean, it doesn't get any stupider than this. I just couldn't come up with anything better. I was not in a creative mood. I hadn't started drinking Scotch that day and I wasn't feeling very creative. If I were to run this right now, it really doesn't get any easier. Let me just run this to make sure it works because the demo gods are usually not on my side. Hey, this is where Thor's from, right? Isn't Norway where Thor's from? I think that's pretty cool. See, the Avengers are really, really big in America right now. I don't know if it's really hit here, but it's the biggest movie of the year over there. All right, let's see if this thing calls. It should say hello, Miguel. And then goodbye. Okay, so we know that it works, but take a look at the console host. The console host is still blank. What I want to do is that I want that console host to give me a message on there every time that an operation gets called. I want to do this in as clean and elegant and as reusable a fashion as possible. I don't want to put code in here, a console write line here and then another piece of code in there, because that would be cheating and it would not be reusable in any way. So what I'm going to do is that I am going to show you the first interface that I'm going to be working with. This interface is called IServiceBehavior. And IServiceBehavior gives us a bunch of methods to implement. And the one that I'm interested in is this one called ApplyDispatchBehavior. And this is where you attach to this method and add the information that you want. The other ones I'm not going to mess with, but there's a whole bunch of things you can do with ServiceBehaviors here. I can add binding configuration information dynamically to that service, do it in code by tapping into the AddBindingParameters. So there's all sorts of funky things that I can do. I want to show you when this is called here. Because the first thing I'm going to do is I'm going to do the empty methods on the interface just to show you when it is that they're called. So if I run this service right now, the service host, you're going to see that at host time, this gets called. So this is a great place to do initialization. And one of the initialization things that I want to do is that I want to attach characteristics, I want to attach behaviors to every single one of my operations. Unfortunately, I don't have access to all of the operations or the specific operations here in the ServiceBehavior, but I do have an operation list here.
And what I can do is that I can loop through the operation list and I can attach operation behaviors to every single operation. And operation behaviors come from an interface called IOperationBehavior. And IOperationBehavior gives me three very, very similar methods. And the one that I'm interested in is this one here. Now, I'm going to cheat for a second and I'm going to look at some code that I have because I just need to grab something real quick. Actually, I'm going to uncomment that in a second. So I won't even copy and paste it. So what I want to do is that I want to take this operation behavior and use this to peek into any operation call. Now, I still don't have the ability to put code that gets executed before and after that call, but I'm getting close. But I want to do this for every single operation. And the only way that I can do this for every single operation is to use a service behavior to then loop through all my operations and attach the operation behavior. Because without the ability to do that, I'm a little more limited. And I'll show you how I'm limited in a second. The last thing that I want to do is implement one last interface. And this interface is called parameter inspector. And parameter inspector is the one that gives us the real fine-grained control, because parameter inspector lets us put code that gets executed, as these methods indicate, before the call and after the call. And here we have the operation name at our disposal. We have all the input arguments and we have the return value on the after call. So we have a lot of information. And using this, we can actually do quite sophisticated logging capability if that's what we're looking to do. I'm going to do very, very simple logging capability. Now, I'm going to get rid of these because I'm just going to uncomment the code that I have and show you what happens, because what I want to do is run it and show you when these things get hit. So you know exactly what the order of operations are. This is going to get cooler as we go on. Okay, and then the service behavior, you'll notice that I'm looping through the endpoints and then looping through each contract in the endpoints and going through those operations. So I have access here to every operation in the service. Now, this is kind of easy because I'm in the service itself. So who says that I can't just do a GetType on this service, or a typeof, and then get the method list and find out what these operations are? That's standard .NET reflection, but I want to keep this reusable, because right now for this first example, you're seeing that I'm attaching these things directly to a specific service, but I want to pull away from this technique because I want to make this as reusable as possible. But right now, let me show you exactly when this stuff gets called. Watch this. I'm going to put a break point there. I'm going to put a break point there. And then I'm going to not put a break point here because I got a console write line. Look what I'm doing with this console write line. I'm saying this operation was called on this date and time. So this is to prove to you that I can get to the call and have fine-grained control over this call. Now watch: as soon as I load the service, the service behavior gets hit. The service behavior is going to go through and it's going to loop through all the operations and attach the implementer of the operation behavior to this service. Now who's the implementer of the operation behavior?
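The answer is easiest to see in a sketch of this first, all-in-one version (hypothetical names and simplified bodies, not the actual demo code): the service class implements the contract plus all three extension interfaces and wires itself up.

```csharp
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

[ServiceContract]
public interface IFirstService
{
    [OperationContract]
    string ObtainGreeting(string language, string name);

    [OperationContract]
    string ObtainFarewell(string language, string name);
}

// First cut: the service itself implements all three extension interfaces.
public class FirstService : IFirstService, IServiceBehavior, IOperationBehavior, IParameterInspector
{
    public string ObtainGreeting(string language, string name)
    {
        return (language == "es" ? "Hola, " : "Hello, ") + name;
    }

    public string ObtainFarewell(string language, string name)
    {
        return (language == "es" ? "Adios, " : "Goodbye, ") + name;
    }

    // IServiceBehavior: at host-open time, loop through every endpoint, contract and
    // operation, and attach this same class as an operation behavior.
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ServiceEndpoint endpoint in serviceDescription.Endpoints)
            foreach (OperationDescription operation in endpoint.Contract.Operations)
                if (operation.Behaviors.Find<FirstService>() == null)
                    operation.Behaviors.Add(this);
    }
    public void AddBindingParameters(ServiceDescription sd, ServiceHostBase host,
        Collection<ServiceEndpoint> endpoints, BindingParameterCollection parameters) { }
    public void Validate(ServiceDescription sd, ServiceHostBase host) { }

    // IOperationBehavior: for each operation, add this same class as a parameter inspector.
    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        dispatchOperation.ParameterInspectors.Add(this);
    }
    public void AddBindingParameters(OperationDescription od, BindingParameterCollection parameters) { }
    public void ApplyClientBehavior(OperationDescription od, ClientOperation clientOperation) { }
    public void Validate(OperationDescription od) { }

    // IParameterInspector: runs on every call; this is the console write line he keeps mentioning.
    public object BeforeCall(string operationName, object[] inputs)
    {
        Console.WriteLine("{0} was called on {1}", operationName, DateTime.Now);
        return null;
    }
    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState) { }
}
```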
The service itself, because all three interfaces are implemented by the service. That's why you're seeing that I'm attaching this. But what it's actually looking for here is an operation behavior interface or implementer. Now after I do this, there, now it's in the IOperationBehavior. So it's attaching each one, and it's doing it for each operation. So it just hit it twice because I got two operations in the service. Now it's going to wait for my call. Now that last one, the parameter inspector, that's the magic bullet. Because that's the one that gets called during the call. The others are just for initialization. The others are for setting things up. But that parameter inspector is the one that I want because that's the one that gets called at the time of the call. And that's where you can tap into things before and after. Has anybody ever heard of aspect-oriented programming? So this is pseudo aspect-oriented programming. Aspect-oriented programming is the ability to write code that you can then inject before or after certain methods are called. In .NET that's not normally possible. There are frameworks to let you mimic this. If anybody's ever used Spring.NET, Spring's got a pretty decent mechanism for mimicking aspect-oriented programming. WCF has pretty much full aspect-oriented capabilities with these types of technologies, specifically with the parameter inspector. Now if I call my service, watch what happens. Now it'll tell me the operation was called. And as you can see, at 9:25 a.m. I'm on home time. So at 9:25 a.m., FirstService.ObtainGreeting, that's the name of the operation, was called. I can add the user who called it, whatever information that I want. If I wanted to put this in the after call, the after call is called after that return on the operation. So I can see the time span in between the beginning and the end of the call. If I call the second one, you can see that the farewell operation was called. So the solution is on its way here. We got what we want. We have the ability to tap into the call now. Now what I want to do is take it to the next level and make it as reusable as possible. And I'm not going to get that kind of reusability by doing this, by implementing it right from the service. Because by doing that I'm limiting this technology specifically to FirstService. And that's certainly not what I want to do. So let's go and comment all this crap out here. I still haven't learned the Mac keyboard yet. Apple couldn't give us a page up, page down key. There we go. All right. Now what I'm going to do is that I'm going to show you three different types of classes that I have here. Because what I've done is that I've taken this to another level. I've taken this and I've written a separate class to be my behavior. So instead of implementing it from the service itself, I want a separate class to be the service behavior. And as you can see, this class implements IServiceBehavior and it's got all the same kind of code. This is exactly the same thing as before. The difference is that now, instead of the operation behavior, because remember, I'm going to use a service behavior to loop through all the operations and attach the operation behavior to every operation. Now I'm going to show you another alternative in a second. But the operation behavior, it's now its own class as well. Here's the operation behavior class. It implements IOperationBehavior.
And as you can see, it's obtaining an instance of the parameter inspector and it's adding it to it. Because there's one piece of information that the parameter inspector does not have, and that is the name of the service. Somebody forgot. I have no idea why they forgot. But because I'm hooking in the parameter inspectors from the operation behavior, I'm able to obtain the service name from one of the arguments that comes into the operation behavior. I can obtain the service name and then it's as simple as writing a constructor on my parameter inspector to receive that service name. So when I add the parameter inspector, I hand it the service name there. And now in the before call, I'm doing the same console write line with that service name. Before, I had it all available to me because I was inside the service itself. But now I'm not. Now I'm writing this in a separate class. Now I got this stuff in three different classes, right? To hook this up, I got two primary choices how I can do this. I can do this in a pretty cool way and that is by making these behaviors attributes. And that's why you see that it ends in the word attribute and it inherits from the attribute class. Because now I can attach this through code if that's the route that I want to take. That may not be the route I want to take. But it would actually work quite well, because here the operation behavior is the wrapper for the parameter inspector. But once I deal with attributes, the service behavior may or may not be needed. I got a choice here. I can do this. I can decorate that operation with that attribute. Now watch what happens when I run this. It doesn't compile. That's what happens. What am I missing? That's what happened. Now I'm going to run the client. Now I attached it to obtain greeting, right? So let's call the service for obtain greeting. And as you can see, it gave me, there's the service call and it logged it. And let's call the farewell greeting, the farewell operation. As you can see, it didn't log that. Why didn't it log that? Because I didn't decorate that. So by turning these things into attributes, I've actually given myself the ability to attach it on the operations that I want. But because I also wrote the service behavior, and notice what the service behavior does, is loops through all the operations and programmatically attaches the operation behavior to the operations. This would essentially be the same thing as putting that operation behavior on every single one of my operations. But now the fact that I wrote a service behavior to do that task for me, I have the capability of doing this. Bless you. Now I have the ability of decorating my service with one behavior and doing this. Now I should get log messages for both of these calls, even though none of the operations are decorated. Yep, there's the first one and there's the second one. So I'm headed in the right direction here. I got the reusability that I want, almost the reusability that I want. I want a little more. But I don't have to implement this from the services, so I don't have to repeat that code. And I have the ability to decorate it with attributes, which is really, really easy to do. And by having both of those attributes, I have the ability to restrict what operations I want to log, if that's what I'm looking for. Or I want to just do the entire service. All right, now, to take this to yet another level, I can remove the behavior from here.
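Before moving on, here is roughly what those separate, attribute-based pieces look like. This is a hedged sketch with hypothetical names, not the demo code itself; the contract name is used as the "service" name, which is an assumption.

```csharp
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// The inspector now takes the service name through its constructor.
public class OperationReportInspector : IParameterInspector
{
    private readonly string serviceName;
    public OperationReportInspector(string serviceName) { this.serviceName = serviceName; }

    public object BeforeCall(string operationName, object[] inputs)
    {
        Console.WriteLine("{0}.{1} was called on {2}", serviceName, operationName, DateTime.Now);
        return null;
    }
    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState) { }
}

// Operation-level attribute: wraps the inspector and hands it a service name,
// since the inspector is never told on its own which service it belongs to.
[AttributeUsage(AttributeTargets.Method)]
public class OperationReportBehaviorAttribute : Attribute, IOperationBehavior
{
    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        string serviceName = operationDescription.DeclaringContract.Name;
        dispatchOperation.ParameterInspectors.Add(new OperationReportInspector(serviceName));
    }
    public void AddBindingParameters(OperationDescription od, BindingParameterCollection p) { }
    public void ApplyClientBehavior(OperationDescription od, ClientOperation co) { }
    public void Validate(OperationDescription od) { }
}

// Service-level attribute: attaches the operation behavior above to every operation,
// equivalent to decorating each operation by hand.
[AttributeUsage(AttributeTargets.Class)]
public class ServiceReportBehaviorAttribute : Attribute, IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ServiceEndpoint endpoint in serviceDescription.Endpoints)
            foreach (OperationDescription operation in endpoint.Contract.Operations)
                if (operation.Behaviors.Find<OperationReportBehaviorAttribute>() == null)
                    operation.Behaviors.Add(new OperationReportBehaviorAttribute());
    }
    public void AddBindingParameters(ServiceDescription sd, ServiceHostBase h,
        Collection<ServiceEndpoint> e, BindingParameterCollection p) { }
    public void Validate(ServiceDescription sd, ServiceHostBase h) { }
}
```

Decorating a single method with [OperationReportBehavior] monitors just that operation; decorating the whole service class with [ServiceReportBehavior] covers every operation at once.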
And now I want to do it programmatically, because perhaps I have control over the host because I'm the guy that wrote the host. But I got a team of developers and they wrote different services. And I want to be able to log everything without either giving them the behaviors or, even worse, having to make sure they remember to do it. You know, I want to offload the responsibility from the developers and I want to do it at the host because I have full control over the host. Well, let's go over to the host. This is my console reporter. This is the host here. And what I can do in the host is that I can instantiate that behavior. And I have copies of them in the host here. Obviously, in the real world, I would have these in their own DLL, so I wouldn't have to duplicate the code. But there's a reason I duplicated it, because I have another version of them in another project now that I'm going to show you. But I have the same service behavior that you saw. I'm going to instantiate it just after I create the host, but before I open it. So all behaviors have to be added to the host before you open. And I instantiate an instance of that behavior. And remember, instantiating this behavior, when this behavior attaches, it's going to automatically go through here and instantiate the operation behavior and attach it to all the operations, which is in turn going to create the parameter inspector and add the parameter inspector to the operations. All simply by instantiating the one service behavior and then programmatically adding it to that service. This is directly equivalent to decorating the service with the attribute. Exactly the same thing. Only now I've done it programmatically. And if I have more services listed in this host, I can do it programmatically there without worrying about having the developer do it and having to remember to do it. So this should actually work. If I remember correctly, I don't have any decorations on the service. It's all done here. So this should work just fine. So if I call the service, I should get a monitor result here. Yep. And the same thing for this one. So it works for all operations. So we're doing well so far. I mean, forget about the fact that this example is very, very simple and I've tried to keep it simple purposely. But can anybody not see uses for this kind of stuff? I mean, the ability to actually tap into calls and do whatever you want. I'm doing something really, really trivial. I'm showing you a useless freaking message. This is just for the purposes of demo. But you have the ability to tap into before and after a call. That's more powerful than you can think. Because before I learned how to do it, there were a lot of things that I thought, man, I'd really like to know when that operation is called. Perhaps I want to restrict. I want to write a system where I can go in and check boxes to just restrict or unrestrict calls to operations. So I can turn things off. Maybe some operations should only work between the hours of nine and five. After that, the service needs to go offline or at least return a message saying, sorry, you can't use me at this point in time. This is how you would do it. The sky's the limit when you have the ability to tap into these calls. So I've just tried to keep the examples as simple and as trivial as possible. All right. So the level I want to take this to now is this: the reporting that we're doing, well, the reporting is kind of okay if you're doing a console app.
But it's not for anything else. And most of us just don't use console apps for anything other than quick demos or like during our development time. In development, I use them all the time because I can start the host and stop it real easily. But once you deploy, if you deploy with a console app, God help you. So I don't want to do a console write line. Maybe I want to do a console write line, but I don't want the behavior to be locked into that. I want the host to decide how they want to report on this. So I'll show you what I did, because the rest of this is really just a variation of what I've shown you already. I have another project here called Enhanced Console Reporter. And this one has versions of the behaviors that do things just a little different. So I'll start from the inside out with the parameter inspector. And what I've done with the parameter inspector is, on the before call, instead of doing a console write line, because this is the place where I was reporting, so instead of doing a console write line, what I want to do is just fire an event to whoever is listening. So something that's going to instantiate my operation report inspector class is going to tap into this event called service operation called. And that event is simply going to fire, as you can see, on service operation called. There's the event firing right here with a custom event args that I wrote. And the custom event args has all the information that I need: the service name, the operation name, and the timestamp. I can add anything else that I want to this. And that's all the before call is doing, is firing that event. Incidentally, because nobody's asked yet, but in case you're wondering what this return null is, what is this? This can be anything you want. Whatever you return here gets sent into the after call in this bucket here, correlation state. So any information that you want to hang on to and then have access to in the after call, you can return from the before call. Most of the time you're probably going to return null, but just understand that's what that bucket's for. All right, so I'm raising this event called service operation called. Now, what is going to instantiate my parameter inspectors? The operation behavior. Remember, the service behavior hooks the operation behaviors into the operations and the operation behavior hooks the parameter inspectors in. So we started from the inside out. So since it's the operation behavior that is going to be instantiating my inspector, in the operation behavior I'm setting up an event as well. So then when I hook up my inspectors, when I instantiate my inspector, before I add it to the list of parameter inspectors, I'm going to tie that event, the event that the inspector is raising, service operation called, I'm going to tie it into a method in my operation behavior. And that method is going to call an event call on service operation called, which is declared in the operation behavior. So I'm essentially declaring the same event at all three levels and just letting it bubble up the chain. This is called event bubbling. WCF does not have automatic event bubbling, unlike WPF. If you're a WPF or Silverlight developer, you know event bubbling works really nicely. Where as long as the event has the same name, it'll bubble automatically all the way up to the top.
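A sketch of that manual bubbling, with hypothetical names (the real demo code will differ in detail), from the inspector's event up through the operation and service behaviors:

```csharp
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Hypothetical event args carrying what the host wants to know about each call.
public class ServiceOperationCalledEventArgs : EventArgs
{
    public string ServiceName { get; set; }
    public string OperationName { get; set; }
    public DateTime TimeStamp { get; set; }
}

// Level 1: the inspector no longer writes to the console; it raises an event instead.
public class ReportingInspector : IParameterInspector
{
    private readonly string serviceName;
    public ReportingInspector(string serviceName) { this.serviceName = serviceName; }

    public event EventHandler<ServiceOperationCalledEventArgs> ServiceOperationCalled;

    public object BeforeCall(string operationName, object[] inputs)
    {
        var handler = ServiceOperationCalled;
        if (handler != null)
            handler(this, new ServiceOperationCalledEventArgs
            {
                ServiceName = serviceName,
                OperationName = operationName,
                TimeStamp = DateTime.Now
            });
        return null; // correlation state, unused here
    }
    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState) { }
}

// Level 2: the operation behavior creates the inspector and re-raises its event.
public class ReportingOperationBehavior : IOperationBehavior
{
    public event EventHandler<ServiceOperationCalledEventArgs> ServiceOperationCalled;

    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        var inspector = new ReportingInspector(operationDescription.DeclaringContract.Name);
        inspector.ServiceOperationCalled += (s, e) =>
        {
            var handler = ServiceOperationCalled;
            if (handler != null) handler(this, e);   // bubble up
        };
        dispatchOperation.ParameterInspectors.Add(inspector);
    }
    public void AddBindingParameters(OperationDescription od, BindingParameterCollection p) { }
    public void ApplyClientBehavior(OperationDescription od, ClientOperation co) { }
    public void Validate(OperationDescription od) { }
}

// Level 3: the service behavior attaches an operation behavior to every operation
// and re-raises its event, so the host has a single event to subscribe to.
public class ReportingServiceBehavior : IServiceBehavior
{
    public event EventHandler<ServiceOperationCalledEventArgs> ServiceOperationCalled;

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ServiceEndpoint endpoint in serviceDescription.Endpoints)
            foreach (OperationDescription operation in endpoint.Contract.Operations)
            {
                if (operation.Behaviors.Find<ReportingOperationBehavior>() != null) continue;
                var behavior = new ReportingOperationBehavior();
                behavior.ServiceOperationCalled += (s, e) =>
                {
                    var handler = ServiceOperationCalled;
                    if (handler != null) handler(this, e);   // bubble up again
                };
                operation.Behaviors.Add(behavior);
            }
    }
    public void AddBindingParameters(ServiceDescription sd, ServiceHostBase h,
        Collection<ServiceEndpoint> e, BindingParameterCollection p) { }
    public void Validate(ServiceDescription sd, ServiceHostBase h) { }
}
```

In the host, one would then new up ReportingServiceBehavior, subscribe to its ServiceOperationCalled event with a lambda (write to the console, a log file, log4net, whatever), add it to host.Description.Behaviors, and only then call host.Open(). The null that BeforeCall returns is the correlation state: whatever is returned there comes back into AfterCall, which is also an easy way to time a call.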
Here, I'm mimicking the same process, but I have to do it manually because there is no automatic feature for it. So then at the service behavior level, when I hook up the operation behaviors, when I instantiate the operation behavior, I'm hooking into its service operation called event, calling a method on this class, and that method raises the event on service operation called, which is declared up here once again. So I'm just letting the event bubble up. So now what used to be a console write line is simply raising an event that shoots up three levels in the chain, and then wherever I'm going to hook up the service behavior, which is going to be in my host, using code like I just showed you in the last example, I have one event to tie into. And that event is going to receive the message from the operation behavior, which is going to receive its message from the parameter inspector. And if I open up that host, you'll notice that the code is just like the old one, where I'm instantiating the service behavior and I'm adding it to the service and then I'm opening the host. But before I add it to the service, I'm going to hook into the service operation called, and I chose to do it in a lambda expression instead of another method, but obviously it's easy to just do it in another method. And here's where I'm doing my console write line. But in this situation, in a non-console, this is where I would report it somewhere else: maybe put it out to a text file, a database, log it somewhere, do whatever it is that I do, use log4net, something other than a console write line. The end result for this example is actually going to be the same. The last example, the hosting manager solution that I'm going to show you, that's where it gets really cool. And every time I run this, you're probably bored to tears because it's the same result. I just want to prove that it works. So this should do exactly the same thing, and exactly the same thing. But the console write line is not tied directly into the parameter inspector. So we have a lot more flexibility. And I've added more flexibility with every step that I take here. Any questions? All right, before I show you the last demo, I want to give you another simple example that I got working just recently. It's not complete, but I wanted to give you just a little more, a little more example of what you can use this kind of technology for, because tapping into calls is kind of cool, but it may not be necessary for everybody. So one thing that has always pissed me off about WCF is if I wanted to add security to my story. Now, you guys that were in my, who was in my class yesterday again? All right, do you guys remember when we went through security, when we breezed through security in like 35 minutes in record time? So remember when we got to authorization, in authorization, what was the name of the attribute? I could never remember it offhand. I remember the name of the enum, but I don't remember the name of the attribute. Hang on one second. In order to do authorization in WCF services... that's it. In order to write services in WCF and authorize them, you have to set up security, which takes you five or six hours, and then you have to decorate your operation with this PrincipalPermission attribute, telling it you're demanding authorization permission for the following roles. In this case, I used a constant, but this would actually be just the word administrators.
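The built-in approach he's describing looks roughly like this, with the role spelled out literally instead of a constant (reusing the contract from the earlier sketch):

```csharp
using System.Security.Permissions;

public class FirstService : IFirstService
{
    // The call only proceeds if the caller's identity is in the Administrators role;
    // otherwise the runtime throws a System.Security.SecurityException.
    [PrincipalPermission(SecurityAction.Demand, Role = "Administrators")]
    public string ObtainGreeting(string language, string name)
    {
        return "Hello, " + name;
    }

    // No demand here, so any authenticated caller can use it.
    public string ObtainFarewell(string language, string name)
    {
        return "Goodbye, " + name;
    }
}
```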
What I'm doing is telling it: I know who the identity is of the person that just called me, because I have security properly set up. I only want you to have access to the obtain greeting method if you're a member of the administrators role. And this is the way WCF security works. And unless you do something super fancy, like what I'm about to do, this is the way, this is what you're locked into, unless you want to roll your own complete security solution from scratch, which is a pain in the ass to do. So what I don't like about this is the fact that now I'm managing my roles in code. So I may have one operation that requires three different roles, or can require one of three different roles in order to succeed. Another one that is for everybody, another one that is only for administrators, and I may have this kind of code scattered all over the place. A service with three operations, pretty uncommon. Five services with ten operations in each, that's a little more common, and you're going to have code like this all over the place, and it just becomes a little bit of a headache. So one of the things that I want to do is that I want to be able to have a point where I know that the obtain greeting operation is about to be called, let me check security before I allow you to continue, or throw a security exception, which is essentially what this failure is going to do: throw a security exception. So does that make sense? I want to check authorization at that point in time, but I want to do it from a centralized location, because even though I'm about to hard-code it, in a real solution what I want to do is maybe have some kind of mapping file, or some kind of database storage, where then I can have a list of mappings: here's my methods, obtain greeting and obtain farewell, these are the roles that obtain greeting has access to, these are the roles that obtain farewell has access to. And if I wanted to manage this later in the future, I can go to this one file, change those roles around for each method, and nothing ever has to change in the code, where right now it has to; with the stuff I taught you guys yesterday, you have to go back into the code to change it. My opinion, that sucks. Big time. So what I did is that I wrote a little simple solution. Like I said, it's not complete because it only works for Windows authentication right now, it doesn't work for custom authentication. But what I did is use the exact same technique. I started with a service behavior called an authorization service behavior that just goes through and hooks up to every operation the authorization operation behavior. That one goes through, and for every operation, it adds an instance of the authorization inspector. Now let's go back to the service behavior real quick, I want to show you something. There's one property here called enabled that comes in through the constructor. I'm going to show you where that gets set in a second, but notice this property is going to be a true or false, it comes into the constructor. The reason I'm bringing it into the constructor, and then I'm widening the scope to a class level, is so that when I instantiate the operation behavior, I can throw it into its constructor, because I'm eventually going to need it in the parameter inspector. Now, the parameter inspector is where I'm doing my authorization magic. I'm finding out if I'm enabled or not, and then if I'm enabled, that's going to be my magic flag, then I do my authorization here.
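A sketch of that centralized check (hypothetical names; the actual demo hard-codes the ifs, as he explains next, while this sketch already pulls the operation-to-roles mapping into a lookup). The wiring is the same pattern as the monitoring chain above: an authorization service behavior attaches an authorization operation behavior to every operation, which in turn adds this inspector and passes the enabled flag down through the constructors.

```csharp
using System;
using System.Collections.Generic;
using System.Security;
using System.Security.Principal;
using System.ServiceModel;
using System.ServiceModel.Dispatcher;

public class AuthorizationInspector : IParameterInspector
{
    private readonly bool enabled;   // handed down from the service behavior's constructor
    public AuthorizationInspector(bool enabled) { this.enabled = enabled; }

    // In a real solution this lookup could be loaded from a file or database,
    // so changing roles never means changing service code.
    private static readonly Dictionary<string, string[]> RolesByOperation =
        new Dictionary<string, string[]>
        {
            { "ObtainGreeting", new[] { "Administrators" } }
            // ObtainFarewell is deliberately absent: no extra demand for it.
        };

    public object BeforeCall(string operationName, object[] inputs)
    {
        if (!enabled) return null;

        string[] requiredRoles;
        if (RolesByOperation.TryGetValue(operationName, out requiredRoles))
        {
            // Assumes Windows authentication is set up, as in the demo.
            var principal = new WindowsPrincipal(ServiceSecurityContext.Current.WindowsIdentity);

            bool allowed = false;
            foreach (var role in requiredRoles)
                if (principal.IsInRole(role)) { allowed = true; break; }

            if (!allowed)
                throw new SecurityException("Request for principal permission failed.");
        }
        return null;
    }

    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState) { }
}
```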
Now I just hard-coded this, so I set up my current principal, so I have my roles and my user, and then all I'm doing is that I'm doing some authorization checks. In this case, I'm checking to see if the operation name is obtain greeting, I'm checking to see if you're an administrator, and if you are, let it go on its way. If you're not an administrator, I'm going to throw the same exception that that other technique would have thrown, a SecurityException; in fact, I even copied the text message. If the operation is obtain farewell, I'm not going to add any additional security. So obviously a real-world situation would not have this long bunch of ifs; it would go out to a file, maybe a cache storage, and use the operation name to find out what groups can access it, and then figure out if it needs to throw an exception or not. The point is it gives you a centralized location to check your authorization. It all started with the service behavior, so I got to hook it up somehow, but I want to hook this up a little differently. I don't want to do this through code, I want to make this yet a little cooler to give you as much information and code as I possibly can in the next 10 minutes. So I wrote this thing called an authorization behavior extension, and this inherits from a behavior extension element. Now when you hear the word element, what do you normally think of? XML, config, WPF perhaps, you're headed in the right direction. This is where the configuration property enabled is set up, and if you've ever set up custom configuration stuff for, like, web.config or app.config, this should be familiar to you: the configuration property attribute, this is how you define configuration properties. So this is where I defined it, and then the behavior extension element gives me two must-override members. These are abstract, so you must override them, but they're very, very easy because it's almost always going to look the same. All a behavior extension does is allow you to use XML configuration to hook up a service behavior. So it's always going to relate one to one with a service behavior, and we have one of those already. So all you got to do is, in the behavior type property, you return the type of what service behavior you're going to hook up, and in the create behavior, you return an instance of it. Why they couldn't do this in one step, I have no idea. This should have been just one step, kind of like that laundromat example from this morning's keynote. So I'm returning an instance of authorization behavior, and I'm sending into it, into the constructor, the value of the enabled property. Now once I have this set up, instead of hooking up this behavior in code at the host load, like you saw earlier, I have the ability to do it in a much cooler fashion, which is setting it up in the extensions section inside system.serviceModel. This is where I can list all my behavior extensions, and it's very, very easy. You can see that it has a behavior extension section, and then I can just have as many adds as I have extensions. In this case, I just have one. You give it a name, and then this is .NET type notation, which is fully qualified type, comma, the assembly name, since it most likely will not be in the same assembly. And then once you do this, this name is important, because now you can use this name like a standard service behavior, and those of you that were with me yesterday, remember how to wire up a service behavior? It's in the serviceBehaviors section.
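The extension element itself is small. A sketch follows (placeholder namespace and assembly names), with the matching configuration shown as a comment; the service behavior is only stubbed here, since its wiring of the inspector follows the same pattern shown earlier.

```csharp
using System;
using System.Collections.ObjectModel;
using System.Configuration;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Configuration;
using System.ServiceModel.Description;

// Stub of the service behavior the element hands back; attaching the
// AuthorizationInspector to every operation is omitted here for brevity.
public class AuthorizationServiceBehavior : IServiceBehavior
{
    private readonly bool enabled;
    public AuthorizationServiceBehavior(bool enabled) { this.enabled = enabled; }

    public void ApplyDispatchBehavior(ServiceDescription sd, ServiceHostBase host) { /* attach inspectors */ }
    public void AddBindingParameters(ServiceDescription sd, ServiceHostBase host,
        Collection<ServiceEndpoint> endpoints, BindingParameterCollection parameters) { }
    public void Validate(ServiceDescription sd, ServiceHostBase host) { }
}

// The behavior extension element: exposes an "enabled" attribute to config and knows
// which service behavior to create and how to create it.
public class AuthorizationBehaviorExtension : BehaviorExtensionElement
{
    [ConfigurationProperty("enabled", DefaultValue = true)]
    public bool Enabled
    {
        get { return (bool)this["enabled"]; }
        set { this["enabled"] = value; }
    }

    public override Type BehaviorType
    {
        get { return typeof(AuthorizationServiceBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new AuthorizationServiceBehavior(Enabled);
    }
}

/* Registered and used in config roughly like this (type and assembly names are placeholders):

   <system.serviceModel>
     <extensions>
       <behaviorExtensions>
         <add name="authorization"
              type="Demo.Extensions.AuthorizationBehaviorExtension, Demo.Extensions" />
       </behaviorExtensions>
     </extensions>
     <behaviors>
       <serviceBehaviors>
         <behavior>
           <serviceMetadata httpGetEnabled="true" />
           <authorization enabled="true" />
         </behavior>
       </serviceBehaviors>
     </behaviors>
   </system.serviceModel>
*/
```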
I didn't add a name to it, because I have it as the default behavior. If I put a name right here, then I would have to have behaviorConfiguration equals and target that name. But now, just like I have all these other behaviors, like service metadata, service debug, security authorization, I have my authorization behavior that I wrote. That's the name right there. That's why it's called authorization, and there's my properties, and I can add whatever properties I want. I can hook up the logger this way, and make one of the properties a file that it's going to log the call monitoring to. So then I can hook that call monitor up using this technique, and then put the file name in one of the properties. So now this hooks up the behavior, and now to test this, what I've done is that I have in my client, I have that other button. You saw that there was a third button there that I haven't hit yet. That third button simply calls the exact same service proxy, but it changes the Windows credentials to another user that I have set up on this machine that doesn't have administrator privileges. So calling that third button is going to call the obtain greeting operation, but under a different user name. So now if I do this, call service should work just fine, and it logged. Call service here, works just fine, and it logged. Calling this one under a different user runs into that authorization. There it is. And it says security exception, request for principal permission failed. The exact same message that it gives me if I were to have decorated the operation with the built-in WCF authorization attribute. And I have one place to go to and do all of my authorization checks for whatever operation, because I have access to the operation name, and I have access to the user and their roles. With those two pieces of information, I can do whatever I want. It just depends where I get that information from. In my case, I cheated and I hard-coded everything in a very ugly fashion. Make sense? All right. All right, I'm going to leave you with one more thing, because that call monitor was kind of cool, but the part that I don't like about it is that I may not care about the monitoring at all points in time. Maybe I want to log into something. Maybe I want to bring up an application and check the calls as they're happening, and then leave the application and not log it permanently. So I took this to another level, and I'm not going to go through all the detail, because we only got five minutes, but I'm going to leave you with all the code. I'll make it all available to you. And this is a little toy that I wrote called hosting manager solution. What hosting manager solution does is that, remember, it works on the same principle. So what happens is that instead of the call monitor writing out to the console, it does the same thing as what my second example did, which is it shoots the event up the chain, so then the host just receives an event. Now, in that second example, the host was writing out to the console. In this example, what the host is going to do is that it's going to fire off a callback to any listeners through a service. So that host is running another service that I wrote called call monitor. And that service, if I open up ICallMonitorServiceContract, you're going to see that that service has an operation called report call information. And what happens is that service has two operations, connect and disconnect. So any third party can hook up to this service and connect to it.
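A hypothetical reconstruction of that call-monitor contract and service follows; the real code will differ, but the shape is a standard WCF duplex (callback) setup.

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;

// What the host pushes out to anyone who is connected.
public interface ICallMonitorCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportCallInformation(string serviceName, string operationName, DateTime timeStamp);
}

[ServiceContract(CallbackContract = typeof(ICallMonitorCallback))]
public interface ICallMonitorService
{
    [OperationContract]
    void Connect();

    [OperationContract]
    void Disconnect();
}

// Singleton so the subscriber list survives across calls; this requires a duplex-capable
// binding such as netTcpBinding or wsDualHttpBinding.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class CallMonitorService : ICallMonitorService
{
    private readonly List<ICallMonitorCallback> subscribers = new List<ICallMonitorCallback>();

    public void Connect()
    {
        // Remember this client's callback channel so the host can call it back later.
        var callback = OperationContext.Current.GetCallbackChannel<ICallMonitorCallback>();
        if (!subscribers.Contains(callback))
            subscribers.Add(callback);
    }

    public void Disconnect()
    {
        subscribers.Remove(OperationContext.Current.GetCallbackChannel<ICallMonitorCallback>());
    }

    // Called by the host whenever its ServiceOperationCalled event fires.
    public void Broadcast(string serviceName, string operationName, DateTime timeStamp)
    {
        foreach (var subscriber in subscribers)
            subscriber.ReportCallInformation(serviceName, operationName, timeStamp);
    }
}
```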
The minute it connects to it, it sets up a callback. And those of you that were with me yesterday remember how to do callbacks. And this is called an out-of-band callback. So this is a callback where the service remembers what all the proxies are, and whenever it wants, whenever this gets fired, it simply checks to see who's connected to it, and it sends the callback to those clients. So as many clients as you want can connect to this service and receive this information. And that's what the host is now doing. Instead of writing to the console, it's reporting back, using a callback, to any connected clients. So you can write any client you want to receive this information. And what I did is that I just wrote a simple client. So let me... Okay, so there's my client. That's the host. And I'm going to bring up the client. No, the regular client right here. Now, first, let's just run these two. And you'll see that if I call this service, I'm still writing out to the console here, just for shits and giggles. I'm still writing out. So you see that the monitor is taking place here. But I'm also checking to see if anybody is connected to that other service that I'm running, the call monitor service. And if there is anybody connected, I'm shooting a callback up to their proxies, using standard WCF callback technology. So now, to prove to you how this works, I'm going to open up a test monitor client that I wrote. In fact, I'm going to open up two of them, simply because I can. Now let's close all this up. So here's one monitor client. Here's another. And now if I call the service, it still shows up just on the console. But if I take this guy and connect, now I call the service, now he receives that message. If I take this guy and connect it and call the service, now they both receive the message, as you can see. If I disconnect this one, now only that one receives the message. So just an example of how you can write an app to then call in and connect to this only if you want to. Now this is just for a simple monitor client, but you can have any of your apps do this kind of stuff. So it's just an additional level that I took this up to, just so I don't have to rely on this host here. I can just log on with a monitor, see what I need, and then log off. The inspiration behind this was, who's familiar with a product called Windows Home Service? Windows Home Server, I'm sorry. So nobody has a Windows Home Server here? Do they sell it in this country? So they don't, unfortunately; it's not a best-selling product anymore, which is really a bad thing, because it's an awesome product from Microsoft. It's a home server, has the ability to do file sharing, it also has the ability to scan your machines and do backups on a nightly basis. But the home server is a headless machine. It's a machine that has no monitor on it. The entire thing is configured from a remote application. And you can install this remote application in any of your machines. I've got seven computers in my house, and I have this installed on all of them. The home server backs them all up on a nightly basis, and from any computer in any room in the house that I'm in, I can open up the home server console and see a status of everything that's going on. And that was kind of the inspiration behind writing this. When I do a backup on home server, I can tell it, I have stations one through seven, I can say start backup on station one.
It starts the backup, and you actually see a little window open up, backing up all the files. I can close that console, come back an hour later, the backup's still going. The backup's going on one machine. I can open up the home server console, the home server management console, and say view current backups, and it'll open up that same window, and the files that it's currently backing up are scrolling by. But when I'm done looking at that, I can close it up. So it's the same kind of concept that I'm doing here. I've got something going on on a server, and all I'm doing on that server is sending messages up on what is going on, on what the status is. The difference is my example is so simple, that status is just, this call just took place. And the other example I just gave you is the server's doing the following backup. And then whenever somebody's interested, open up the dashboard, look at what's going on, monitor whatever you want to monitor, close it back up, and have the ability to do that from any machine. I can have these two programs running on any machine that I want on my network. As long as they connect to this service, they're going to be able to see any time that a call has taken place on that service. So this has been a kind of a very trivial and simple application of what I believe has tremendous potential. And if anybody comes up with a really cool use for this kind of stuff, if your creativity is flowing right now, and you're thinking, I can probably use this in my company in the following, just shoot me an email and tell me what you came up with. And I'll be happy to write a blog posting and mention you on it, because I'm really curious to see what people come up with this to use this kind of technology in. And I hope it's been of some use to you. I got one more talk tomorrow on extending XAML, writing, once again, behaviors in XAML, markup extensions, and value converters. So I hope to see you there. Thanks a lot, guys. Thank you.
If you’ve been writing WCF services for a while, you know that this technology gives you superior flexibility for providing and consuming services. There are many permutations with which you can configure your services but this flexibility is not the only thing that makes WCF an incredible technology. You can also hook into the WCF runtime to provide your own customizations that get called upon when your services do. In this session, I’ll demonstrate this with a couple of techniques for achieving call monitoring and call authorization. You’ll learn how to write parameter inspectors, operation behaviors, and service behaviors; and how to install and reuse them easily.
10.5446/51136 (DOI)
Good morning. Or how do you say it in Norwegian? Good morning. Okay, in Norwegian. So the title of my talk is NuGet, From Zero to Done in No Time. If you were here earlier, Paul gave a really interesting talk about how to have great open source projects and he had a great introduction on why you should use NuGet, which I really enjoyed. So I get to follow up on that and give you more details about NuGet. For this talk, these are the three main things I want to talk about. Give a brief introduction to NuGet. Has anyone here never used NuGet? Okay, I've seen two hands, maybe three hands. Okay, interesting. Good. How many of you have used it but have never built a package? All right. Okay, good. So we'll talk a lot about how to create packages. Because NuGet's not just about consuming, it's also about producing, right? We'll talk a little bit about NuGet internals. So I'll talk a little more about versioning and how that works and some of the details about how things work, a little more than I typically do in my talks, because I was told that Norwegians are very advanced and not to bore them with the boring beginner stuff. So I will try to do that as best I can. And then I'll talk about how to, a little bit about how to get involved with NuGet. So this is me. My name is Phil Haack. And if you don't follow me on Twitter, please just go ahead and add me because I say very interesting things about toilets and garbage, you know, @haacked. And then my blog is haacked.com. I typically blog about .NET topics, about Git and about NuGet and whatever else is on my mind at the moment. So I really wanted to discuss scale first. And I was trying to think of a way to really sort of describe the scale of how many developers are in the world, right? And so I found this image that's one million pixels. And fortunately at this resolution, it actually fits on the screen. So this bright gray area is one million pixels. So think about that. That's like one million. Imagine each pixel being a developer. Unfortunately, I can't fit eight million on the screen. So I had to zoom out and now make you use your imagination. Imagine each of these pixels is about eight people. So this represents all the .NET developers in the world. I've heard one estimate that there's about eight million .NET developers. .NET only. Not Windows, not including Windows developers, not including Ruby developers, .NET developers. So now you see this light gray is one million. The big square is eight million. And then there's this little blue square. And to me, that represents Microsoft with its 90-something thousand employees. Within that 90,000 employees, about this many, about maybe 12,000 are developers. So you think about that in terms of scale. We have 12,000 developers at Microsoft trying to build frameworks, products, tools for eight million developers. That's not going to scale. They can't fill every need that you have in your applications today. So what do we do? Well, we have this concept. Scratch an itch. If you have a problem that you need solved and nobody has built a tool for that, well, what are you going to do? Sit down and cry? No. You're going to build something that solves that problem. Ideally, you're not going to be mean. And you're going to share that solution. Share that code with other people so that they can get the benefit and you guys can collaborate together and make it even better and better.
And only through us meeting our own needs, only through us building these libraries and sharing it with each other, can we actually meet the needs of the eight million developers out there? We can't look to Microsoft to do it. We can let them solve some of the hard problems we don't want to solve, but they're not going to solve all the little niche pieces of code that we need. And this is where NuGet comes in. If you want to share your code on the.NET platform, there's no better way than using NuGet as a means to getting that code into people's hands so they can quickly and effectively use the code in their applications. All right. So, those are my slides. Let's go to demos. I love that little quick swipe. This is Windows and a VMware. And then this is Mac, right? Windows, Mac, Windows. Okay. These are small tables. I have to find somewhere to put that down. All right. So, what I'm going to do here is create a new project. And as many of you know, I used to be a program manager at Microsoft. I now work for GitHub. But one of the products I was working on is ASP.NET MVC. So, I like to feature it in all my demos. And I'm going to create a brand new MVC project. And this is going to take a few seconds because it's doing some interesting things behind the scenes. Actually, they recently sped it up. So, that was actually way fashion. Oh, no, no. It's still going. The speed-ups haven't been checked in yet. But there's a hint. We're speeding that up. Okay. They're speeding it up. See, old habits die hard. All right. So, let's, there's a couple ways to launch NuGet. Now, first of all, you need to get NuGet. But if you have ASP.NET MVC three or four installed, you already have NuGet. If you have Visual Studio 11, you already have NuGet. It will be shipped as part of Visual Studio 11. If you don't have it, the quickest way to go get it is to Tools Extension Manager. And if you go to the online gallery, just look for the top one. We're always number one now. I hope that's still true because that would be in print. Okay. So, we're at number one. So, you can quickly install it from there. You'll notice here, when I launch, you can right click on a project and launch. I forgot to do this ahead of time. There we go. Manage NuGet packages. So, this is the way to launch the NuGet package manager. And you'll notice here that it kind of looks like the Visual Studio Extension Manager. And this is a source of confusion for a lot of people. But the difference is actually very clear, at least in my mind. It's very clear. The Visual Studio Extension Manager is for extensions to Visual Studio, hence Visual Studio Extension Manager. So, those are things that will extend Visual Studio, your development environment. But those are not things that you ship as part of your application, right? So, you might have a really cool editor, but you don't ship the editor with your application. Whereas the goal of NuGet is to bring in libraries and code into your project that you will then build upon and ship as part of your application. So, if your application needs to run that code, it's going to be in NuGet, not in the Visual, generally not in the Visual Studio Extension Gallery. Okay. So, can you all see that? Okay. Let me zoom in a little bit. So, this is the Extension Gallery. We have some tabs on the left. You can see, oh, these are the packages I have installed. You notice there already have a bunch of packages installed. 
So, within the project system, you can actually define a set of packages that are within your project template so that when people create a new project with your template, we pre-install these packages. And the benefit there is later on if we ever, if Microsoft ever ships a new version of MVC4, they can just upload that to NuGet and I can get the update right in my project after I've created the project. Because if I got a new project template, that doesn't help all the projects I've already created, right? Another cool thing is you'll notice that MVC4 is actually in there. So, teams like ASP.NET are actually shipping all of their libraries now, not just the third party ones they're using, as NuGet packages. The EF team has announced that a lot of their new stuff going forward is being distributed via NuGet. I've heard, you know, people wanting to distribute the CLR as a NuGet package. Who knows what the future might hold there. But the point is that it's starting to finally get the attention of people within Microsoft, whereas, you know, people like you outside of Microsoft have been using this for a much longer time already. Now, let's go in here to the handy dandy search box and I'm going to type a search for, you know what I forgot? I forgot to put my timer on so I have no idea how long I've been speaking. Okay, let's not worry about that. So, let's do a search for ELMAH. Let me zoom out and go back to the online tab. It kind of helps if you're online and you search for ELMAH. And one of the things here, you'll notice that we have some search results in a pageable format. But on the right we give you some details about the package, who it was created by, its ID, what version it's currently on. You can click on these links to find that information about the package. And then you can see here that there's a set of dependencies. So, one of the nice things about NuGet is that a package can have dependencies on other packages. And when you install that top level package, it's going to pull all its dependencies into your project. Let's go ahead and install ELMAH. What ELMAH is, is an error logging module and handler. So, it's a really nice way of logging unhandled exceptions. So, I've installed it. Let's actually see it work. I'm going to go to the home controller and I'm going to throw an exception here. And we'll just get rid of that line. And then let's control F5. Okay, we got this exception. It was unhandled. And now later on I want to go look through the logs and see what did my user see. So, I can go to elmah.axd and I get this TiVo for error messages that your users have seen with all this extra information that's really useful for debugging errors. You know what? I didn't have to go and download a DLL from a zip file, right click, unblock it, figure out how to make use of this. I just installed the package. It just worked. Oops. Great. Well, what did it do to my project? First question someone asked when I said, oh, you installed this thing, but what did it do? And will I be happy with that? So, I'll tell you what it didn't do. It didn't change your machine, right? It didn't change anything in the GAC. It didn't go screwing around with registry settings. It didn't change anything that's external to your solution. One of the guiding principles, one of the guiding design principles for NuGet is all the changes it makes, you can commit to source control if you want to. And the next person who then gets latest from source control is in the same state you are. They can start developing. 
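For reference, the same install can be done from the Package Manager Console. This is a sketch, assuming the package ID is simply elmah as it appears in the gallery:

PM> Install-Package elmah
# NuGet resolves and installs the elmah.corelibrary dependency automatically,
# adds the assembly references, and applies the package's web.config changes.
# After that, unhandled exceptions show up at /elmah.axd as demonstrated.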
If you've ever had a library distributed by MSI, you know it's kind of a pain because you install that MSI on your machine and you're referencing this DLL from the GAC, you check your code in the source control, the next guy says, oh, I'm going to get latest and start coding, and they hit F5 and everything breaks, nothing works. Because now they have to install that MSI. It's like a bad virus. Everyone in your company now has to install the MSI to work on your project, which is especially bad when it's an open source project, right? Okay. So where were we? So let's look at what happened. Well, one of the things that happened is we added this packages.config file to the root of the project. This is a list of all the packages and the version that are installed into your project. So we scroll down here. We'll see that ELMAH 1.2.2 is here. But we'll also see that its dependency, elmah.corelibrary, is here. Now on a clean class library, those would be the only two entries. But as I mentioned before, MVC4 installs a bunch of other packages. The other thing it did is if we look at the references node, we'll see that it added a reference to ELMAH. Great. I didn't have to go and right click, add reference, find the DLL. But where did it add it from? Let's take a quick look. So right click. I'm going to open the folder in the Explorer. And you'll notice here that I have my solution file right here. And next to it is this packages folder. So this is the folder that we install all packages into at the solution, next to the solution. So if I go into this folder, I can find the ELMAH folder. And we'll look at the elmah.corelibrary package. And by convention, there's a three, typically three folders within a package. Lib, which is for all your assemblies. Tools, if you have any PowerShell scripts that you want run. And lib, tools, and what? Content. Yeah, sorry. Content. I can't believe I forgot that. So content is any files you want added to the project at the time that you install the package. So you can actually distribute not only libraries, but source code. In a talk earlier, Paul mentioned that a lot of projects will have two packages, sort of the core library and then something they call a quick start. And you install it into a blank project and it will add all this interesting code that helps show you how to use the project. So great way to distribute libraries. So the interesting thing about this is if I go and add another solution, a project, let's call it class library and we'll call it unit test because that's a good thing to do. And then I install ELMAH here. A question people often ask me is, well, now do I have two copies of ELMAH running around? Well, no. As I mentioned earlier, because we keep the packages folder at the level of your solution, all we had to do for the next project is just add a reference to the same DLL that's already there. We didn't have to redownload it. We didn't have to do any of that. So we try to be as efficient as possible. Another thing that's happening behind the scenes is that we're actually caching these packages on your local machine as well. So if we go to package manager settings, you'll see that we have this package cache here. And this is on your local machine. But it makes it so that we don't have to download it every single time if nothing's changed. So we'll see if a newer version is available. If not, we'll just pull it from the cache. Cool. Let me just add one more project. And everybody likes to have an MVC app core type of assembly. 
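For reference, the packages.config being described would look roughly like this. The exact attributes and the core-library version number vary by NuGet and package version, so treat this as a sketch:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- the package you asked for, plus its dependency -->
  <package id="elmah" version="1.2.2" />
  <package id="elmah.corelibrary" version="1.2.2" />
  <!-- an MVC 4 project template pre-installs a number of other entries here -->
</packages>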
So one thing that if you've seen very early talks on NuGet that you might not have known is that we also added a dialog that works at the solution level. So a lot of times you want to manage packages for all the projects at the same time, not individually. So you can right click on the solution and manage NuGet packages for the solution. You get the same dialog, but things are slightly different in that if I go to install packages and I search for ELMAH and I click manage, you can see here we give you this dialog that shows you all the projects that ELMAH's installed into. And I can either uninstall it from all of them or I can install them into every single project. So that's kind of a useful way of managing packages in your solution. Cool. All right, let's look at the package manager console. When we first built this, we actually didn't start with the nice GUI tool. We actually started with a PowerShell based console kind of similar to RubyGems, except that ours runs within Visual Studio and has full access to the DTE. DTE stands for Design Time Environment. What that means is that we have full access to Visual Studio because unlike a lot of Ruby where it's really just files on a disk, a lot of C-sharp or VB.NET projects require the project system, right? We add references to the .csproj file and all these sort of things. So we wanted to make NuGet work as well as possible with that. And within this console, you have the full power of PowerShell to do really cool things like piping one command into another. So I'm going to show an example of a package I wrote. Let me zoom in here. Called Magic 8-Ball. And you notice here that I can actually get, if I hit tab, I get tab completion based on the list of packages that are in the feed. So it's kind of like some people compare it to IntelliSense. But IntelliSense is static, right? This is dynamic. This is looking at the feed of what's available and then giving you this list. So very cool stuff. So I installed Magic 8-Ball. And this is a package that actually adds new commands to your PowerShell console. So you can have packages that add really cool commands like scaffolding or deployment scripts, whatever you might want. In this case, I did something much less useful where you can ask a question and hit tab and you can see a set of questions. For example, is the talk going well? And then it randomly answers as I see it. Yes, great. Or the first time I gave this demo, you can see that I was co-presenting with Scott Hanselman. This is his head getting too big. And again, yes. I wonder if my random number generator is going to get there. Okay. So for the most part, anything you can do in the dialog, you can also do in the console. But the console gives you slightly more power. For example, and I'll talk a little bit about this later. Let's say I want to update all the packages in my solution, but I only want bug fixes. So I don't want you to upgrade to the latest major version. I just want patch versions. I can do update package dash safe. And that's going to examine every single package in my solution. It's going to check online and see if there is a minor update. And I'll explain exactly what I mean by that a little bit later when I talk about versioning. And if there's any available, it's going to upgrade them. And you can see here that it actually, here's a great example. Knockout.js, it went from 2.0.0 to 2.0.1. And unfortunately, because I left the packages config open, it's going to ask me to reload it a billion times. 
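The console commands used in this part of the demo boil down to roughly the following. The Magic 8-Ball package ID is an assumption here, since only the spoken name is given:

PM> Install-Package MagicEightBall   # hypothetical ID for the demo package
PM> Update-Package -Safe             # updates every package, but only to new patch versions
# If packages.config is open in the editor, Visual Studio prompts you to reload it after each update.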
I've heard a rumor that they fixed that in Visual Studio 11. So let's fingers crossed, right? Okay. So that's package manager console. If you have any questions, feel free to raise your hand and I can ignore it now or I can ignore it later. So the next thing I want to show you is, oh, I forgot to mention this, readme.txt. This is a brand new feature of NuGet. If you have a readme in the root of your package, when installing that package, we'll pop up that readme at the end of the package install. But we'll only do it if you're installing that package, not if you're installing some other package that depends on that because you could imagine a dependency graph of like 20 packages and you don't want 20 readme's opening all at once. Okay. So I mentioned earlier that when we install a package, we create this packages folder that's in the root of your project. But you'll notice here that that packages folder is not listed in my solution explorer because unfortunately, Visual Studio doesn't do well with assets that are in the root of your project that aren't a project in and of themselves. So what this means is if you're using Visual Studio source control integration, it doesn't get integrated automatically. Now if you're a subversion or a mercurial or a Git user and you typically use a command line or something like that or one of the tortoise shell extensions, you're probably used to adding the solution, the actual folder, and so you're okay. If you're using TFS, there's actually support for TFS integration to actually add it to the TFS which you might call it. I don't know TFS that well. So you're actually okay with TFS. If you're using Visual Source Save, you're not okay in any regards. And if you're using Source Gear Vault or one of those, then the integration is not there. If you're using a distributed version control, though, you know that every time you clone a repository, you're getting the full repository, the full copy. And binary files are very hard to diff tightly, right? So what happens is if a binary file is changing a lot, it's going to grow your repository really big. So a lot of people came to the NuGet team and said, look, we don't want to check in our packages folder. That's not the way we do things. We don't check in binaries. Okay, fine. So what we implemented is something called package restore. I personally tend to like checking in packages because I like to be able to get latest from the source control and know that that's exactly what I used to build something. But if you have some other way of ensuring that, then package restore is actually quite nice. And so all you need to do is right click on the solution and there's an option here to... Right click on the solution. Oh, there it is. Enable NuGet package restore. You notice here that the latest version of NuGet has sort of adopted the metro contrast less icons that make it really hard to find anything. Yeah, I'm not biased at all against hating those icons, but okay. So that dialogue that I just clicked okay on tells you exactly what package restore is going to do. It's going to do a couple of things. One, it's going to add a.nuget folder and it's going to add three files in that folder. Now, this folder, you absolutely do want to commit to source control because this is how everything gets restored. But since it's only three files that are never going to change, we actually do the work behind the scenes to add a solution folder. So I don't know if you've never seen solution folders in Visual Studio. 
They're kind of a handy way of... They're sort of like virtual folders, kind of like a playlist. And they're just pointers to other files on disk or in your project. And we add three files, a target file, a package... That's actually for solution level packages, and then NuGet.exe. And this is how we can do package restore. And the way it works is, if I go back here, and let's say I... Whoops, I deleted packages. Or maybe I got latest because I'm no longer checking packages into source control. I get latest and there's no packages. Well, how do I get them back? All you have to do is compile. And we'll see here as we watch that, it successfully installed all those packages. And we're back to where we were. Now, the key point here, though, that needs to be clear is that we're not actually going through the full installation because we assume that if you're using package restore, you'd already installed that package at some point. And all the changes that the package might need to make to your project have already been made. All package restore is about is getting the contents of those packages back onto disk so that all your assembly references, for example, are still... Are valid rather than missing. And so if you use this, you just have to make sure that your source control ignores the packages folder because it's not needed anymore. One other thing to show about that is if I... Let's go delete the packages folder again because it's kind of fun. With package restore enabled, if I launch the managed new get packages for solution, you'll see here that we actually detect that state. And then we display this dialogue and a button to restore packages that doesn't work. Wait, wait, is that actually working? Okay, so last time I did this demo, that didn't work. So this is a demo fail that actually is a success. How's that? Okay, great. And so now if I go back here, and he's still trying to wrap his mind around that, we see here that the packages were restored. Now, if that dialogue for some reason gets in a corrupt state on your machine and doesn't work like it did for me earlier, you can just go back, hit build, and packages are restored. Very cool. Okay. So I've shown you how to install a package. I've shown you how to set up packages with source control and restore them. Where did this packages come from? Well, a mommy package and a daddy package fall in love and... No, that's not how that happens. It's newget.org. This is where all the packages are uploaded to. It turns out you can also have private feeds. Newget allows you to add multiple package sources. But this is the default package source that comes with Newget that we ship within the product. And you can see here, we got a nice landing page. You can go here. You can search for packages. According to this, we have 6,197 unique packages. So that's pretty exciting. And let me log in. And I have this fake hacked account because I don't want to use my real account for this demo because I'm going to show you some private information here. When you log into your account, you can go here and you can actually manage all your packages and see what packages you have. You can see here I've used this account before for demos. But the thing here is there's an API key. And this is important for people who are going to create packages. This will... You can upload packages through the website, but you can also use a command line tool called newget.exe to do it. And all you need to do is you need to make sure that you have access to this API key. 
So while you're watching this, you can actually... Now that you have my API key, you can upload packages as fake haacked until I hit generate new API key and make the old one invalid. So that's kind of nice. If you ever lose your API key or accidentally reveal it, you can just go back to here, generate a new one, and your old one will be invalid. Okay. So we're making a note of the API key. Let's talk about package creation. So the first thing we're going to need is NuGet.exe. So the client tools for NuGet are hosted on CodePlex, nuget.codeplex.com. And if you go to downloads here, this is where you can get some of these extra tools, NuGet.exe. And then later I'll talk about NuGet Package Explorer. Okay. Now I already have NuGet.exe installed, or in my path. Typically what I do is, that is really tiny, so I like to put NuGet in a utils folder and add it to my path so that NuGet is available no matter where I am. But nice thing is once you have it on your machine, you can always update it from itself. So you can do NuGet update -self. And so that's going to go online, check for updates, and then update itself. And it'll keep a backup of the old one so that in case you ever need to go back, or in case you do demos like this all the time, then you can easily reset. Now let's create a class library. We're going to create a brand new project. And this is going to be the coolest package ever made. So we need a really cool idea, a cool name. So what I thought we would do, let me just put it in here, is I want to create a troll hunter API. So this will be a library for troll hunters to use to locate trolls, to discuss really cool ways of dispatching trolls, where to buy high intensity UV light bulbs, and things like that. So I don't have time to implement a full troll hunter API. So let's just start with a single method, a static string, WhereIsTheTroll. So it will be a troll locator. Let's call it TrollLocator. And then later on we'll flesh out the API, but right now it's just the, we'll start with a single one, single response, possible response. Okay, where's the troll? Now, you guys see this assembly info? The thing you love to ignore. How many of you have never edited AssemblyInfo.cs? Have never. Okay, a few of you. Okay, I'm actually surprised. You guys are conscientious developers. But typically this is the kind of thing that most people just ignore. They just keep the defaults. But NuGet gives you a reason to actually pay attention to this. And I'll show you why in a moment. Cool API for troll hunters to collaborate, to work together and kill trolls. Okay. I should watch a Norwegian, I think it's a Norwegian movie. Troll Hunter is really interesting. It's very good. And the other thing I want to do is I'm going to use this attribute that no one's ever heard of called assembly informational version. And it's one of those attributes that no one's, this very obscure attribute that NuGet has given a new life to and now it feels important again. And what I'll do is here, I'm going to use this dash beta. 1.0.0 dash beta. We're going to create a beta package here. And so we'll do that and we open a command prompt to here. Oh, I don't have that. Okay. So one of the commands I want to run is, you notice here I have yet to build the project. So one thing I can do is I can run NuGet pack and I can pass dash build. And it's actually going to build the project and package it as a NuGet package. But you notice here I get these warnings. Author was not specified, using 'haacked'. In yellow. 
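Sketched out, the code being dictated here looks roughly like this. The class body and return value are illustrative, since only the method name is given in the talk:

// TrollLocator.cs
namespace TrollhunterApi
{
    public static class TrollLocator
    {
        // the one and only API method so far
        public static string WhereIsTheTroll()
        {
            return "Under the bridge"; // placeholder response, not from the talk
        }
    }
}

// AssemblyInfo.cs (metadata that NuGet picks up)
[assembly: System.Reflection.AssemblyDescription("Cool API for troll hunters to collaborate, work together and kill trolls")]
[assembly: System.Reflection.AssemblyInformationalVersion("1.0.0-beta")]

Then, from a command prompt in the project directory, nuget pack -Build compiles the project and produces the .nupkg, warning in yellow about any metadata it had to fall back on.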
That's very hard to see in here. So I apologize for that. But let's try something else. I may want to modify some of the metadata for the package. I can't yet do all the metadata solely from assembly metadata attributes. In .NET 4.5 there's actually a new generic attribute called assembly metadata that's a key value pair. And with that you can actually do all of NuGet metadata. And you would not believe how many emails it took to get that one attribute into the product. But we made it. Okay. So NuGet spec. And since this is the only C# project in this directory I actually don't need to specify the project file. We try to make things as easy as possible. The obvious thing here actually works so people think it's an Easter egg, like Aral said in his keynote. But we're intentional about that. So you notice here this created a TrollhunterApi.nuspec. So NuSpec is the way of creating metadata for your package, the NuSpec file. If I click here and, well, let's include that in our project. And you notice here that we have all these placeholder strings now. Dollar sign ID. These are all pulled from the metadata of the project. It was great so that if you just change the assembly info you don't have to go through and change the NuSpec as well. License URL. We highly, highly, highly, highly, highly recommend that you choose a license that allows people to know what rights they have to your code. Ideally never do what I'm about to do. That's only for demos. Project URL. That would probably go to your GitHub page or your generated page as Paul showed in his earlier talk if you're here. If you weren't you should check out the video because GitHub Pages is pretty cool. I'm just going to delete all that and we'll say tags, Norway, trolls, that sort of thing. And now if I run NuGet pack, I'm going to actually run it with another flag that I really recommend called symbols. And: TrollhunterApi, the replacement token author has no value. Oh. So this is a common mistake I make. I didn't build it, right? So let's do dash build. Let's compile it and then try it. Why doesn't that have author? So let's go back to the metadata. Oh. So company is where you put the author. So we'll call this Phil Haack Enterprises. Let's call that. Now let's run that. Okay, great. I should have chosen a different color. I don't know why they chose yellow. All right. That's just giving me, oh. What that warning message is saying is we actually looked to see if you didn't change a field and we tell you, hey, it looks like your description is the default summary. You would not believe how many packages have release notes that just say "Summary of changes made in this release of the package." So quite a few. Please do change those things. Okay. So I created this package. Let's take a look at it. You notice here that we actually created, ended up creating two packages, your package and symbols. So the symbols contains the source code and your debug symbols for the package. And if you, when you publish your package, we actually have a partnership with symbolsource.org and we will upload the symbols package to them so that people who are installing your package can actually step through the code if they need to debug into your package. It's a very cool thing, really nice thing to do for the users of your package. So that's why I always recommend it. You notice here on my machine that this has this little icon, this NuGet icon. That's because I have NuGet Package Explorer installed and it's kind of a neat way of quickly looking at a package in a GUI environment. 
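A token-based .nuspec like the one generated here looks roughly like the following. The exact boilerplate differs between nuget.exe versions, so this is a sketch:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <description>$description$</description>
    <!-- licenseUrl / projectUrl / iconUrl elements go here; don't skip the license in real packages -->
    <tags>Norway trolls</tags>
  </metadata>
</package>

The $...$ tokens are filled in from the assembly metadata at pack time, and nuget pack -Build -Symbols produces both the main package and the .symbols.nupkg described above.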
So you can actually click the package. You can edit the metadata, say, don't want the S and you can save it out and all that sort of thing. Oops. Let's try opening it again. But the important thing here is to make, I can check to make sure that it created my package correctly. And you notice here that it actually, within the live folder, created a net 40 folder. It looked at my project, realized I was targeting net 40 and it pulled this DLL in that folder. And the reason why I did that is I could also create a version of this DLL for that target dot net 35, dot net 20 and put them in the proper subfolders and then it will choose the right one for the target project you're installing this package into. That's pretty cool. So it looks like that created that fairly well. Now, if I want to upload this, I need to, I can use NuGet to do that. The first thing I need to do is NuGet, set API key and I need to paste in my API key. So it's probably not in my clipboard anymore. So let's go back here. And this command I only need to run once. It will store, securely store my API key on this machine so that the next time I do this sort of thing, I don't have to worry about it. Okay. Did I do that wrong? Yeah. Okay. And you notice by default it set the API key associated with the NuGet dot org gallery. If you have other galleries you're working with, you can actually specify dash source and point to that URL as well. Great. So now that I've done that, I can actually just run NuGet push and then the package name. And now that's uploading Trollhunter API 1.0.0 dash beta to the gallery. Okay. So let's try this out. Manage NuGet packages. Let's go online. Let's look for Trollhunter. Now you notice here I didn't find Trollhunter, but I just uploaded it. Well, one of the new recent features that we added is concept of pre-release packages. So remember that dash beta flag I mentioned, that marks it as a pre-release package. And by default, we only show stable only. So if I say include pre-release, we should see that show up there and it has that little pre-release flag. The basic idea here is that we assume that if you, most people just want to install stable software, they don't want to take the latest and greatest pre-release all the time. But if you do want to, you have to opt in. So let me talk about, let me change gears and talk about versioning for a second. So versions in NuGet, we're trying to really push people towards using something we call semantic versioning. We didn't invent this. This is actually something created by Tom Preston Werner, who happens to be one of the co-founders of GitHub, which is kind of funny to me. And you can see this back at semver.org. And the basic idea is unlike the.NET version, which is four parts, semantic versioning is really just three parts. Major. You increase the major version any time you have a breaking change. So that can mean that sometimes, you know, packages, major versions might iterate more than you typically would. But by following this, people know what to expect. That's the beauty of it. Minor version. There's no breaking changes, but there may be new features. And then patches, there's only bug fixes. So remember earlier, I mentioned update package dash safe. That will update all the packages to the most recent patch version. And the assumption there is that that is about the most safe update you should be able to do. And perhaps a recommended update, because ideally that's just bug fixes. 
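Backing up a step, the publishing commands demonstrated a moment ago come down to roughly this; the package file name is illustrative:

nuget setApiKey <your-api-key-from-nuget.org>
REM stored once per machine; add -Source <url> to scope the key to a different gallery
nuget push TrollhunterApi.1.0.0-beta.nupkg
REM pushes the package; the matching .symbols.nupkg goes to symbolsource.org as described earlier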
But there's also this pre-release versions string here that you can append to the patch. Dash beta, dash alpha, dash whatever. So to give you some examples, here's an example of a set of versions in order of precedence. So for example, 2.0.0 dash alpha is less than 2.0.0 beta. And it goes, the string actually can be anything. You could have it be Zorg. And the precedence is determined by alphabetical order. So alpha comes before beta, right? The interesting thing here though, as you might expect, 2.0.0 beta comes before 2.0.0, because we would assume that once you remove the pre-release string, that it's now an RTM version or a stable version. So you can upgrade from 2.0.0 alpha to 2.0.0, for example, but you can't upgrade from 2.0.0 to 2.0.0 beta, whatever. So an increase in minor patch version, as I mentioned, that's what NuGet does if you use a safe option. It's also what happens if when NuGet is trying to locate a dependency. And I'll talk about that in a second. Minor version, major version, update, so pretty straightforward. So let's say, for example, we want to install this package, RouteMagic. And we look at RouteMagic and we notice that it has this dependency on a package called WebActivator. And the version it has a dependency on is 1.0.0. Now, this is a little bit misleading here. This version is what that's really saying is that we depend on version 1.0.0 or greater. You can also be more explicit about the version range using the version range syntax. And all the details of that are on docs.nuget.org. So this is saying that I can have 1.0.0 or greater. So the question is, let's say that in the gallery in NuGet.org, we have these sets of versions of WebActivator. Well, which one is it going to choose? Sorry about that. So it turns out that it's going to pick this one, 1.0.1. Why is that? Well, we try to actually pick the lowest version of the package that meets your dependency if that package is not yet installed. And the reason is that we figure if you set it dependent on 1.0 or greater, that's probably the safest one to install. Now, if you already had WebActivator 3.0 installed, that meets that version constraint, we're not going to downgrade it. We're going to just say, okay, that's fine. That should work. But if you don't have it, we're going to try to choose the lowest version. And that's why we thought 1.0.0. But we also had this assumption that if there's a patch version, it might be a security update for all we know. And if you're following semantic versioning, this should be as safe as installing 1.0 for the most part. So we're going to pick that one, the highest patch version, lowest major minor. Okay? So if you're ever wondering why, that gives you an idea of why it chooses which versions it ends up choosing. Now, this is all predicated on the idea that everyone's doing semantic versioning, which in general we know is not true. But even people who weren't doing semantic versioning typically wouldn't increment this version for a breaking change, although, you know, unless you're logged for NAT. Back to demos. What time am I supposed to be done? Anyone know? So I started at 11.40, right? So I guess 12.40? Okay, I got some time. Cool. So talked about versioning. So another feature that sort of, let me go back to my other project. 
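To pin down the dependency syntax being discussed, a .nuspec dependency entry looks roughly like this; the bracket notation is documented at docs.nuget.org:

<dependencies>
  <!-- a bare version means "this version or greater" -->
  <dependency id="WebActivator" version="1.0.0" />
  <!-- an explicit range: 1.0 inclusive up to, but not including, 2.0 -->
  <dependency id="WebActivator" version="[1.0,2.0)" />
</dependencies>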
So there's another feature that I really like, but it doesn't really fit anywhere in a natural flow of giving this talk, but I'll just show it anyways, is if you go to tools and library package manager and you're on a pro version or higher of Visual Studio, unfortunately, I'd like to make this feature available to all versions, but we depend on something that's within the pro versions of Visual Studio. We have this thing called package visualizer. And what that does is that examines your packages and creates this cool visualization of a package, which if you, when you do it on an MVC 4 package, it starts to look less cool and more cluttered. But you can see here on the left, there's my three packages, MVC application one, core, unit test, and then a graph of the dependency chain for those packages. It's kind of a neat way of sort of quickly seeing how the packages interrelate within your project. You can flip the project to the right. You can flip them to the top or bottom. Those seem kind of pointless, but you can do other types of graphs. Let's try zooming to fit. So you can see like when you get an MVC 4 project, you get quite a few packages and we have these other views as well. So kind of an interesting way of looking at that. I think this might be useful. If you find this useful, let me know because I thought it was a neat feature, but I wasn't really sure how people would end up using it. One thing that might be cool is to actually be able to delete packages right from this view, but we don't have that yet. Okay. So one thing I like to show off is just to give people ideas the full extent of what packages can do. So I wrote this other package called MoodSwings. Oh, it's downloading. Wow. Did the internet just go slow? Meanwhile, I just put it in the app. It's not that big. Oh, there we go. Install. So what MoodSwings allows you to do is it actually automates doing interesting things within Visual Studio. So for example, I can say Set-Mood. And let's say I'm in the mood to code and I like my dark theme. And you notice here I completely changed the settings within Visual Studio. I can set the mood back to presentation mode and it increases my font size. So a lot of cool things that, you know, you have full control of Visual Studio within these things. So that might give you some ideas. For example, I can do Insane. And that is insane. Let's go back to presentation. And I can do Set-Mood, this is one I've shown off before, Rick. Let me make sure this works. So we get ASCII Rick Astley. But I can't sing like that, but he can. Okay. So a lot of things you can do with NuGet Package Manager. So as I mentioned earlier, we passed 10 million package downloads not too long ago. So let me switch back and let's look at the actual number. Oh, 11 million. Wow. That number is growing so quickly. It's hard to keep track because we celebrate, we had a big celebration for the first million and then the second million. And then we realized, okay, this is getting ridiculous. And then we, I actually got to head back to Microsoft and have some cake with the team when we hit 10 million. We recently passed that. So I don't know why this number doesn't match what's on NuGet.org. But according to this, we have 6,051 unique packages. So this is the number that is really interesting to me because it sort of represents the overall health and variety of our ecosystem. And, you know, ideally those are 6,051 really good packages. But really, you're leaving? So, yeah, so I need to update the slide now. 
So we have 11 million package downloads. So as you can tell, NuGet is really getting popular among .NET developers. This is the part that's really appealed to me. It's open source. So I mentioned I no longer work at Microsoft, but fortunately I still get to work on NuGet because the source code is open source. It's actually an Outercurve project. The Outercurve Foundation is a separate foundation from Microsoft that's meant to make it easy for corporations and developers to work together on open source code. And so I'm still listed as the main project head of NuGet, although I'm finding it very hard to spend a lot of time on it as I adjust to my new job. But that's kind of why I'm telling you about it because I would love for you to contribute to NuGet if you've ever had an issue or a problem with it. All the client work is done on nuget.codeplex.com and all the server work is done on GitHub. So like the website, NuGet.org is done on GitHub. We accept pull requests. We do all the development in the open. There's no internal secret Microsoft repository. And it's, we've had a few external contributors who've done a lot of great work, but we'd really like to ramp that up. And, you know, of course we use NuGet to build NuGet, so we install a lot of packages within those projects. And if you want to read more on NuGet, I wrote an article for MSDN Magazine, Manage Project Libraries with NuGet. Hopefully you read that. There's a book out called Pro NuGet, and it's, I think it's Apress. It's really interesting because they go deeper into like how to use NuGet in the enterprise. They talk about a lot of the third-party stuff that's come, that's sort of been spawned out of NuGet. For example, if you want a private NuGet feed, you can go to myget.org and you can get a feed there. There's a deployment system that I think is called Octopus Deploy that's built on top of NuGet. Paul is working on a ClickOnce replacement called NSync that's also built on NuGet. So there's a lot of people doing really cool stuff with NuGet. One of the things we really would like, though, is NuGet to work on Mono better. And there's a couple of folks who are working on trying to create a NuGet plugin for MonoDevelop. There's already a NuGet plugin for SharpDevelop. So if that sounds interesting to you, we would love some help because in October I'm supposed to give a talk on NuGet or something at Mono Develop and we don't yet have anything working for it. And I'm really busy, so I would love your help, please. And of course, there's docs.nuget.org. Just to plug my own book, I'm working with, in fact, all three of my other co-authors for the Pro MVC 4 book are here at the conference. So if you see Brad Wilson, John Galloway, or K. Scott Allen. But I wrote a chapter about how we developed the NuGet Gallery, nuget.org, using MVC and using NuGet. And so I talk about a lot of the real-world lessons that we learned building the gallery. So if you're looking for a really interesting ASP.NET MVC real-world project, I recommend checking out the NuGet Gallery source code. All right, well, that's it. Thank you very much. Please don't forget the evaluations. Just choose the green one and we'll be okay. Actually, anyone have questions? Yeah.
Developers are known for “scratching their own itch,” producing thousands of libraries to handle every imaginable task. NuGet is an open source package manager from the Outercurve Foundation for .NET (and Windows) developers that brings these libraries together in one gallery and accelerates getting started with a project. It makes it easy to incorporate these libraries in a solution. In this talk, Phil Haack, project coordinator for NuGet, will describe the problem that NuGet solves, how to make effective use of it, and some tips and tricks around the newer features of NuGet.
10.5446/51138 (DOI)
Hello, hello, hello, hello, hello, hello, hello, hello, hello. Echo. How many of you have iPads? How many of you have kids? You ever bought that paper app? Do you know what paper is? That drawing app for the iPad? So I'm a gadget freak. I have a lot of old iPads. And against my better judgment as a father, I gave my youngest, or my second youngest, my 10-year-old, my old iPad, and I put paper on it. And she made my slides. It's absolutely incredible. So if you have kids with an iPad, let them play with the paper app. It's awesome. So thank you guys for showing up today. If you can see the name of the talk is Five Things You Didn't Know About PostgresQL. I'm going to talk about French stuff today. So it's going to look wacky. I'm not going to try and convince you, oh my gosh, you have to use Postgres. That's not my point. My point is that Postgres is a very, very interesting alternative to some mainstream databases. I'm going to compare it to SQL Server. And I'm going to say some probably challenging things about SQL Server, but I like SQL Server. If you like SQL Server and it works, rock on. I'm going to crap all over MySQL. So if any of you like MySQL, prepare to be offended majorly. So I'm going to start with this. Why in the world do you want to care about Postgres? Number one, the name is horrible. And everyone I talk to about Postgres says, what's with the name? And they say it is the single worst design decision that they did for the database was to name it that. So I'll tell you in a nutshell, the reason why you care about Postgres is that comparing it to big enterprise engines like SQL Server and Oracle, it will stand right up to them. In terms of performance optimizations, in terms of sharding, in terms of replication, in terms of table compression, every enterprise feature you would ever want, pretty much, is in Postgres. Now, of course, there's going to be some exceptions when you're installing SharePoint. It won't work with Postgres. So there might be some situations, but Postgres will stand right up to them. It's fast, it's free, it's got some great tooling. The guys at Heroku created this little thing that works on your Macintosh. It's what I'm using right now, Postgres.app. When you want to install it, you click the installer, download right here, you click it, it drops the DMG file to your Macintosh, opens it up, you drop it in your applications, double click it, and you're off and running. That's it. Who here has tried to install SQL Server? Is it fun? You guys have a good time? You know what it's good for is when you're installing it, you can actually get up and leave your desk and go... A few hours. Am I back on? I can yell really loud. You can get up and leave your desk for a couple of hours, and then, yeah, you got a free little afternoon vacation. It also has some really good tooling, believe it or not. Navicat is something that you're about to see. What you're looking at here is the Navicat Data Modeler. They also have exceptional tools for Windows, and they do cost money. They're about $100. Not very much, but it's worth it. It has a great importer. It's got an awesome Data Modeler. It's got a visual query builder. It's got scheduled tasks that you can plug in, so it's really good stuff. What you see down below on the right is Entity Framework using Postgres. Not a lot of people know you can do that, but you can. Unfortunately, you have to buy the driver from DevArt, which is, I think, $100. But there is a free, expressed version. 
If you don't want to work with Entity Framework and you just want to work with the visual tools inside Visual Studio, it'll hook right up to Postgres, and it runs just right. Runs great. This is MonoDevelop. I don't know if anybody here uses Mono at all. MonoDevelop works perfectly with Postgres. It's got a query window, and it'll just hook right up straight with the drivers, and off you go. There's no problem at all. If you're a .NET shop looking for an alternative, and you want to check out Postgres on Mono, you absolutely can. Let's have some fun, shall we? We already talked about comparing this to SQL Server. So this is thing number one that you might not know. It does stand right up to SQL Server. I am not just saying this as a geeky developer. I have a DBA friend who loves Postgres. I did a video with him at the place I work, Tekpub, and it's an instructional video. And I just sent him an email, and I said, Rob, I know you're a SQL Server DBA. Do me a favor. Can you just take a look at Postgres and tell me what you hate about it? Because I really need to speak to it. I am not kidding. Within a week, he emailed me back and was gushing. He says, I love this thing. This thing is utterly amazing. And I said, what about performance? And so he took the Stack Overflow data dump, put them side by side, and he said, all my tweaks that I did for SQL Server to get it to perform on top of the Stack Overflow data dump were ten times easier with Postgres, and it works just the same. So I just want to throw that out there. But now is the part where I get to make a lot of fun of MySQL. I did this a little bit last year if you attended my talk. I'm going to do it again this year because it's too much fun. And I want to show you if you are considering MySQL, please don't. Okay? Now, I will say before I do any of these demos that, yes, if you're a MySQL wonk, you can go in there and change the database engines around. You can change the configurations. You can get away from the problems I'm about to show you. But keep in mind, these are the defaults. Everything you see me do, these are the defaults. Okay? Here we go. Okay. So this is MySQL 5.5 running locally on my machine. I have a little database here called NDC. And what I want to do is get this table to stop shaking. So I'm going to put my foot on it. What I'm going to do is I'm going to create a table. And we are going to track the gold medals won at the 2010 Olympic Games. So I'm going to put this long incantation here. Primary key, not null, auto increment. And let's track the country that competed. And so we're going to set this to varchar(50), not null. I'm going to put that in caps because I mean it. And then I'm going to put the event name, varchar(50), not null. And then how many golds were won? And that's just going to be an integer, not null. Seems pretty clear. You guys understand SQL pretty much? It's just you guys get it, I hope. Thank God. So I'm going to run this and there's my medals table. All right. That's what we want to see. So let's say we're going to track Sweden's Olympic medals this last Olympic Games. So I'm going to insert into medals. And being a good Swedish person, all I really know is the name of my country. I don't know anything else. So I'm going to say values, Sweden, and we'll figure the rest out later. What do you think is going to happen here? I'm going to run it, and if you can see that down there in my window, no errors. No errors. You guys see a problem here? I specifically said in all caps, not null. 
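The SQL being dictated here is roughly the following; column names are as spoken, and the point is that MySQL 5.5's default sql_mode accepts the incomplete insert anyway:

CREATE TABLE medals (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  country VARCHAR(50) NOT NULL,
  event_name VARCHAR(50) NOT NULL,
  golds INT NOT NULL
);

-- only the country is supplied; the other NOT NULL columns get silently
-- filled with '' and 0 instead of the statement failing
INSERT INTO medals (country) VALUES ('Sweden');
SELECT * FROM medals;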
No, I don't want nulls in there. But MySQL said, well, it's okay. We can do it anyway. So let's go and take a look. Select star from medals. What exactly happened here? MySQL decided, well, you tried to insert null, but what you really meant was zero. And then you tried to insert a null into the event name; what you really meant was an empty string. And to prove that, we have where event name equals that. What? This is wrong in a lot of ways. Right? So this is so wrong because you're planting bad data. You're planting an empty string. You're planting a zero. And that's not at all what we meant. We kind of just goofed up. No data should be allowed in this database. MySQL should honor your directives. No nulls. Instead it said, well, I think you meant this. And so in it goes. All right. So we'll forgive it that for now. Let's alter this table. And we are going to suggest that Sweden might have improperly won some medals at some point in the future. So we're going to add a column called bribes paid. This is going to be a decimal. And we are going to set it to (10,2) so it's money, not null. Run that. Whoops. Alter table, add column, bribes paid, decimal(10,2), not null. One second. Oh, thank you. Yeah, run that. Good. Okay. We specifically said we are putting some money in here. It can't be null. So what do you think happened here? Well, as we can tell, MySQL has no problem with that. Zero again. All right, all right, we'll play the zero game. Not a problem. Let's just keep going here. We're going to update this now and make it right. So we're going to update medals and we're going to say bribes paid. We're going to set it to $1000 where ID equals one. Oh, boy. Sorry. And that's all good. So we'll go back here and we will select star from medals. Run this. And okay, finally we have some sanity happening. We're able to track this stuff the right way. But then all of a sudden our Swedish person comes in and says, that is ridiculous. You're allowing a scale in here of 10. We would never pay that much. We're too cheap. Let's reset this down here a little bit to two. We only pay a couple hundred dollars. Maybe that will bribe our judges and we'll get our medals. So we're going to reset that. What do you think is going to happen? Let's go back and take a look at our query. We are going to run this. MySQL decided that since we said, well, we wanted only a precision of two, we're going to reset all your data down to fit that. So our data was lost. Roll it around your head. This is the default setting for MySQL. This is what it does. If you've ever used a Rails migration and accidentally got some stuff messed up and then started jamming in some tests, and you're wondering where is 99 coming from? Well, it's MySQL doing its fun work. All right. So the Swedish person says, you know what? I think I messed up here. Let's set this back. And this time I'm going to set it to eight. And I'm going to go ahead and update this one more time. And this time I'm going to set the golds because we won at least 100 golds. I'm not really sure how many we won. So I'm going to say update medals, set golds equal to a whole lot. A lot. Can you guys see a problem there? Let's run that. And again, we don't have any errors. What the hell is this database doing with our data? So I'm going to say select star from medals again. Run that. So here's the question that I have as a developer and a person who likes databases. Was it ignored or was it turned into a zero? 
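Again as a sketch, the statements behind this part of the demo are roughly:

ALTER TABLE medals ADD COLUMN bribes_paid DECIMAL(10,2) NOT NULL;  -- the existing row silently gets 0.00
UPDATE medals SET bribes_paid = 1000 WHERE id = 1;

-- shrinking the precision silently truncates the stored value to fit,
-- which is where mysterious 99s come from under the default sql_mode
ALTER TABLE medals MODIFY bribes_paid DECIMAL(2,0);

-- a string assigned to an INT column is silently coerced rather than rejected
UPDATE medals SET golds = 'a whole lot' WHERE id = 1;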
Anybody want to guess what MySQL did? Because I have no idea. And this is scary if you're using a database. Well, let's see if we can find out. So let's say select cast and let's see cast and we'll say ha ha ha as unsigned. That's a zero. That doesn't make much sense. Does it? So it turned it into a zero. It turned a string into a zero. But I thought it would be a null. No, I thought it would be an empty string. I'm so confused. Okay. For fun, let's just say, let's say ha ha ha and we'll divide that by zero. At least it's going to catch that for us, right? If it's zero divided by zero, it should explode. There's a mathematic operation happening here. Yeah? What do you guys think? That's null. Right? Ha ha ha divided by zero. All right. So I'm picking on MySQL a little bit. At least it's going to get this mathematic operation correct. If I had, say, a thousand divided by zero, it's got to get this right, right? Don't you think? You'd think. It's null. Okay. So I'm picking on MySQL enough just to show what Postgres would do. Let's go in here and do the same thing. We're going to create a table, medals, and we're going to do an ID. And this time we're going to use some of its sugary little fun syntax. Serial, which is nice. That big, long incantation that you saw me do can be replaced by serial. We'll say it's a primary key. Country, varchar(50). Event name, varchar(50). And golds. And I'm going to come through here and I'm going to say, oops, set that to int. Not null. I mean it this time. And I'm going to put that directive up here as well. Run that. Hooray. We've got our table in there. So now let's see if we can do the same thing. Insert into medals and then country. Values. Norway. Because Norwegians use Postgres. Now we have some sanity. We have an error right here. Wow. It violates a not-null constraint. Amazing. Isn't that neat how that works? So just to show you that that works like that and also the rest of the stuff I don't want to go over. So I'm going to cruise along. But I will say that if you try and divide 100 by zero, you get a division by zero error. So Postgres honors your data, protects your data. In fairness and niceness to MySQL, you can change it away from that default behavior to have it behave like a database. So, okay. All right. So that is MySQL. Moving on to number two, things you might not have known about Postgres. There is this thing called the MVCC, multi-version concurrency control. Has anybody in here ever run into a deadlock when you are trying to update some data in SQL Server or most databases for that matter? Postgres has a very interesting way of handling this. What it actually does, and what you're about to see, is it takes the transaction that you're currently executing. If you try and work with data within a transaction, it actually takes a snapshot of it and puts it into memory and allows you to work on the set of data in memory, leaving the database alone. Multi-version concurrency control. It's basically a version of your database, set off to the side. It leaves your database alone so other people can work with it. So let's take a look. What happens if we do this in SQL Server? So in SQL Server, this is running on my Windows side. We have a post table in there and you can see we have some scores. It's a Stack Overflow replica database. What we want to do here is open up a transaction. We're going to update this table. We're going to set a score to a zillion where ID equals one. 
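For reference, the Postgres version of the medals demo above condenses to roughly this SQL:

CREATE TABLE medals (
  id serial PRIMARY KEY,
  country varchar(50) NOT NULL,
  event_name varchar(50) NOT NULL,
  golds int NOT NULL
);

-- Postgres refuses the incomplete row instead of inventing values
INSERT INTO medals (country) VALUES ('Norway');
-- fails with a not-null constraint violation

SELECT 100 / 0;
-- fails with a division by zero error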
I'm going to highlight, then we're going to pull out, we're going to select the post so we can see our changes. I'm only going to highlight the top of this and run that part. Notice that the data is changed, but I'm within the scope. This is what we expected. But I haven't committed the transaction. The transaction is still open. In a second window, I'm going to say select star from post and I'm going to ask for the exact same record. This is what happens with deadlocks. You get deadlocked. You can't have that record until that transaction closes, which makes sense, right? SQL Server says no, no, no, you can't have it. Well, being a good developer, I say I need my data. I can't have my application slow down. So I'm going to use my tricky little NOLOCK keyword. You guys ever seen NOLOCK in a query? What NOLOCK does is says, screw your transaction. I want the data and you get dirty data out of a transaction. So let's say that all of a sudden for some reason you need to roll back that transaction, which I'm going to do right now. The transaction is done. The update doesn't happen. But you go and then you can see this. This is our accurate data that's in the database. Nothing was ever changed. But in our other window, we've just reported back crap data, which is okay. This is something that you learn as a developer and you kind of learn about deadlocks and all the things you need to do. Unfortunately, to get around this in SQL Server, if you don't want deadlocks, let's say someone's updating a whole huge bulk of a million rows, you have to use NOLOCK. And in fact, plenty of frameworks and a bunch of other ORMs use NOLOCK just de facto, which is insane. So that's what you have to do in SQL Server. And let's take a look at what you do with Postgres. So come back over here. And I'm going to create my table. And I'm going to mirror exactly what happened up there. Create table posts. ID, serial, primary key. And then title. Yep. No, body. And this is going to be varchar. Actually, I'll set this as text. And score. And this is an int. And we're going to default this to let's just say one. Run this. We've got our table. All right, so let's insert a record in here, insert into posts, body, and values, hello NDC. Run that; we've got a record in there. Okay, so let's open up a transaction. And down below here, we'll commit it. And then we'll update posts, set body equal to hello NDC, all caps, 2012. What year is it? Where ID equals one. And then we'll do the same thing we did before. Select star from posts. Okay, the exact same thing we did before in SQL Server, which you saw. And I'm going to just highlight the first part. And I'm going to execute it, leaving the transaction open. So it's running. Whoops. There you go. Let's pull this over. So it's running. Hello NDC 2012. That's what we expect to see. Okay. Open up a brand new window, select star from posts where, if my spelling works, ID equals one. Do you notice the difference in the two demos? We've got data. It locked last time with SQL Server. This is the MVCC literally in action. You don't have locking to worry about with it, which is kind of handy. So I can come back over here and commit this. We are committed. Come back over here, run that, and now it's updated. Simple stuff. Now, in fairness, if I had opened up another window and started a second transaction, then the locks would go in place and say, no, you're doing two transactions on a single piece of data. Screeching, grinding halt. 
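The two-session sequence just demonstrated, condensed; under Postgres's default read-committed isolation the reader is never blocked:

-- session 1
BEGIN;
UPDATE posts SET body = 'HELLO NDC 2012' WHERE id = 1;
-- transaction deliberately left open

-- session 2: no NOLOCK tricks needed, and no blocking;
-- it simply sees the last committed version of the row
SELECT * FROM posts WHERE id = 1;

-- session 1
COMMIT;
-- from now on session 2 sees the updated row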
So something to consider when you're thinking, well, locking big deal. I don't ever run into it. We're talking about quick transactions. Have you heard of Twitter? One day Twitter went down. They use a MySQL database. No wonder. They use MySQL database and all these tweets are going in the database and they're realizing that they have a problem with the code and it's putting in crappy data. And so the DBA goes and he sits down and he says, well, I have about 50 million, 100 million, whatever rose. So I'm going to write an update statement, update the table, and I'm going to set all these values, blah, blah, blah. Go. Well, guess what happened? I'm going to put it on every single record in the database for five hours and Twitter stopped working. I mean, again, so you don't want to do these kinds of things. You really need to be aware of what's going on. MVCC can absolutely help you. Okay. Moving on. This is one of my favorite demos. This tends to blow people's minds. You can have inheritance in Postgres. One table can inherit from another. And I had a discussion about this with another database friend and he said, oh, yeah, it's just trickery. You can do the same thing in SQL Server with clever joins and union queries. No, no, no, no, no, no, no. This is at the table level. You can actually do inheritance. Let's take a look. This is a fun one. Okay. Go away. All right. So we have our post table. Let's pretend that we are Jeff Atwood designing Stack Overflow. Okay. So we have our post table and we're thinking about how do we put this together? Well, we have our ideas in terms of object-oriented programming, right? We know we have questions. We know we have answers. But how do we store them in the database? Well, you kind of flip over in your DBA mind and you say, well, they're all really just posts. They just have to be different kinds of posts. And that makes perfect sense. What we're going to end up with is a huge post table that's trying to accommodate both the concept of answers as well as questions. So you might have one row of data that is literally a question, another row of data that's an answer, and they're going to have data that just takes up space. They're going to have empty columns because they're not a question, not an answer. Okay. Well, in the Postgres realm, you can actually do something completely different. And get my query window back. There we go. Close this. I'm going to create a table and I'm going to call it Questions. And I'm going to have a question ID, serial primary key. And then I'm going to have asked on timestamp tz, which is a weird data type, but I'll talk about that in a little bit, asked on timestamp tz. And this is going to inherit from the post table, just like that. Run it. Everything is good. What the hell did I just do? Select star from questions. I inherited from the post table. There's no data in here, which is weird because we do have a post record, right? So let's take a look at that. It's there, but it's not part of our questions table. These are literally physically different tables on disk. They're partitioned. And this is one of those interesting things about Postgres where all of a sudden things start to make some strange sense. 
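The inheritance statement being typed here looks roughly like this: questions gets its own columns plus everything defined on posts.

    CREATE TABLE questions (
      question_id serial PRIMARY KEY,
      asked_on    timestamptz
    ) INHERITS (posts);

    SELECT * FROM questions;   -- empty: the existing row lives only in posts
    SELECT * FROM posts;       -- still shows the one post inserted earlier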
If you want to do partitioning in SQL server because you have a massive table that's growing and growing and growing, like the Stack Overflow Post Table, you could do a partitioning scheme where you could say, well, okay, I'm going to break this thing out where posts that start from A to L are going to go over here in this table and partition it out on disk like that some arbitrary way. Postgres says, why would you do that? Just logically, semantically separate your tables. And what it does under the covers is it partitions them for you. So questions is partitioned from posts. My other table I'm about to create right here answers is also partitioned from posts. You can have indexes dedicated to those tables. So they don't actually index the entire post table. They'll index the questions table instead. This is one of the fun things about Postgres. You say, wow, that makes a lot of sense. Well, how do you do partitioning in SQL server? And the answer is you pay a lot of money for an enterprise license. You get it for free with Postgres. This is something to consider. I'm not trying to speak too ill of SQL server. Okay, so let's continue on here. And I am going to create a table. Answers. Answer ID. Serial. Primary key. And this is going to be answered on timestamp tz. And I'm going to default this to now. Postgres is full of all these little sugary, little syntax things. It makes it kind of fun. And again, this is going to inherit posts. Run this and we are good to go. Now we have three tables in here. Answers, posts, and questions. So let's insert into our questions table. And actually, let me do this. Let me see the data that I'm trying to put in. So we are going to insert a question into here. Insert into questions, body. And we will set an initial score. Question ID is set for us. Asked on. I forgot to set a default there. Values. What is a bunny? It's about the caliber of the questions. I'm just kidding. Okay, we are going to set the initial score to one and asked on is now. And we are going to run that. And we have some data in our questions table. There it is. Now you might be thinking, well, what happened to the post table? Let's find out. Now we have two records in our post table. Same data, but it's shared between these two things. You can think of it sort of as a union. Not really. Under the covers, they are partitioned, they are put together. It's a deep story and I honestly don't know the exact workings of it, but this is what the end result is. We have a questions and a post table and they are sharing data. Which I think is groovy. So that is the basics of inheritance. I'm going to come back to this in just a second. This next one is also one of my very favorite demos. Postgres has the concept of a foreign data wrapper. Now if you have ever used linked tables in SQL server using an ODC connection where you want to pull in a table from somewhere else and link it virtually, you certainly can. Postgres allows you to do it with any data source. If you know a little bit of C or some Python, you can write a foreign data wrapper. But basically, any wrapper that you want to use has already been done. There is foreign data wrappers for Gmail, there is foreign data wrappers for Google search, foreign data wrappers for RSS, HTML and Twitter. So you can query Twitter. What would querying Twitter look like? Let's take a look. So I am going to do this right here. I am going to clone Twitter foreign data wrapper from GitHub straight into my drive. I am going to CD into it. 
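Before the Twitter wrapper demo continues, here is the answers half of the same inheritance idea as dictated, plus one related detail worth knowing: a plain query on posts also scans the inheriting children, while ONLY restricts it to the parent.

    CREATE TABLE answers (
      answer_id   serial PRIMARY KEY,
      answered_on timestamptz DEFAULT now()
    ) INHERITS (posts);

    INSERT INTO questions (body, score, asked_on)
    VALUES ('What is a bunny?', 1, now());

    SELECT * FROM posts;        -- includes rows stored in questions and answers
    SELECT * FROM ONLY posts;   -- just the rows physically in the parent table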
The instructions when you read them on GitHub, you just have to say make and make install. What this is going to do is just compiles it down, it sniffs out and says, where is Postgres? There it is. It snoops it out, finds the extensions folder and drops the foreign data wrapper in there. So then you just got to go back to your database and say create extension because we want to extend our database now to use an extension, Twitter FDW. Run. And it's run. And I am actually going to do this from here. Easy, cool, and you see. I am going to use the command line for this because it reads a little bit better. No, I won't. And so I am going to say select from user created at text from Twitter where Q equals, and let's search the NDC Oslo hashtag. If you guys are quick and you tweet something, you will see it right here on my Twitter results. That's a live query of Twitter through Postgres. Now you might be thinking, oh dude, come on, who is going to ever want to do such a thing? I don't know. It's there. It might be things you didn't know about Postgres. But still, you could have uses for this if you want to track hashtag stuff and put it in a database, have it run on a timer. That's kind of fun stuff. And again, there's all kinds of wrappers that you can use, Gmail, RSS feeds and so on. Alrighty, so there is that. Let's head back over to here. One of the things I really like about Postgres, and you've been seeing me use these data types and also some sugary syntax, is that it's rather intelligent when it comes to storing your data. And it can be a good thing and it can be a bad thing. When you look at the data type list that you're allowed to use with Postgres, it is huge. And it's actually intimidating. For instance, they have geometric data types, object shapes, circles, lines, planes. And it's funny, I was talking to a friend of mine and they said, why would you ever want a square in your database? It doesn't make any sense. He said, well, did you ever see those applications where you put a tile down and you try and make a word like Scrabble? It's like try and make a Scrabble game and you can represent those tiles, the squares in the database, because they keep their position with each other. All kinds of stuff like that. You can extend Postgres. You saw me work with an extension here. And then Postgres to have huge data types using a thing called PostGIS, which is a full geographical information system, draws maps and everything, and then you have data types up to Wazoo. Let's just take a look at some of the more fun ones. And so let's go back to here. Okay, let's start with the one that I really, really like, which is timestamp TZ. And I can say select asked on from questions. So a lot of people have problems with timestamps and databases because they are not aware of time zones. Typically what everybody is supposed to do is use UTC or make it relative to GMT, switch the time around so we know what time it was globally, no matter where your server is. Postgres says you can do that. If you use timestamp, it will GMT it for you, or UTC it for you. If you ask for a time zone to be stored, it will do it. So we are in plus zero two, which is a long way from my minus 10 where I live across the globe. But it's kind of nice to see timestamps handled like this. If we wanted to, we can update that timestamp. Using some sugary syntax. Whoops. Did I spell something wrong? Yep. Thank you. There we go. And we'll select. And so it updated the date to tomorrow. 
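Cleaning up the wrapper query being read out loud a moment ago, a sketch with the table and column names as the speaker dictates them; the extension name and the exact columns exposed depend on the wrapper version.

    CREATE EXTENSION twitter_fdw;

    SELECT from_user, created_at, text
    FROM twitter
    WHERE q = '#ndcoslo';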
The nice thing is you can have things like tomorrow, yesterday, now. So you don't have to go through and figure out what's the function name. It sees a string in there and replace it for you, which is pretty handy. One of my favorites, believe it or not, is let's just do this. Let's do it. Infinity. And it says infinity. What does that mean? And why would you ever use it? I know it sounds a little strange, but consider a situation where you want to order by, ask Don, and no matter what happens, you want this record to appear last. If you want it to appear first, you can certainly do that. Can you guess what negative infinity means? It's not fair that I'm making fun of my SQL and I can do these kinds of things in Postgres. So now it's negative infinity. So this means, this guarantees, no matter what data goes into this table, this thing is going to come up first. You can do this with dates, you can do it with numbers, you can do it with, well, timestamps as you can see here. So again, I'm not saying this is going to save your application from certain doom. It's just a funny thing that you can do in Postgres. All right, let's take a look at another really fun data type. I'm going to alter table posts, and I'm going to add a column called tags. Now, if you're thinking about a tagging situation, you've got a number of different ways to do it. If you've ever used tags in a system, pretty much the database guy sitting next to you is going to say, oh, we've got to have a mini to mini. You're going to have a tag table over here, and you're going to have a join table here, and blah, blah, blah. But if you're working at Stack Overflow, they would say, yeah, that's two joins, too many. Queering joins can be expensive. We have a big table. We don't want to have to do rollups on tags. Can we just denormalize it? So now you come to part two, where typically you'll have something like just, okay, we'll do a var car 500, because we're going to save some space for a whole bunch of tags that could go in there. That is a massive waste of space, because what you end up with is usually a comma separated or a pipe separated list, and then you've got to parse it and code. It's a big pain in the ass. Wouldn't it be fun if we could just go like that and have an array as a data type? You wouldn't think so, but you can. And this is kind of fun. I love this. So you guys select star from posts. Spelling, spelling. Woo! Tags are null. How do we work with these funny things? Well, let's play around with our question here. What is a bunny? And we'll tag that question. Updates, posts, sets tags, equal to, and you'd think this syntax would be rather difficult, but it really isn't. We said it's going to be a var car, so that means it's going to have to be some type of string that we're going to put in there. It could be an integer, it could be a date, it could be any type that we want. So I'll just say a bunny is a critter, and it's also cute. There we go. Select star from posts. Run that. Whoops, I forgot to do my where. Oh well. This is where things kind of turn from fun to weird, and I'll be up front about that. We are looking at an array. You can see it's an array based on this textual representation. But if you try and work with this column, it's just a string, because the editor doesn't know what to do with it. It wants to represent it. The only way it can represent it in this editor is with a string. No orms understand this. 
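The timestamp sugar and the array-of-tags demo, consolidated into a sketch; the array literal could equally be written as a quoted '{...}' string.

    -- The timestamp sugar: these are all valid literals
    UPDATE questions SET asked_on = 'tomorrow';
    UPDATE questions SET asked_on = '-infinity';   -- sorts before any real timestamp

    -- Tags as a real array column (added on posts, so the child tables get it too)
    ALTER TABLE posts ADD COLUMN tags varchar(100)[];
    UPDATE posts SET tags = ARRAY['critter', 'cute'];   -- the WHERE gets skipped in the demo
    SELECT * FROM posts WHERE 'critter' = ANY (tags);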
So maybe one day someone will go and rewrite their arm and have it understand array data types and postgres, but to be honest, they're not used very much. So I just want to throw that out there. Maybe someday, who knows. But how do we query against this is what a lot of people want to know, and you can say where critter equals any tags. Which is a pretty clear query. So there we got them all back, because critter appears there. So I'm going to do that just to show you that this query works. And there we go. So you can throw an index down on this, and you can all of a sudden now do some intelligent queries with arrays and tags. And you're kind of saved a little bit. Now you might be thinking, well, that's kind of wonky. Sorry, my power is not on. Whatever. That's kind of wonky. Who would ever do it? Well, Stack Overflow might, if you think about it, because they could have a lump table where they're doing all their aggregates. They can query off that later, but when they actually want to know if a post is part of a tag, this is how they could do it. Okay. Last data type that I'm going to show you is your own. I'm just going to create a type. Now you can do this in SQL Server. You can do this in a lot of other ones. Postgres has an interesting way of dealing with this, and I'll show you what I mean by that in just a second. We're going to define a person to have a name, and it's going to be a VARCAR. And an email. So now we have a type. Whoops. Sorry. So now I have a type person. Oh, already exists. Okay. Let's do this. Sorry. One second. I had a little fluff in there. Drop type person. Go. All right. Go. Okay. So how do we do this? Well, we could say alter table questions. Add asked by person. Run that. Select star from questions. Let's do this so you can see it. So right now it's null. What in the world? How do we ever work with such a thing? Well, let's do this. We will update questions set asked by equal to. And when you work with a composite type, which is what these things are called, you have to surround them with parentheses so that the database knows what you're talking about. So you could say asked by, and then you just put in the value. Me. Rob at techpub.com. And run that. It's in there. So now when we run this, again, we run into some weirdness here. Data doesn't know what to do with this. Neither would most query tools to composite type. And it would require knowledge of the type to then go and show it correctly. So basically, this is a database concern, but it's interesting to know that it's there. Again, this is represented as a string. I can query from this. And I could say we're asked by dot name equals so and so and so on. But I'm going to stop short of that demo and say there is one really interesting way that you can actually use what I've done with the questions tables so far. So now it sounds like a lot of arm waving. There we go. Oh, I'm sorry. Before I finish this last point, it's every table, and this is weird. This is going to break your brain. Tailing on from the types in the database. Every table in a Postgres database is a type. So you can use it just as I used it if we wanted to. I can say alter table answers. Add column question, which is of type questions. So it takes that table definition and then jams it into answers. And so now when you, if you want, now you can go and query it. This isn't a very good example because you have to also put in an ID and all this other stuff. 
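The composite type demo from above, consolidated into a sketch; the ROW constructor and the parenthesized field access are the parts worth remembering.

    CREATE TYPE person AS (
      name  varchar(50),
      email varchar(50)
    );                          -- drop the old one first if it already exists, as in the demo

    ALTER TABLE questions ADD asked_by person;

    -- Composite values are written with a ROW constructor (or a quoted literal),
    -- and reading a field back needs the extra parentheses
    UPDATE questions SET asked_by = ROW('Me', 'rob@techpub.com');
    SELECT * FROM questions WHERE (asked_by).name = 'Me';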
But just knowing that, just knowing that you can use a table as a type in Postgres is weird. And that's the way it's built. But anyway, coming back here to my questions. I've got an array and I've got this person type and I can query it within Postgres, but I can't do much with anything, do anything much with it outside of the database until I decide to use a different extension. And this is where things get very interesting. So I am going to extend my database with another extension, create extension PLV8. Oops. Well, it already exists. Just for fun to see the demo work. Because it's all got to work. Oh, fine. Whatever. Okay, PLV8. Does anybody know what PLV8 is? Do you guys know what V8 is? The V8 JavaScript engine from Google? I just plugged it in to my database. So if you guys don't want to learn the nasty syntax that comes along with writing functions, you can now write your functions using the JavaScript V8 engine provided by Google. And to show you what I mean, why would you even care? It's just interesting. Sorry. I'm losing my window here. Drop some code in. Where is my query window? This syntax isn't the most friendly in the world, but most database syntax isn't. I can now create a function called toJSON. It's going to take some query text. Inside of here, I'm running straight up JavaScript. It's going to use the PLV8 engine to execute a query for me. And any time you execute a query from within a function that uses the PLV8 engine, it comes back as a raise and as basic JavaScript JSON types. So this is going to do something fun. So I'm going to run this. We have our function. And now I'm going to say select toJSON. Select star from questions. Yes, I know that looks weird. Run that. And open this up in Sublime. And lose my window. Look what it did. It took my tags and it made it an array. It understood that it was an array. So now I have some JSON to work with. It took my composite type and made it into a JSON object. I can provide some pretty interesting JSON straight from my database using this function. It's really fast because it uses the V8 Google engine. Is it going to save your application from certain doom? I don't know. You don't know how you guys are going to use this thing. All right. So that is that. So those are the five things that I decided to cover. But I'm going to stretch this into six. And this is where I kind of move away from the demos and kind of just get into talking about databases and choices in general. The biggest thing about Postgres is that it's community supported, community driven. People need a bit of functionality and then they get together and they build it and they bolt it on to Postgres. The Postgres team is super tight. Those guys since version 8 have decided they're going to focus heavily on performance because they want to match these enterprise engines and they did it. And the database now these days is really fast. You couldn't say this about Postgres, I would say two and a half, three years ago, before version 8, version 7 and before. But now the thing is really, really fast. It's also free. This slide I took from my friend Rob Sullivan who loves to give presentations on Postgres. As I mentioned, he's a SQL Server DBA. He has got to figure out the licensing needs for his company. They are a big company. They're a big enterprise company and they have big enterprise SQL Server licenses that they have to pay. This is an invoice that he had to pay for SQL Server. Okay, I know, we pay for our software. 
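For reference, the PLV8 function shown a moment ago has roughly this shape. This is a sketch, not the exact code from the demo, and the function name here is made up to avoid clashing with anything built in.

    CREATE EXTENSION IF NOT EXISTS plv8;

    CREATE OR REPLACE FUNCTION to_json_text(query text) RETURNS text AS $$
      // plv8.execute runs the query and returns the rows as plain JavaScript
      // objects; array columns come back as JS arrays, and in the demo the
      // composite column came back as an object as well
      var rows = plv8.execute(query);
      return JSON.stringify(rows);
    $$ LANGUAGE plv8;

    SELECT to_json_text('SELECT * FROM questions');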
We have to pay for our software because it's going to be used. But he had to go through this document and I was actually going to show you the document for fun to see how to figure out this licensing. It is mind bogglingly difficult because you have these things called core factors and core factor tables. What's a core in your database because you pay per core? It's an Oracle licensing model. Well, it used to be a physical core, but now they have virtual cores. Virtual cores were never counted before, but now they are. All you need to know is that you pay a lot of money for the privilege of using SQL Server. Now, I don't want to sit here and just crap all of our SQL Server like I did my SQL. But when you're sitting there through the installation of SQL Server just to back a web app, just to back something like, I don't know, a small little commerce front end that might go into your back-end billing system, you kind of have to ask yourself, what am I doing? Because this is a big workhorse database. Are you getting the value out of this database that you should be? Is the database backing a web application going to do automatic compression? What you're looking at here is SQL Server on the left side, Postgres on the right. This is again, Rob Sullivan did this for me, is a SQL Server DPA. Loaded up the entire stack overflow dump, we have a 30 gig database on disk. On the right side, you see the same database, but it's only six and a half gigs. Those are the toast tables, whatever that means, in Postgres. It just does automatic compression for you. Why do we care about automatic compression? Because every time you query anything in any database, really, it caches that data in RAM. So it doesn't have to get it off the hard drive again. Anytime you have to hit the hard drive, it's slow. When they start the data server for Stack Overflow, the first thing they do is literally select star from posts. And it takes about 20 minutes, but it loads the entire database in a memory. What would be the limiter on that? The size of your memory, the size of the RAM on your machine. Do you see the difference of the needs here for the size of the machine? If you have 30 gigs down on disk for SQL Server, then you need at least 30 gigs, or more, hopefully, on your database machine. Postgres helps you out with that. By the way, if you want to have compression in SQL Server, you get to pay an enterprise license fee for that. It comes out of the box, works out of the box with Postgres. Okay, big deal. Why are we talking licensing? We're all basically developers. Why do we care? Because you need to think about these things, believe it or not. You come in and someone says, I want you to write me a web app, and I want this web app to blah, blah, blah, blah. Okay, well, I'm going to go get SQL Server, and then I'm going to go use ASP.NBC or whatever. I would offer to you that you need to stop and think about that decision, not just because I don't like SQL Server, but because SQL Server costs money. You are spending someone else's money on a license. I have gotten almost fired for doing exactly that. Eight years ago, I recommended to a client that we use, it wasn't eight years ago, it was six and a half years ago. I recommend to a client, we should use TFS. It was brand new, coming from Microsoft. It looked great, great replacement for SourceSafe, which is what they're on. They said, awesome, I got a phone call within 10 days as they were trying to install it. 
They said, we can't install it because we need to upgrade our license for SQL Server. We have an MSD on license, but we're using that license over here, and I'm not about to corrupt that database with what I'm going to put TFS on. You better be right about this. I said, I'm right about TFS. He installed it, we liked it. Thank goodness, because later on I was thinking I could have gotten fired for making that decision for that. You have to counsel your clients when you're spending their money, you really have to think about it. What do I need from this database? Now Microsoft and a lot of people would tell you, well, there's Bisparc. That's everyone's favorite thing to say. Bisparc, bisparc, woohoo. I have not much to say about Bisparc. My company was a Bisparc member for years, and it worked fine for us. The thing that always I came back to is that Microsoft is in business to make money off software licenses. That's not an evil thing to say, it's the truth. They make money off of software licensing. Therefore, the things that they do are put in place to further their ability to make money off licensing. Bisparc is in place to further their ability to make money off of licensing. That sentence is just something you need to consider. I'm not trying to say they're evil, they're doing what anybody would do. Here, come try our stuff, come try our stuff. Okay, now pay for it, which is fine. The question that you need to say, or the thing that you need to say to yourself is, well, just because Bisparc is there, doesn't make SQL Server free. That is one thing, a big argument that I have with people a lot. Now it does if all you need is one license. They will give you one license to all these things, and you can take it and run with it and have a good time. And if all you need is one license, rock on, you just scored. But in the future, if you need another license, you're going to pay for it, unless you're part of their software assurance program, which is weird. Another thing for you to consider is the decisions that you make today, you will inherit later. If you're really good at your job, which I know all of you are because you're here. You will inherit the decisions that you make for your application later on, right? If you say, well, let's use SQL Server and we're good to go. Well, five years from now, you're going to take back the project, maybe become the project lead. You're going to have to evaluate, do we need this thing? But now you're bought in. You're buying licenses, you're expected to keep going. Think about this decision now because if you're not paying a big, fat license later on, and you have all these default capabilities, and you know that Postgres is cutting edge and crazy with these, with these JSON data type, I didn't even mention that. JSON data type is coming with the next release. So what you just saw me do jamming JSON into a database, you will be able to query the JSON data type directly using PLV8, which turns it into MongoDB. Isn't that kind of crazy? You have a relational system that's also kind of no SQL. It's kind of trippy. So anyway, not that that's going to save your life, but it's good to know that that team is constantly innovating. They release about two to three times a year. SQL Server releases about four or five years or so, depending if you consider a service pack release. That's not a bad thing, but when you want improvements, it is a bad thing. So just consider that. Postgres over the last few years is coming on strong. 
This last year, it is majorly coming on strong. People are starting to see just how fast and capable this thing is, not only for little web apps that back rails or Django or Node, they're starting to see its capabilities as an enterprise system. One of the reasons I wanted to give this talk today is because so many people don't know that these capabilities exist. Now that you do, go play with it. The install is easy. If you have a Mac, I showed you Postgres.app. Go get it from Heroku, drop it down, install it. The Windows installer is also amazingly easy. Just double click it. It takes about two minutes and you're off to go. That's it for me. Thank you very much. If you have any questions, please do come in and actually shout them out. I think we have some extra time. What's our time? Two minutes. Sliding. Okay. Anybody have a question? Yes. If you made a very long call with Bing, you shared it with D, a couple of years ago. Uh-huh. And it looks to have a clear message that you should, a developer should, and think, clear what data model that you should. Sure. Now you're doing the same thing with Postgres. Does that mean that something changed in your view on whether you used the same thing? Yes and no. The message is still the same. So the question is, I did a demo a couple years ago on MongoDB. And now I'm here doing a demo on Postgres, which is sort of going in the other direction. Because the message of the first talk was, open your mind to new SQL. It's coming. It's awesome. It's cool. It's whatever. Has my view changed, which is the question? Yes and no. I still think you should always be open to what works best for you and your application. I still like new SQL. I use MongoDB. I love the flexibility of it. But for some systems, it doesn't make too much sense. For instance, you might have a commerce system where you use the same thing. You might have a commerce system where you need flexibility describing what a product is and does. You might want to have variants. You might want to have all kinds of schema changes that you can decipher at runtime. For that, MongoDB is perfect. And it's nice and fast, too. And it's going to be delivering your catalog light and quick. For the invoices that come in, you're probably going to want to have something like Postgres that can lock down the constraints. You can have a referential integrity. You can feel really good about it. Then you can do roll up queries later on. Does that answer your question? Okay. Cool. Anybody else? Yeah. That's a tough question to answer. With MySQL, you can change out the engines that they have. SQL Server only has one engine. I believe Postgres does, too, although I don't know for sure. The SQL Server engine can be optimized really well. If you get a DBA like Brent Ozar sitting on that thing, well, you can see with Stack Overflow, although a lot of that's coming out of Redis and HA proxy. The speed is great for SQL Server. But one thing you can't do, and this is something I didn't mention because it's a little bit wonky, you can optimize Postgres right down into the core, down to the kernel, the way it will partition memory for a query. You can optimize that within the query itself. So when you need it, when you need the engine to kick up and cycle this query real fast, it'll do it. Speed-wise, I watched for the demo that we did on TechPub. I watched Rob Sullivan compare the two. He got it to go faster than SQL Server over the dumps that came out. 
He got it to go faster, but SQL Server was a little bit faster with full text indexing. Postgres, by the way, does full text out of the box, too. I didn't mention that. It's really hard, man. When you get down to the microseconds, then you're talking about, well, if I use an SSD, and if I had more RAM, and if I bought the RAM from New Egg, and, oh, that RAM sucks, you can do all kinds of benchmarks. For the range that you need, Postgres stands right up to it, and it's just as fast. Now, I haven't tried it with Oracle. Yeah. I haven't done replication. I know you can do it, and I don't know how it's done. I asked Rob about that, and he was going to look into it, so I don't have many answers for that. Anybody else? Yeah, absolutely. I also didn't show their dump schemes, and I didn't show security schemes that it has in place, which are mind-bogglingly awesome. They won't let you log in. Your backup, when you run backups nightly, it won't let you do it. MySQL will allow you to execute a command using a cron or a nightly task, and you can pass along in the command line, dump my data, the username is this, the password is this, which is questionable when it comes down to security, because if someone gets your password, well, they can go and dump your data. You cannot execute that command with Postgres. It's not possible. The only thing that's possible is to read it out of your SSH store in Linux, which is your local security store. It has to read a special file that belongs only to you, with certain permissions only to you, that you store your password in. So that's another consideration that they have, but anyway, the backup from there is just a simple SQL dump. You can take it, encrypt it, which is what we do nightly at Tech Pub, and dump it up on S3. But your data is safe, very safe, with integrity and everything else. Anybody else? Okay, thanks so much. Appreciate you guys coming.
If you're a .NET developer, chances are you've worked solely with SQL Server, SQL CE, or SQLite in your day-to-day development. Some .NET developers venture over to the OSS side of things and might dabble in MySQL - but not many have embraced the amazing capabilities of PostgreSQL. In this talk Rob Conery will show you why you need to care about this database engine and how it can stand toe to toe with any version of SQL Server in terms of scaling, speed and overall power. In addition, for fun and laughs, Rob will do a PostgreSQL/MySQL dance-off - discussing some of the "interesting" aspects of MySQL and why many DBAs absolutely hate it.
10.5446/51145 (DOI)
Good morning. How you guys doing? Good. Okay, so this is HTML5 game development. What I'm going to do in this talk is I'm going to show you how to do 2D games using mostly the Canvas API. I'll also talk about the animation timing specification, which defines a method called request animation frame that you should be using to correctly do animations in the browser. We'll talk about that. We'll also look at sound. See the sound API, which is pretty bad in HTML5, but we'll see the good and the bad. We'll also talk about local storage and see how you can save user information on a local disk, such as preferences or high scores and things like that. Okay, so here's the outline for this talk. We're going to start off with a short introduction and a short demo to begin with. And then we're going to talk about animation. Again, we're going to look at the animation timing specification and a method called request animation frame. We'll see how to repair the background when you do animations. That's really the challenging part is updating the background as you damage it when you move things around. We'll see three different ways to do that, and we'll compare performance for each of those methods of erasing and maintaining the background. We'll also talk about how you can calculate frames per second, how you can implement time-based motion. We'll define that first and then show you how to implement that. We'll see how to do scrolling backgrounds. We'll see how to do something called parallax. And then we'll talk about sprites. Sprites are just small, animated objects that do things in games, so we'll see how to implement those. We'll see how to paint sprites, we'll see how to animate sprites, and we'll see how to make them interact. Then we'll talk about sound. We'll look at the sound API. Just a few slides to show you how to do multi-channel sound so you can play multiple sounds at once in a game. And then we'll wind up talking about collision detection. We're going to look at some simple collision detection with bounding boxes. We're also going to look at something called the separating axis theorem, which is kind of the gold standard for collision detection, both in 2 and 3D. So that's where we're headed in this talk. This talk is all HTML5. The slides are HTML5. All my demos, for the most part, are in the slides themselves. What you're looking at is just one big HTML5 application. And I downloaded that from HTML5rocks.com, Google put this together. You can download the slideshow. And then I have modified it quite a bit since I downloaded it probably about a year ago, I suppose, now. In fact, here is the previous slide. This is what it looks like in HTML. So I have a class for slides and thank you Google with the H1 and so forth. Whoops, that must be the wrong slide. Never mind. Okay, well, there's an example slide that you could create with HTML5rocks. Some of the game images that I'm going to use are courtesy of replica island. Anybody ever play replica island on Android? No. I should get something straight right off the bat here. How many people refuse to raise your hands at a conference? Okay. So I'm going to use some images from replica island, which is an open source, very popular game on Android. And the reason I'm up here today is because I just wrote a book. I just came out about three weeks ago on HTML5 canvas. How many people were at Rob Ashton's WebGL talk? Yeah? Okay, canvas does not suck. Okay, and I'm going to show you that today. 
So all the examples that you're going to see today are from the book. The book also has a corresponding website, corehtml5canvas.com. You can go there, you can run some featured examples, which is what I'm showing up here. You can download all the code for the book without buying the book. Of course, I don't recommend that you do that, but you can if you want. You can also download some free chapters from the book. You can download free chapters up there that you can download, hopefully to entice you to spend 50 bucks. Okay, so let's start off with a demo. So I have a pinball game. And unbeknownst to me, pinball games are very difficult to implement. There's a lot of stuff going on here, lots of collision detection. Up at the top, I have a curved surface, which is kind of tough for collision detection. In fact, we'll talk about that at the end of this talk. The ball can potentially move at pretty high speeds. The flippers move at high speeds. Their angular motion is pretty great. So collision detection is pretty tricky with this. Let's see what it looks like. By the way, in the last talk, this game just basically... Oh, maybe it doesn't play itself. Pretty much plays itself. Every once in a while, I have to activate the flippers. So I have some sound for the flippers. You didn't see that, right? Alright, one more time. It's so much fun. So basically what you're looking at is an image in the canvas background. And I just do that with CSS, so I don't have to redraw the pinball image every time. And then what I did was I just took GIMP and I cut out these pieces and made them brighter. And then when the ball collides, when I detect that collision detection, that's how I do the bumpers. I momentarily display the brighter version of the bumper. And I have bumpers all over the place. And you can see these are fairly realistic if it would go through one of those. You can see... Anyway, that's a pinball. Okay, so canvas does not suck, right? By the way, canvas did suck. I have to give Rob some credit. And canvas has kind of gotten a bad rap for being slow, which sometimes is true. If you run this pinball game on iPhone, it's not going to do very well. However, canvas recently has gotten quite a boost. Chrome 18 now has hardware-accelerated canvas. And that's what I'm running here, Chrome 18, or actually Chrome 21. And a little bit later, I'll show you the difference between hardware-accelerated canvas and non-hardware-accelerated canvas. Okay, so let's start off talking about animation. So here's some basic animation. I'm going to click this button. By the way, before we get too far here, Chrome has a bug that when you start an animation for the first half-second, it kind of jumps around a little bit. And you may notice that if I pause this... Let's just reload this. No, maybe not. So that's basic animation. All I'm doing is drawing three disks and moving them around the screen. So, pretty easy to do. So how do I do that? Well, this is how I do not do it. Okay, this function's okay up here. This is my animate function. And what I'm going to do is I'm going to clear the canvas. So this is the canvas context. I'm going to clear a rectangle. That's the whole canvas. And then I'm going to update whatever that is. That will update the positions of my three disks. And then I draw the background and then I draw those three disks. So every time through the animation loop, I erase everything, draw the background, draw the disk. Erase everything, draw the background, draw the disk over and over and over. 
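The animate function being described boils down to something like this; the function and variable names are illustrative, not the exact ones on the slide.

    // The "redraw everything" version of animate, roughly as described.
    // context is the canvas's 2D context; update, drawBackground and
    // drawDisks stand in for the functions on the slide.
    function animate() {
      context.clearRect(0, 0, canvas.width, canvas.height);  // erase everything
      update();            // move the three disks
      drawBackground();    // repaint the background
      drawDisks();         // repaint the disks on top
    }

    // The "over and over and over" part is the open question: something has
    // to keep calling animate(), but it can't be a busy loop like
    //   while (true) { animate(); }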
The over and over and over part can't be this, right? Because that's going to lock up the browser. JavaScript runs on a single thread. And if you do something like while true, that'll lock up the browser. And it'll, in fact, lock up your animation at the same time. So you can't do that. So how do you do animation? Well, here's one way to do animation. So I have the same animate function up here. And I'm going to use set interval. Anybody know how many frames per second that is? 60. Yeah. So what I'm going to do is I'm telling the browser every 60, every 1000 divided by 60 seconds. I want you to call my animate function. So the browser calls that animate function over and over again. You don't want to do animation this way. And I'll expand on that in just a minute. One problem with this approach is you are setting the frame rate. You are telling the browser how many frames per second you want. And that's not a good idea. Here's why. Set interval and set timeout are general purpose functions, JavaScript functions. They were never meant to support animation. In fact, they're not very precise. You can tell set timeout, hey, I want you to call this function in two milliseconds or five milliseconds. And the browser could just say, ah, I'm going to wait 10 or 15. In fact, Firefox will do that. Browsers have a lot of leeway with set interval and set timeout so that they can conserve resources if they need to. So what you're doing with set interval and set timeout is you're not telling the browser, hey, call this function in 15 milliseconds. What you're doing is telling the browser, hey, call this function in about 15 milliseconds. And that about is a problem in a game. You don't want to have a 15 millisecond pause because you only have, as we'll see later, 16.7 milliseconds to do your thing. Okay. The other thing is, as I said before, we're telling the browser what we want to run at, the frames per second. And we don't really know. I'm guessing 60 frames per second because somebody else told me that most monitors refresh at that rate. And I know I want to match the monitor refresh. But I'm just guessing. I have no idea. The browser, presumably, unless it's Firefox, knows better than I do. Sorry. Okay. So the browser knows the best time to draw the next frame. So why not let the browser pick that frame rate for me? And that's exactly what Request Animation Frame does. So like all things in HTML5, this started as a vendor-specific extension. Initially by Gecko or Mozilla. So Mozilla came up with this method, Mo's Request Animation Frame. And basically, what I'm doing with this function is I'm telling the window, hey, call this function when you're ready to draw the next animation frame. Okay. I'm not telling it draw 60 frames per second or 100 frames per second. I'm just saying, hey, when you're ready, let me know and call my function and then I'll draw. Very nice. So now the browser is picking the time to call my function and it's picking the frame rate for me. After Mozilla, WebKit followed suit, which with pretty much the same implementation, there are slight variations between these vendor-specific implementations, but they're all pretty much the same. And then we also have an IE version of this. Of course, you know what happens when you have three different versions of a new function, right? What happens? It gets standardized, right? And we have that too. So now we have window request animation frame. And this is standard, but not many browsers support it yet. So what do you do? Anybody know? 
What do we use when we want to use future features, but the browser doesn't support it, we use a what? Anybody know? Starts with a P? No? No? Okay. Polyfill? Anybody know that? Why didn't you say that? Okay. So we're going to write a polyfill. So a polyfill is a polymorphic backfill. Okay, I'll let you think about that on your own time. But what this is, is a function that returns a function that's either one of these functions if they're there or it's my function. Okay. So my function is a fallback and my function uses set timeout and I'm doing the best I can here to run it 50 frame for 60 frames per second. But what I really want to do is let the browser take care of that for me. Okay. Using the polyfill is pretty easy. Just like that. Now I've called the polyfill that I wrote request next animation frame to distinguish it from request animation frame. Otherwise, I clobber the original method and I'm in big trouble. So my version is called request next animation frame. Okay. That okay? All right. Let's talk about frames per second and time based motion. So if you're going to write a game, you have to monitor your frames per second. Okay. You have to know how many frames per second you're running at. In fact, it's a good idea if your frames per second drop under a certain threshold that you tell the user, hey, this game is running slowly. Things might not work exactly right. And let's see how to do that stuff. First of all, let's see how to calculate frames per second. So here I have that same animation, except now up at the top, I'm showing the frames per second. I'm not using request animation frame here. I'm using set interval with an interval of zero, which of course is impossible, right? But the browser is doing the best it can. So my frame rate is wildly jumping all over the place, which is not a good idea. So here's how you calculate frames per second. This should be easy to figure out, right? What you do is every time your animate is called, you keep track of the last time it was called, subtract off that from the current time. And now you know how much time it took for your last frame to draw. That's your frame rate. If you want, you can average it out over time. I'm just taking the last frame rate. That's my frame rate. So I have one frame for so many milliseconds. I know this is hard math, but to get to frames per second from frames per milliseconds, you multiply by this. Okay? Okay. And here's how I do that. This is the code from the demo that I just showed you. Here's my animate function. And what I'm going to do is fill some text in the canvas. And I'm going to calculate my FPS. And here is that. Here is that. Okay. Here is that function up there. What I'm doing is notice that FPS is 1000 divided by now minus last time. So that's 1000 or over the milliseconds that it took for the last frame to draw, right? Which is right here. 1000 over the number of milliseconds. Excuse me, frames per second. Okay. Here's another thing. Games have to run at a constant speed. Okay? Your game speed can't be dependent on your frame rate, especially for multiplayer games. You don't want a person with a more powerful computer to have the game run faster on his machine than somebody with a Windows box. Sorry. So what we want to do is we want to move at a constant rate. Okay? We don't want that rate to fluctuate or be influenced by frames per second. At this point, I'm going to have to go to Firefox. And I'll tell you why I'm going to Firefox here in a minute. Let's open this and we want... 
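A sketch of the requestNextAnimationFrame polyfill and the frames-per-second arithmetic just described; the book's version has more to it, this is just the core idea.

    // Prefer the standard or vendor-prefixed versions, fall back to setTimeout
    // at roughly 60 frames per second.
    window.requestNextAnimationFrame = (function () {
      return window.requestAnimationFrame       ||
             window.webkitRequestAnimationFrame ||
             window.mozRequestAnimationFrame    ||
             window.msRequestAnimationFrame     ||
             function (callback) {
               window.setTimeout(function () {
                 callback(Date.now());          // best effort at about 60 fps
               }, 1000 / 60);
             };
    })();

    var lastTime = 0;

    function calculateFps(now) {
      // One frame took (now - lastTime) milliseconds, so fps = 1000 / elapsed
      var fps = 1000 / (now - lastTime);
      lastTime = now;
      return fps;
    }

    function animate(now) {
      var fps = calculateFps(now || Date.now());
      // ...update positions and redraw, showing fps in the corner...
      window.requestNextAnimationFrame(animate);  // call via window to keep 'this' correct
    }

    window.requestNextAnimationFrame(animate);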
Okay, so here's the same presentation, but now we're running in Firefox. And let's... Okay. Now, what I have is I have two identical applications. It's pretty simple. I just have a bunch of these disks and I just let them loose and they bounce off the walls. Okay? But I have two identical applications in the same slide here. Tell me what's going to happen to the frame rate when I click that second button. It's going to go what? Up. Who set up? There's always one in the crowd. It's going to go down, right? I have two applications running at the same time. It's going to go down basically from 60 to 40. Now, what I want you to do is watch these disks. Now they're moving fast. Now they're moving slow. Fast, slow. Fast, slow. You see that? It's kind of like being at the eye doctor, right? Okay? You see that? Do you see the effect? Okay. What I'm going to do instead is I'm going to use time-based motion and I'm going to tell these disks to move at a constant rate no matter what the frames per second is. So now I start up here like before I have 60 frames per second. I click the second one like before. The rate goes down, but notice that the disks did not slow down this time. Watch. It's best just to watch the top part. Okay? So I'm going to name animation speed regardless of the frame rate. Okay? Okay. So that's Firefox. And I told you I'd tell you why I used Firefox. And here's why. Because canvas in Chrome is now hardware accelerated. So I ran this demo before I left to come over here. And this is what happened. Okay? Look at that. No drop at all. It just hums right along. So that's pretty impressive. And now, of course, nothing speeds up or slows down regardless of whether I'm time-based or not because the frame rate never changes. So pretty cool. Okay. So for time-based motion, what we want to do is specify the time in pixels per second. In other words, I want to tell those disks, I want you to move at 10 pixels per second or 50 pixels per second or whatever I decide. And then I want to calculate how many pixels to move each disk for each frame of the animation. Does that make sense? So what I want to do is calculate pixels per frame given pixels per second, which is how fast the disk moves, and the frame rate. And so here is the equation to do that. Okay? Of course, the one on the bottom is the same as the one on the top except I switched things around and inverted one of them. Right? If you're familiar with math, you know how that works. But anyway, pixels per frame is pixels per second, which is the speed of the disk divided by frames per second, which is the frame rate. So to do this, this is how I did it. I calculated how much to move in the X and Y directions by dividing pixels per second by frames per second, which is what I'm doing right here. And that gives me how many pixels to move each disk for each frame of the animation. Does that make sense? Yeah? No? Okay. All right. So let's talk about something else. Let's talk about scrolling backgrounds. So a lot of times in games, you want to do stuff like this. You want to have some background scrolling by. Maybe you're going to write a side scroller. Okay? So how do you do that in Canvas? Well, what I do in this demo is I translate the Canvas coordinate system. This is the original Canvas or the Canvas that we're looking at. And what I do is I draw this cloud in the visible Canvas. And then I also draw it outside of the Canvas. Notice where I'm drawing. X, Y. 
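Backing up for a second to the time-based motion formula before the cloud demo continues, here is a minimal sketch with an illustrative speed.

    // Time-based motion: speeds are specified in pixels per second and
    // converted to pixels per frame using the measured frame rate.
    var PIXELS_PER_SECOND = 100;   // illustrative

    function updateDisk(disk, fps) {
      var pixelsPerFrame = PIXELS_PER_SECOND / fps;   // pixels to move this frame
      disk.x += disk.directionX * pixelsPerFrame;     // directionX/Y are +1 or -1
      disk.y += disk.directionY * pixelsPerFrame;
    }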
I'm drawing this cloud at the width of the Canvas, which means the cloud is off the edge of the Canvas. You can't see it. Okay? And what I'm doing, then I start out with a visible cloud and an invisible cloud. And I translate the coordinate system so that I move this way. When I get all the way to the edge, I go back over here and start over again. So it appears that that thing is just constantly scrolling by. So like I said, I draw the cloud twice. I draw it in the visible part of the context and I draw it outside of the context, the Canvas. And here's how I scroll the background. Ultimately, I call the translate method of the Canvas context. And I tell it how much I want to translate in the X direction. So I'm just constantly, as I'm animating, translating that context and redrawing the two clouds at the same spot every time. So it looks like the clouds are moving because I'm translating the context. Now, here's one trick that you have to do. The image, the right side of the image and the left side of the image are identical. Right? These strips of pixels right along this row are identical. Otherwise, when I get to the edge as I'm translating, I move back to the beginning, I'm going to have a discontinuity. As long as the drawing has the same pixels on the two edges, then I can keep scrolling and it just looks like it's that same cloud going by and by and by. Okay, so we know how to scroll the background. Let's look at parallax. So this simulates 3D. And this is parallax. And parallax is pretty simple. Things farther away appear to move more slowly. So you can see that the clouds in the back, the trees are moving much faster than the clouds. The trees in the front, which are the big bushy trees, are moving faster than the tall skinny trees behind them. Okay? So what I have is I have four layers that look like this. And here's what I do with them. For each of these pieces, I translate the context. First, I save the context, translate it, draw the piece, and then restore the context. And I do that over and over again for each of the four pieces that I have here. These values are all different. This is small and this is big. And these values get bigger as you go down because the grass in the front moves much faster than the clouds in the back. Okay. So this is how much time you have to draw a frame if you're running at 60 frames per second. That's not much time to do your business. The hard thing, the challenging thing about animation is taking care of the background. It's easy to animate if you don't have to worry about repairing the damaged background. You just draw like crazy and get a big mess. The challenging part is maintaining the background. We're going to look at three different ways to do that. One is with clipping. And with clipping, what I'm doing is I'm erasing and drawing the entire background for every disk. But I'm clipping it to the arc, which is the circle that describes the disk. So I draw the entire background, but the browser restricts it to where the disk is, which takes the background and puts it in wherever the disk was. Does that make sense? Another way to do it is to copy from an off-screen canvas. So you can write your background into an off-screen canvas and then copy from that off-screen canvas on-screen with the draw image method of the canvas context. A third way, which we've already seen, is to just redraw everything for every frame. So what is the best approach? Redraw everything, use clipping, or copy from an off-screen buffer. Let's try and see. 
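The scrolling background and the parallax layers, sketched together. The image variables and speeds are illustrative, and each image's left and right edges are assumed to contain identical pixels so the wrap-around is seamless.

    // Each layer scrolls by translating the context and drawing its image
    // twice: once on screen and once waiting just off the right edge.
    // Slower speeds for layers that are supposed to be farther away.
    var layers = [
      { image: cloudsImage, speed: 0.5, offset: 0 },   // far away: slow
      { image: treesImage,  speed: 2,   offset: 0 },
      { image: grassImage,  speed: 4,   offset: 0 }    // close up: fast
    ];

    function drawParallax(context) {
      layers.forEach(function (layer) {
        layer.offset = (layer.offset + layer.speed) % layer.image.width;
        context.save();
        context.translate(-layer.offset, 0);
        context.drawImage(layer.image, 0, 0);                  // visible copy
        context.drawImage(layer.image, layer.image.width, 0);  // copy off the right edge
        context.restore();
      });
    }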
So here I have three of these little rings. Let's do a hundred. And notice that my frames per second is basically 60. That's good. So let's try, go back to three. I get 60 with clipping. I get 60. Notice the clipping is different, right? What I'm doing is as I'm moving those, I'm replacing or erasing those disks by filling in the background. Normally I would draw the whole background and draw this on top of it, but I wanted you to see the effect of just erasing behind the rings as they move. So I get 60 frames per second there. If I copy from an off-screen buffer, I also get 60 frames per second. When things get interesting is when I go to 100. So for 100, if I redraw everything, oops, if I redraw everything, I get 60 frames per second. If I do clipping, I drop all the way down to 20. And if I copy from an off-screen background, I'm somewhere in between, around 40. Okay, so how do you figure all this stuff out? There's two really useful tools in WebKit. One is Profiles. You can profile your code and see how much time your program is idling at the top. You can also do timelines, which tell you all the events and how much time they took, which is very useful. For three disks, this is no clipping and this is clipping. For three disks, my idle time is almost identical. But for nine disks, notice now when there's no clipping, I'm idling almost 70% of the time, but when I'm clipping, I've lost 10%. So the more things you have, the less efficient clipping becomes. You can run the Profiler in WebKit by clicking on a button, or you can run it programmatically. Not a lot of people know this. You can call the Profile method and the Profile end method to run profiles at specific spots in your code. Here's a little animation lab that I put together. This is kind of interesting because when Canvas first came out, well, not when it first came out, but when I first started using it about two years ago, these things made a huge difference. I have a background. If I take that background image out, watch the frames per second, by the way, up here. If I take the shadow, there's a shadow behind this canvas. If I take that out, I have rounded corners if I make them square, and I'm drawing a grid. You can barely see it under there, but it's there. If I take that out, notice my frames per second never changed. There used to be huge changes in frames per second when you had shadows or rounded corners, but evidently, that's all been fixed now in Chrome. What that means is some of my recommendations here are already out of date. Here are some best practices for animations. Use profiles and timelines. Clip when you're animating a small number of objects. Don't double buffer. Canvas is double buffered in every browser on the planet. If you do it, it's just extra work, and it'll just slow you down. You should avoid CSS shadows and rounded corners and canvas shadows. As you saw in the last demo, that's probably not so important anymore, but this is. Of course, you should always do that and that. Let's talk about sprites. Here's sprites. I have a scrolling background. You know how I'm doing that now. I have a sprite, my Android guy who just sits there doing nothing, but he's a sprite. I have a toast. A toast is something you present to the user that in more mundane terms could be described as a dialog box. Here's my sprites. I create sprites. By the way, sprites are not part of the Canvas API. You can implement your own sprites, which is what I did here. 
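Of the three background strategies just compared, the off-screen canvas one looks roughly like this; a sketch assuming the same canvas and context variables as before, with drawBackground and drawDisks as illustrative names.

    // Paint the background once into an invisible canvas, then restore it
    // each frame with a single drawImage call before drawing the moving objects.
    var offscreen = document.createElement('canvas');
    offscreen.width  = canvas.width;
    offscreen.height = canvas.height;
    var offscreenContext = offscreen.getContext('2d');

    drawBackground(offscreenContext);   // paint the background exactly once

    function animate() {
      context.drawImage(offscreen, 0, 0);   // erase by restoring the whole background
      update();
      drawDisks(context);
      window.requestNextAnimationFrame(animate);
    }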
My sprite constructor takes a name, an image, and a set of behaviors. We'll talk about that in a minute. Here's how I draw my sprite. I just use the draw image method to draw the sprite's image. Now my sprite has some behavior. He bounces off the walls and the floor and the ceiling. Sprites have an update method. Sprites maintain an array of objects called behaviors. Behaviors only have one requirement, but they have a method named execute that takes some arguments. One is the sprite. What I do is the update method of the sprite iterates over those behaviors and calls each behavior's execute method, passing it the game, the sprite, and the width and the height of the Canvas. Here are a couple of sprites, an Android sprite and a bat sprite. Here they have some behaviors. Again, these are all just objects with a single execute method that somehow manipulates the sprite for each animation frame. Here's the fly behavior. Here's a bounce behavior that bounces the sprite off the walls. What you can do is create an array of these behavior objects and attach it to a sprite. Now the sprite can do all the things in the behavior. Here's a sprite animation. Notice when the Android guy gets to the bottom, he does what you call that, but he blows up down at the bottom. What I'm doing is when I detect that I get below a certain point, I run that sprite through a series of animations with a sprite animation object. Here's my sprite animation object. What it does is maintain an array of images and it simply cycles through those images. When I start an animation with the sprite animation object, as long as the animation is running, I just increment that image's array. I set the sprite's image to the image in my array that corresponds with the current index. If my animation is finished, then I quit animating. I restore the sprite's original image if that's what the sprite wanted me to do, and then I stop this interval. Here I'm using set interval instead of request animation frame because I don't really need this to be millisecond down to the millisecond. Sound in HTML5 is notoriously buggy, but it does work some of the time. I'm going to show you how to use multiple soundtracks so you can play multiple sounds at one time. What I have is an array of audio objects. I just create this many of them and push them onto this array. I have an array of audio objects, and when you ask me to get the available soundtrack, I iterate over those soundtracks, looking for the first one that's not playing and send it back to you. If they're all playing, in this case, 10 sounds at once, you're out of luck and you just have to wait. Finally, here's how I play a sound. I get the available soundtrack, I load the sound, and I play it. That's it. Let's talk about highscores local storage. Here's how you set highscores, how you store them in local storage. What I do is I create a key with the name of the game and something, I forget what it is, underscore highscores or something. Using that key, I pull the highscores out of local storage, I stick in the new highscore, and then I stick them back into local storage. Here's how I get the highscores from the local storage. I'm going to return the highscores if there are any, and if they're not, I'm just going to return an empty string. God, Canvas sucks. Excuse me, let me, you guys have never seen that, have you? Okay, let's talk about collision detection. So here's a little game that I did, and what I'm going to do is try and shoot the ball in the bucket. 
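Before moving on, a hand-rolled sprite with an array of behavior objects, as described above, could be sketched like this (sprites are not part of the Canvas API, so the names and fields here are just one plausible shape):

```js
function Sprite(name, image, behaviors) {
  this.name = name;
  this.image = image;
  this.behaviors = behaviors || [];
  this.left = 0;
  this.top = 0;
  this.velocityX = 3;
}

Sprite.prototype.draw = function (ctx) {
  ctx.drawImage(this.image, this.left, this.top);
};

Sprite.prototype.update = function (context, time) {
  // Each behavior is just an object with an execute() method.
  this.behaviors.forEach(function (behavior) {
    behavior.execute(this, context, time);
  }, this);
};

// Example behavior: bounce off the left and right walls.
var bounceBehavior = {
  execute: function (sprite, context, time) {
    sprite.left += sprite.velocityX;
    if (sprite.left < 0 || sprite.left + sprite.image.width > context.canvas.width) {
      sprite.velocityX = -sprite.velocityX;   // reverse direction at the edge
    }
  }
};

var android = new Sprite('android', new Image(), [bounceBehavior]);
```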
If you go outside the bounds of the canvas and into the bucket, then you get a three pointer. Notice I'm keeping track of the score. I don't really have much time. Hang on, hang on. I have to do this, I'm sorry. Ah, yes, there it is. Three pointer. Okay, sorry. I did the collision detection. This is really simple. In fact, this is the simplest possible collision detection anyone could ever do. Detect whether two circles are colliding. All you do is look at their radii, and if the centers are closer together than the combined radii, you have a collision. Does that make sense? That's it. Easy stuff. But that's not going to suffice for everything. This is the separating axis theorem, and this is what I'm using in the pinball game for the most part to do my collision detection. The separating axis theorem is really the gold standard for collision detection in both 2D and 3D. And here's how it works. Mathematically, what we're going to do is shine a light on two polygons. On the left, I'm shining a light from the right. On the right, I'm shining it from the bottom. And I'm going to look at the wall behind the polygons. If there is separation between the shadows, I don't have a collision. Agreed? Okay. Now, mathematically, lights and walls are projections, basically. So what I'm going to do is I'm going to project these polygons onto the X and Y axis and look to see if there's any separation between those projections. If there is a separation, then I know I don't have a collision. If there is not a separation, I know I do have a collision. Here's how you get a projection. You take a polygon face, you calculate a normal vector to that, and that is your axis. This can be anywhere in space. It doesn't matter. You could move it up here, but the projection is still going to be at the same location. Okay. What you need to do with SAT is you need to check all the axes of the two polygons that are potentially colliding. And here's how you get all of the axes, and here's how you test them. So here I have separation between these two projections, and that one too, I guess, so I know I don't have a collision here. So what you do with SAT is you get all the axes for each polygon. You iterate over those axes, project each polygon onto those axes, and see if there's any separation. Once you find separation, you're done. Okay. If you find it on the first axis, then you're done. Otherwise, you have to iterate through all the axes to make sure there's no collision. Okay. Before I let you go, let me just show you a redux of this pinball game. So let's start this again. The pinball game uses the SAT for almost all of its collision detection except for the flippers. The flippers are moving at high velocity. It may not look like it, but believe me, they are. And the ball can get moving pretty fast too. So there I augment the SAT with something known as ray casting, which I didn't cover in this talk. But you might notice I have this checkbox up here that says polygons. If I click this checkbox, now it's just going to show me the collision detection polygons, so I can see exactly how the collision detection is happening, which is really useful when you're debugging. It was really useful to me when I was debugging the flippers. So I had three days to get those flippers to work. But anyway, that's another story. So you can see up at the top, I have triangles forming that rounded dome. The dots that you see all over the place are the approximate centroid of each polygon.
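In code, the two tests described above reduce to something like this (plain {x, y} points, convex polygons assumed; all names are illustrative rather than taken from the pinball game):

```js
// Circles: a hit when the centers are closer than the sum of the radii.
function circlesCollide(c1, c2) {
  var dx = c2.x - c1.x;
  var dy = c2.y - c1.y;
  return Math.sqrt(dx * dx + dy * dy) < c1.radius + c2.radius;
}

// Separating axis theorem for convex polygons given as arrays of {x, y} points.
function projectOntoAxis(points, axis) {
  var min = Infinity, max = -Infinity;
  points.forEach(function (p) {
    var dot = p.x * axis.x + p.y * axis.y;      // scalar projection onto the axis
    min = Math.min(min, dot);
    max = Math.max(max, dot);
  });
  return { min: min, max: max };
}

function getAxes(points) {
  // One axis per edge: the edge's normal (perpendicular).
  return points.map(function (p, i) {
    var q = points[(i + 1) % points.length];
    return { x: -(q.y - p.y), y: q.x - p.x };
  });
}

function polygonsCollide(polyA, polyB) {
  var axes = getAxes(polyA).concat(getAxes(polyB));
  for (var i = 0; i < axes.length; i++) {
    var a = projectOntoAxis(polyA, axes[i]);
    var b = projectOntoAxis(polyB, axes[i]);
    if (a.max < b.min || b.max < a.min) {
      return false;                             // found a separating axis, so no collision
    }
  }
  return true;                                  // no separation on any axis
}
```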
And so what I do is I just shoot the ball, which I represent as a square polygon. And now I'm using the SAT to detect collisions between these polygons. And of course, if you click on that, then you get the graphics back, which looks a lot nicer. Okay, so that's Canvas, and that's HTML5 game programming with Canvas. Thank you guys very much for coming.
Video games are fun to play, but they are much more fun to develop. Taking an idea for a game to fruition in the browser is one of the most gratifying experiences for a software developer. And it makes testing the best part of your day.
10.5446/51147 (DOI)
Hello. Cool. Welcome. Thanks for coming. I am going to talk about WebGL a little bit. My name is Rob and I've been playing with this for a little bit. Anyone else here been doing any WebGL stuff? No one at all? You all love JavaScript, right? That's why you're here. Some smirks and the like. Don't like JavaScript. Well, tough. It's all JavaScript. Have you done any game-based stuff at all? This should be fairly familiar, but generally people coming to this talk haven't. Has anyone done any games development? 3D games development? Groovy. OK. Good. That's some experienced people in the audience. So anyway, WebGL. What on earth is it? Well, it is a standard for hardware-accelerated 3D graphics in your browser. And that's cool. It's a standard. It means browsers can implement it and they have a set of known behaviours and loader tests. And it pretty much works cross-platform, cross-browser quite nicely, which means you can write an application once and it works everywhere, except IE. But we're used to that because we have developers. So, original JavaScript. That's cool. You may not. That's OK. So, it's based very strictly around the OpenGL ES 2.0 API. So, what that means is if you've done any iOS development or Android development and you've used any of the OpenGL APIs on those, the API here is pretty much the same. It's very familiar to you. Works the same way. Difference, of course, is you're using JavaScript and you're targeting the HTML5 canvas object. Which is kind of cool. You get to make 3D games for the browser. OK, it's not just games. You make visualizations and Google Maps uses this even. But I like making games. I think games are cool. We have to do something when we're not at work. So, I'm going to take you through the process of creating my first WebGL application. This is important. It's nice to know where to begin. First thing you need to do is you need to have a canvas element on your HTML page. We've all seen those before. We get that element and we create a context for that element. If you've used any raw canvas stuff, this will look very familiar to you. Create a context. You get to do things with it. Ordinarily, if an ordinary canvas element 2D, you get a context and you can draw to it. WebGL, you get given a GL context. You get the entry point to all of your WebGL API. It's fairly simple. This is pretty important. You need to set a clear color. So, when you clear the screen, everything goes black. Quite important. And then we have a render loop. Every application needs a render loop if you're doing WebGL stuff. Now, I'm being cheeky here and I'm not using next render animation frame or whatever it's called in every single browser. It's different. Thank you very much. You can say interval. I'm cheating. But also, I'm only drawing at once every quarter of a second. So, it's okay. All I'm doing is clearing my viewport and clearing the depth buffer. Let's not talk about the depth buffer. That's complicated. Hooray! We have a black window. So, that's good. It's nice not to break out of my slides as your demos. That's very impressive. I'm sure you'll agree. We need to draw some stuff to this. Unfortunately, this is WebGL. So, we're going to have to go through some very simple steps to demonstrate how to draw stuff and maybe teach some things to. We might have to do some maths. Matrices will be involved. I'm sorry. I'd like to avoid them, but they're important. What I won't do is show you any matrices. I'll just say they exist and they do these things. Don't worry about them. 
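A minimal sketch of that setup, roughly as described above (the canvas id is illustrative, and the quarter-second setInterval mirrors the talk's deliberately lazy render loop rather than requestAnimationFrame):

```js
var canvas = document.getElementById('scene');
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

gl.clearColor(0.0, 0.0, 0.0, 1.0);     // clear to opaque black

function renderFrame() {
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // drawing goes here once we have buffers and shaders
}

setInterval(renderFrame, 250);         // four frames a second, as in the demo
```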
They're an implementation detail. It's a useful phrase so you can hide things away and pretend that stuff doesn't exist. So, let's learn some basic concepts. Now, I was hoping to draw something. Oh, well. Yeah, blank canvas. I was going to draw something for this on a piece of paper and I can't. So, I said I'm just going to wave my arms around a little bit and hope that achieves the same effect. When you're dealing with the GPU, when you're dealing with hardware-excelerated things, you've got two environments in which you're going to write code. You've got the CPU and RAM. Into RAM, you place data and you operate on that data from the CPU, from executing code on it. The GPU is just another processor or a set of processors. And it's got its own RAM and its own state. And you write programs for the GPU to operate over that state. There's no real way of enacting change on memory on the GPU directly. You don't kind of work like that. So, for this, you need some kind of communication mechanism and that's what buffers are. So, if I create a buffer from WebGL or OpenGL or in DirectX or anything like that, what I'm actually doing is allocating memory that can be uploaded to the GPU so I can execute instructions on it. And this is the essence of everything in WebGL, effectively. I'm going to create a buffer and do things on it. Eventually, I'm going to display something on the screen. So, an example of something I might wish to send to my GPU is an array of positions. If I've got a model, a model normally contains a list of vertices, vertices are positions within that model, they need sending to the GPU so I can do things with them. A texture would be another kind of buffer. So, here's an example of me creating a buffer. This is a triangle. It's a triangle which is from 0 to 0 to 1 to 1 back to 0 to 0 again. And to do this, I create a buffer and I say, look, this is your data. Thank you very much. Get on with it. Fairly simple so far. Okay, so this is where I thought I made to see that. I'm sorry. I want to avoid this. What we're going to do is we're going to hide all the maths using GL matrix, which is my favourite library of all time, because it means I haven't got to ever look at the maths. Sometimes I look at the source code so I can work out what functions there are, but I don't actually do the maths myself. Top notch. It's also really fast. It uses all the... What's the right word for them? It uses actual types, type arrays in WebGL. So you have an array of ints and an array of floats and things like that. You don't get that in JavaScript normally, but it uses those. And it also unrolls all the loops for doing matrix multiplication. It's actually written them all out in longhand. So it's got a lot of patience and respect to him. So you need a few things if you're going to render something on the screen. You need a world matrix. Now a world matrix operates on the vertices you have uploaded and transforms them from model space into world space. What's that mean? I don't know. If you've got a model and you've got local coordinates and you've got five instances of that model, you need to transform it five times and create five world coordinate versions of that around the system. So, for example, I'm going to translate my object back ten notches. I'm going to rotate it a little bit and I'm going to make it slightly bigger. That's what I'm doing over there using my lovely library. You'll need a view matrix to represent a camera within your world. I apologise. 
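Before the view matrix, here is roughly what the buffer upload and the world matrix described above might look like (the `gl` context comes from the earlier sketch; the vertex values are illustrative, and the gl-matrix calls assume the 2.x style where the output matrix is the first argument, so older versions differ slightly):

```js
var vertices = new Float32Array([
  0.0, 0.0, 0.0,
  0.0, 1.0, 0.0,
  1.0, 1.0, 0.0
]);

var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);   // hand the data to the GPU

// World matrix: push the model back ten units, rotate it a little, make it bigger.
var worldMatrix = mat4.create();
mat4.translate(worldMatrix, worldMatrix, [0, 0, -10]);
mat4.rotateY(worldMatrix, worldMatrix, 0.5);
mat4.scale(worldMatrix, worldMatrix, [1.5, 1.5, 1.5]);
```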
Basically, this means I'm going to say I'm standing back by ten paces, so make that thing even further away. That's all I'm doing. Again, there's handy methods for that kind of thing. And you need a projection matrix. What this does is takes these 3D coordinates you've got in this lovely world space and transforms them to screen space, which means you can actually flatten them and rasterise and draw triangles on your screen. Quick maths lesson. It's only a very quick one, though. Given a point p, multiplying that point p by a world matrix will give its position in the game world. Basically, you multiply vectors by matrices and you get new vectors. Great. That's wonderful. You also multiply matrices by matrices and get new matrices. That's great, too. Kind of boring. All right. This is really important. So, I've got my buffer. My buffer contains a list of positions in model space. I've got some matrices so I can transform them into various other coordinate systems like world space or projection space, screen space. But that's no good if I can't actually write code to do that. And this is what shaders are for. A shader is a program which you execute on the GPU, on the data you have uploaded. That's pretty important. It's written in a special kind of language. In WebGL, it's GLSL, in DirectX, it'd be HLSL. If you're going back to 2003 and you want it to be really hardcore, you write it in GPU assembly, which is really scary. I can't read it anymore. I never really could. But, yeah, the programs operate on that data. Now, what's kind of cool is that they operate in parallel. Up there, my first thing I pass in is something called an attribute. And this attribute is a single vector within my model. So if I've got, for example, my triangle, I've got three vertices within that. I upload that entire buffer and I can execute this program three times, one on each coordinate within that buffer. I can do that in parallel. That's what the GPU does. It executes these things in parallel. There's no mutation between these objects. Obviously, I also need the projection, view and world matrices, because you need to multiply all these things together with the position to get the actual output position in my program. You have to do this yourself. This is why it's important to know. There's no default pipeline. You can't just upload some vertices and say, go and render that stuff. Ten years ago you could do that. Not anymore; now you write the program yourself. That's okay. We like power. Once you've transformed coordinates into a new space and transformed other things into other spaces and done some really crazy maths, what you need to do is output something to the screen. So given these new coordinates in screen space, I'm now going to, for every pixel I'm going to display, pick a colour. In my case here, I'm just going to set everything to white. So if I'm drawing a pixel to the screen within this program, I'm going to put it out as white. That's kind of cool. I have to write some code to actually do things with my shaders. Here, I extract them from the element in my document, and I compile it. I've got to compile my fragment shader as well. I've got to link my fragment shader and my vertex shader together. There's some errors there. You might want to catch them. You can see what goes on. I'll show some of that later. You have to build this whole thing together. You have to then pull all the inputs out of that program so you can actually upload data to it.
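A sketch of the two shaders just described, plus the compile-and-link steps, might look like this (the attribute and uniform names are illustrative; the talk pulls the source out of script elements in the page, whereas here it is inlined as strings for brevity):

```js
var vertexSource = [
  'attribute vec3 aVertexPosition;',
  'uniform mat4 uWorld;',
  'uniform mat4 uView;',
  'uniform mat4 uProjection;',
  'void main(void) {',
  '  gl_Position = uProjection * uView * uWorld * vec4(aVertexPosition, 1.0);',
  '}'
].join('\n');

var fragmentSource = [
  'precision mediump float;',
  'void main(void) { gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); }'   // every pixel comes out white
].join('\n');

function compileShader(gl, type, source) {
  var shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));   // surface GLSL compile errors
  }
  return shader;
}

var program = gl.createProgram();
gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, vertexSource));
gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, fragmentSource));
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  throw new Error(gl.getProgramInfoLog(program));
}
```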
So if we go back to that program, because we saw our attribute, a vertex position, we have some uniform items, the projection, view and world matrix, we have to get indexes into our program in order to upload data to that program. This is an important step. Generally, you do this when you first load the program. We are nearly there, I assure you. This is all really important stuff. To render it, we use the program, we upload our matrices, we upload our buffers, and then we call draw arrays. This is the final render method. Here, I'm going to upload all this stuff to the GPU. I'm going to run this program, and when I've done all this stuff, I'm going to say, make it so. Hopefully, it makes it so. More often than not, you get a black screen, and you go, okay, why have I got a black screen? I need to go and change some numbers in one of these 15 steps to work out why I've still got a black screen. This is how I start every WebGL project: with a black screen and scratching my head for five hours while I try and get everything set up. It's a nightmare. But as you can see, we have a triangle on our screen. Wow! Okay. That's fairly funky. How do we get there? Let's ignore that for now. Let's have a look at the, how would I turn that into a square, for example? I've written all this code. It's quite a lot of code. How might I do that? Well, let's have a look. Oops. I apologise, I'm going to have to look around at the screen. In my foolishness, I installed Linux onto my MacBook Air, and nothing works properly. That will teach me for being an idiot. So, what I can do is add an extra row to my vertices over here. So here, I'm going to add another one. It's down here. And then all I have to do, down here, is say, rather than... Yeah, this is all the code I wrote to make a triangle. How cool is that? Very simple, few steps to get a triangle on your screen. I'm going to draw four. Wow! I've got a square. Awesome. So, this is quite important. You build things out of triangles. First, I had a triangle, now I've got a square. What else can I do today? I don't know. What else can I do today? How much time have I got? What about 3D? Can you do 3D things as well? OK, well, this is a cube. As you can see, it's clearly a cube. There's a lot of vertices. They mean something. My cube is 2x2 from minus 1 to 1. Awesome. This is a cube. It spins. Isn't this great? So, I've got a cube. I've done exactly the same thing to make this happen. I've done the code for that. Here are my vertices. Wow. There are also normals in there; ignore those for now. Ignore the instances as well. Buffers, buffers, buffers, program, program, programs. All I'm doing here is I'm creating a world matrix where I say, please rotate it by some value. Everything else is exactly the same. I'll draw elements, and we get a cube on the screen. However, this cube is kind of boring. It's all one flat colour, so let's add some lighting to it.
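Stepping back to the lookup-and-render steps described at the start of this section, they might look roughly like this (the view and projection matrices are assumed to have been built with gl-matrix like the world matrix earlier; names are illustrative):

```js
var aPosition   = gl.getAttribLocation(program, 'aVertexPosition');
var uWorld      = gl.getUniformLocation(program, 'uWorld');
var uView       = gl.getUniformLocation(program, 'uView');
var uProjection = gl.getUniformLocation(program, 'uProjection');

function render() {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.useProgram(program);

  // Upload the matrices built earlier with gl-matrix.
  gl.uniformMatrix4fv(uWorld, false, worldMatrix);
  gl.uniformMatrix4fv(uView, false, viewMatrix);
  gl.uniformMatrix4fv(uProjection, false, projectionMatrix);

  // Point the attribute at the vertex buffer: three floats per vertex.
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.enableVertexAttribArray(aPosition);
  gl.vertexAttribPointer(aPosition, 3, gl.FLOAT, false, 0, 0);

  // 3 vertices gives the original triangle; 4 gives the square, once the
  // extra vertex has been added to the buffer as in the demo.
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
```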
For the lighting I keep everything above zero, and now I need to be able to get that lighting component from the vertex shader into my fragment shader. That's a bit important, by the way. So I create an object here called a varying. I can pass the lighting value through that, and the fragment shader will say, you know what, I need that lighting. One thing to know is that GLSL is very strict about types. If you have a float and you want to make a vector, you cannot just mix floats and ints; it will not cast that int for you. It'll go, can't do that, no idea what you're talking about. And if you have a vec3 of anything and you want to pass that into a vec4, you have to be very specific and explicit about it.
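One common way to do that kind of per-vertex lighting, sketched here as GLSL strings (this assumes a normal attribute and a fixed light direction, ignores the normal transform for brevity, and all names are illustrative rather than taken from the talk's demo):

```js
var litVertexSource = [
  'attribute vec3 aVertexPosition;',
  'attribute vec3 aVertexNormal;',
  'uniform mat4 uWorld;',
  'uniform mat4 uView;',
  'uniform mat4 uProjection;',
  'varying float vLighting;                                  // handed on to the fragment shader',
  'void main(void) {',
  '  vec3 lightDirection = vec3(0.0, 0.0, 1.0);',
  '  // Note the strict typing: float f = 1; would not compile, it has to be 1.0.',
  '  vLighting = max(dot(aVertexNormal, lightDirection), 0.0);  // keep everything above zero',
  '  // A vec4 wants its extra component spelled out explicitly:',
  '  gl_Position = uProjection * uView * uWorld * vec4(aVertexPosition, 1.0);',
  '}'
].join('\n');

var litFragmentSource = [
  'precision mediump float;',
  'varying float vLighting;',
  'void main(void) {',
  '  gl_FragColor = vec4(vec3(vLighting), 1.0);              // scale the colour by the lighting',
  '}'
].join('\n');
```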
All right, so first off I'm going to enable this demo, kind of important. Particles naive, be that one there. No, no, no. Awesome. Okay, let's enable this demo and say please do your drawing. We'll start off with 10,000 particles. That's reasonable, right? 159 frames a second. All right. That's awesome. I'm fairly sure that's not right. When I was doing this earlier, it was 30, so that's concerning. But anyway, it looks kind of cool. I've got some particles and they float around and the frame rate's kind of cool. Let's just upload this. Let's upload it to 100,000. Now, because I've got a fast CPU, because most of us have fast CPUs and CPUs generally have no problem adding numbers together, it kind of works all right. However, let's look at something, shall we? What's the best way to look at it? I know. Let's pull a profiler out. Oops, let's get rid of GL. You will use this a lot if you're using Chrome. Let's do some profiling. See what's going on in this place. Wait for five seconds. Have you ever used the Chrome Dev Tools? They're kind of cool. When doing WebGL stuff, you find yourself in here all the time working out why everything's so slow. So I'm spending quite a lot of my time in run frame. It's not quite high enough yet. 3% that's kind of boring. Let's go to 1 million particles. All right. All right. So we're slowing down a little bit. It's 15 frames per second now. Quick profile and see what this looks like. All right. There we go. So I'm spending 20% of my CPU time in run frame. Well, that's not very nice. I don't want to spend 20% of my CPU time moving pixels around.
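The naive CPU-side version being profiled here looks roughly like this (the buffer name and layout are illustrative): every frame, walk the whole array in JavaScript and re-upload the buffer.

```js
var particleCount = 100000;
var positions = new Float32Array(particleCount * 2);    // x, y pairs
var velocities = new Float32Array(particleCount * 2);

function updateParticlesOnCpu() {
  for (var i = 0; i < particleCount * 2; i++) {
    positions[i] += velocities[i];                       // this loop is what shows up under runFrame
  }
  gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);        // particleBuffer: a buffer created earlier
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW);   // re-upload every single frame
}
```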
That's ridiculous. I want to use my CPU for game logic. I want to use my CPU for AI. I want to use my CPU for doing user interaction. I don't want to use my CPU to move things around the screen. It seems silly to me. So what do we do about this? Well, look at this and the way it works. I've got some initial positions. I've got some velocities. The thing about the position of an object over time when you've got a velocity that doesn't change and an individual position is that it's a function of those two things. So what I can actually do is rather than this loop here where I update everything on the CPU, what I can actually do is disable this program so I don't bias everything. Generally that's useful. Let's go and find that draw. Next slide. What I can do instead is move it to the GPU because the position of a particle after a amount of time is merely a function of from the start position add velocity times time. It's all I've got to do. Very simple. So what I do here is get x and add the original position based on time and the velocity. Get y the same thing and multiply these together. Great. Let's look at how that works. Let's go to particles. I may start at a million. I've got a GPU. It works all right. So what I'm doing in this program over here is I've no longer got that loop. It's gone. It doesn't exist. All I do is every single frame. Let's go find it again. Every single frame all I do is upload the buffers and render. That's all I do. Nothing else because I've got a shader. It exists over here somewhere. There we go. Where I take the positions and calculate them from where I am. Activate this thing. Cool. So I've got a million particles there. The frame might still be about the same but if we look at the profiler for this many, we'll see what it looks like. I've got a more physical representation of this too which is when I was doing the other version, my fans on the MacBook Air were going completely crazy and this one they don't. So with a million particles, my CPUs disappeared. I'm no longer doing it on the CPU. I'm no longer waiting for that to happen. It's just something that exists on the GPU. So basically, if you've got something which you can parallelise and it doesn't mean changing data all the time, moving that to the GPU makes an awful lot of sense. If you actually use this to accelerate ordinary applications as well, this is not just a toy for building games with. If you want to do anything, any client-side calculations in the browser and you've got everything other than IE, then you can do this. Obviously, there's always a fallback path. Let's use the CPU and a web worker and be very slow about it. That's basically IE's fallback. We'll talk about IE a bit after it innovates because I can spit some invective on that subject. Let's go and say what I did. Let's not kill my CPU. So great. Let's go and look at something I made earlier because all I've shown you here is triangles, yellow dots and a lot of code and maths. None of those things are exciting unless you're me, in which case they are exciting and I think about it all the time, especially at night. Ponies. Let's go and find something. Let's do the waiting for Mac dance. Do, do, do, do, do, do. Come on. Normally, I haven't got a dance for this long. Oh, there we go. Oh, no. It just thinks it's been deprecated. I should update this code. It's quite old, you know. Awesome. Now, if you're on the wireless, you can play with me. This is multiplayer. This is something I made last NDC. 
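A sketch of the GPU-side version just described (the program, uniform location and particle count are assumed to have been set up in the usual way; only the time uniform changes per frame):

```js
var particleVertexSource = [
  'attribute vec2 aStartPosition;',
  'attribute vec2 aVelocity;',
  'uniform float uTime;',
  'void main(void) {',
  '  vec2 position = aStartPosition + aVelocity * uTime;   // position is a pure function of time',
  '  gl_Position = vec4(position, 0.0, 1.0);',
  '  gl_PointSize = 2.0;',
  '}'
].join('\n');

function renderParticles(timeInSeconds) {
  gl.useProgram(particleProgram);                // program built from the source above
  gl.uniform1f(uTimeLocation, timeInSeconds);    // the only thing uploaded per frame
  gl.drawArrays(gl.POINTS, 0, particleCount);    // no CPU loop, no buffer re-upload
}
```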
Unfortunately, it doesn't work very well in Chrome on battery when I've had the battery turned on. You know what? Let's take this and take Firefox. So, what I'm demonstrating here is that it's still crap, right? Just because we're doing WebGL doesn't mean the browser is any more friendly. It still sucks. So this is on Chrome after I've been on battery for a little bit. I think it's a kind of weird. This is Firefox. It doesn't care. Let's move that is. Awesome. Hang on a second. I'm going to have to blow something up because I have to blow things up. There's bots over there. Let's kill them. Who's getting dizzy? I'm getting dizzy. My fourth estate in a second. If I do that, then dear. Come on. Oh, no. Awesome. Well, that's good. I felt the world. So, what's also being demonstrated here is I actually have different graphics in Chrome and Firefox as well. My particles barely work in Firefox. This comes just crashed, in fact. In Chrome, they look amazing, but everything's really slow. So, lessons here, wow. Lessons we've learned here basically are WebGL is difficult to use. Just because you're in WebGL doesn't mean the browser is any nicer to you. You will spend a lot of your time in the debugger going, why is everything working? It's only a year to make this. Look how simple it is. It's just a rubbish little game. It is multiplayer. It's online. hoverbattles.com. You can play it with each other and have some fun. I suppose it looks awful. Anyway, you can do that kind of thing. Now, what's kind of cool about WebGL is because it's quite fast, you can actually do 2D things in it as well. So, here's something I'm currently working on. We'll see something about it in a second. I haven't showed anyone this yet. Well, good. No, don't note that. Load something else. Yes, I know I've been disconnected. Okay, here we go. This is a 2D game using WebGL. Look at that. So, you can tell which graphics have been done by me and which ones have been done by a guy I've paid to do. The spiders can be after me. Oh, my word. Help. This is 2D. I'm just drawing pictures to the screen. Awesome UI as well, as you can see. Errol Balkyn would be proud of my UI. It's got character. So, this is 2D. I actually started off by doing this in Canvas. Here's a hint for you. Canvas sucks. It is probably the worst thing I have ever used. It's slow. It's consistently slow across all devices. Probably the first thing is actually consistent across all my devices. It's just slow. I don't know why it's slow. I only assume they're making lots of compromises because of cross-platform. I don't know, but my word is bad. WebGL is super fast. Unfortunately, it doesn't work in IE and it doesn't work in iOS. It's just sad because I want this to work on my iPad. But it's kind of weird. So, let's go into some politics around this stuff. Now, I've shown you how to debug things and how to write some code. IE will probably never get WebGL because Microsoft go, whoa, security risk. Apple will never give it on their iPad because, oh, we can't make money in the marketplace if you do that. That's really sad. It makes tears come into my eyes because it means, well, I'll never make any money with my Stickman. I want to make money with my Stickman. I'm not allowed to because Apple will never allow it on their iOS devices. It's just a shame because the consortium which makes the standards of WebGL actually consists of Apple, Cronos, and Mozilla and everyone under the sun over the Microsoft. Again, Microsoft has just been weird about it. 
It's a security risk, but we'll do it in Silverlight. Can't explain it. Okay, so I've left the world and everything's gone green. Wonderful. So, I'm going to my slides again. I'm going to do the waiting for the MacBook dance again. Yeah, that's right. My slides run on Node.js, which means that's super scalable. Apart from the fact that they don't work when I use the wrong IP. 2002, I believe it is. Anyway, I'll finish off a little bit and say basically WebGL is pretty darn cool. It's awesome. I love it. It's a pain to use. It's a pain to debug. It's a pain to develop. It doesn't work properly in all the browsers. IE will never support it, nor will the iPad. It's a toy to make shiny things with so you can get retweeted on Twitter and Hacker News. I'm okay with that. So, I've got time for questions. Cool. Anybody got any questions? I realize I've basically just thrown a load of information at you and most of it won't stick because most of it's maths. If it didn't stick the first time round at school, it's not going to stick round this time either. Any questions? Yes. Did I understand the question correctly? The question is, when you upload these buffers to the GPU, can you modify them? The answer actually is yes, but you have to create special buffers for that purpose. You can basically use render targets, which are textures, and you can render to a texture and extract information from that. In the native world, that's cool because you get a bitmap, look at the bytes and do things with it. In the JavaScript world, that goes a little bit slow. Manipulating bitmaps in memory in JavaScript ain't very pretty. So, as a rule, you don't tend to do it. There are circumstances where you might. You may have noticed in the Hoverbattles game there is a lovely blue glow around everything. 22 hours to implement that. Basically, it renders the scene four times to four different textures, and I stretch them and merge them back together again, and that gives me a blue glow. That's nifty. That's basically outputting to a buffer. That's modifying a buffer I've uploaded. You can do it. It's just most data you upload is largely immutable because, let's not explain that. Speak to me afterwards and I'll explain more and draw a diagram on pieces of paper of how the different memory interacts and how that actually works across different devices and WebGL, DirectX, et cetera. Any other questions? Groovy. So, you're going to go and make some WebGL stuff now? No? No? It's only a little bit of JavaScript. It won't bite you. I'm going to be out over there most of the day today, maybe tomorrow, and I'll probably work on WebGL stuff, so you can come and tap me on the shoulder. I can dig through some more things and show you even more crazy lines of code to put triangles on the screen. Thank you very much.
WebGL!Allocate your buffers, upload to the GPU, write a shader to execute transforms on those buffers and generate something shiny in the browser. Want to learn how to do this? Then this session is for you! We'll not only talk about what WebGL is, but go through a practical example of creating our first WebGL app, covering all the concepts from the process of allocating resources in Javascript to writing your own massively parallel code to execute on the GPU - all from the comfort of your favourite browser*. Don't worry if you've never done 3D graphics or maths before, this session will show you that you don't need to have existing knowledge in order to create wonderful things that even your non-techy friends will be able to appreciate and go "ooh" over.*Unless your favourite browser is IE, sorry!
10.5446/50820 (DOI)
Hi everybody. Hi everybody. I don't need to talk quite so loud. Thanks everybody for taking some time to come talk to me. My name is Todd Gardner. I'm from Minnesota. This is my first time in Oslo, my first time at NDC, and it's amazing. Seriously, like when I got here about a week ago now, and I didn't know anybody here, and since then I'm actually looking across and I see like a lot of familiar faces that like we become friends over the last couple of days, and that's amazing. So I'm from America, and in America, friends take selfies. So would you guys take a selfie with me? That's awesome. Thank you. So this talk is called a heaping stack of scripts, which is about how to get stack traces in a JavaScript application, and then how to get better stack traces that can actually help you do something and understand how your application is breaking. Because you're building something truly amazing. I know you, I've talked to a lot of you about the applications that you're building, either for your customers or for just your side project or for your open source thing, and I'm truly awestruck on like some of the things that are going on here. But as amazing as anything that we're building is, that turns to crap really fast when it blows up in your face. It's even worse when your customer or your user is really liking your application, and then it breaks in some really frustrating way, and quickly all of that joy that they had using your tools turns to such frustrating anger because this unicorn of a person, you know, is a product that they thought existed, now they feel as buggy and that they can't use it. And that makes the customers angry, and it makes me as the developer angry because all of the hard work that I put into my application just totally disappeared. And so we go to debug our applications to like get rid of these things, and rather than finding errors that are useful in JavaScript, typically we find errors that look like this. To tell you almost nothing about what actually went wrong, how to fix it, how the user got into this state, I have no idea where to even begin. So what I want to talk about today is I want to talk about the JavaScript error object itself and how it was implemented and how it behaves. And then I want to talk about the paths that errors take through our applications and how we collect information about them. And then I want to talk about some common misconceptions with asynchronous errors because JavaScript is an asynchronous language. We bind up events and then they may get called later. And so we don't always understand how our errors will bubble back through those asynchronous chains. And then I'm going to totally destroy some JavaScript in front of you and do things that most good programmers would tell you to never, ever do. But we're going to play with it a little bit and try and do some useful things. Sound good? All right. So we're going to start with the docs. The JavaScript docs are MDN. Don't ever use W3 schools. Use MDN. On MDN, here's the error. And it looks fairly straightforward. There's nothing particularly magical here. We can construct it with a new operator. We get all kinds of good things that we would expect from any other language. The message, file name, et cetera. And the stack trace. That's just there. We're supposed to just get it for free, right? So what's the big problem? Well, my buddy Spock does some JavaScript. And he doesn't believe any of that, because that is not true at all. 
Because the reality of an error is that none of that works everywhere. The file name, line, and column are not consistent in any browser that you'll use even today. Different versions of Internet Explorer and different document modes of Internet Explorer will behave differently when it comes to the error. Every single one of them has its own quirk. And the stack trace itself is not standard at all. There's so many more footnotes if you dig into the error object itself to understand why it's how they're different. And in the end, we're just left that the Internet is broken and nobody cares. Because how is error handling at this state of affairs in JavaScript at this point? And so we just feel like George Costanza. And it's just terrible. So let's play with it. I'm going to do a lot of demos. Can everybody read that? Can anybody not read that? Good? All right. Thank you. So I'm going to do a lot of demos today. All my demos are fairly straightforward. I just simple HTML documents with some JavaScript slammed into it. Not a whole lot going on. I write everything here in vanilla.js. I'm not using any significant web framework. The only magic that I've written is this little property called print props, which you'll see in my demos. What print props lets me do is it lets me take an object and render it inside of an element just so we can look at how different objects look in different browsers. And that's actually not the demo I want to start with. That's the demo I want to start with. So let's start with an error. So in this demo, I'm just going to do the simplest possible thing. I'm going to create an error object, and I'm going to see what it looks like. So if we run that, here in Firefox, here's the error. And it looks pretty close to what MDN told us it would be. Column number is zero. That's kind of weird. I don't think it was on the zero column. But I mean, everything else is there. And it has a stack trace. And that stack trace has some information. So that's pretty cool. But if we compare it to Chrome, which I was not on the right one, if we compare it to Chrome, already it's different. So latest Firefox, latest Chrome, arguably kind of the standards for a lot of web development, and the implementation is already different. Chrome does not have the column number, file name, or line number on the main error object. But there they are. They're embedded in the stack trace. But that stack trace is even a different format. Those are just strings with various different white space characters and decorations around it. So you couldn't just parse a stack trace and reason about it unless you also knew what browser it came from. We also look at this in Internet Explorer. Where it is different again. And this is Internet Explorer 11. All the browsers that I'm showing today are the latest and greatest because showing things in old versions of all of these browsers just makes me too depressed to get up here and talk about. But in Internet Explorer 11, the error object doesn't really have anything meaningful on it at all at this point. Well, so maybe we should try throwing an error. Throw is, you know, that's typically what we do with an error. Very rarely do we just create one for no particular reason. So let's try throwing an error. And so here comes the live coding part. So I hope the demo gods are smiling upon me. So we're going to just throw a new error and we are going to catch it. And we will print it out what we caught into a, let's put this somewhere else. Let's put this in JS Air Throne. 
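In code, the comparison being set up here is roughly the following: create one error directly, throw and catch another, then look at what each browser filled in (the property list includes Firefox-only fields on purpose, since the point is that they differ by browser).

```js
var created = new Error('created, never thrown');

var thrown;
try {
  throw new Error('thrown and caught');
} catch (e) {
  thrown = e;
}

['message', 'fileName', 'lineNumber', 'columnNumber', 'stack'].forEach(function (prop) {
  console.log(prop, '=>', created[prop], '|', thrown[prop]);   // results vary by browser
});
```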
And we will just make a little home for it. All right. So now we can take a look at this again and we can compare a thrown error and an unthrown error. Well, in Firefox, they look pretty close. But now column number actually has a value in it, which is kind of weird. But the active throwing seems to be what populates a column number in Firefox. As an observer of their API, I would have imagined that line number and column number would have a similar implementation and similar behavior because they're giving me similar kinds of data. So that's a little weird. If we take a look at Chrome, it looks entirely sane. Apparently, the active throwing in Chrome does absolutely nothing at all to the object. They look identical. If we take a look at Internet Explorer, bam, now we actually have meaningful information. So apparently in Internet Explorer 11, an error is not really anything at all until it's thrown. But at that point, we do get a stack trace, which does have some meaningful information. But if you'll notice, yet again, we have a third format of stack traces. It does not look like Firefox's. How come the column number is different? You know, I'd never noticed that before, but that's really interesting. I have no idea why it's different. That's great. We should figure that out afterwards. But the stack trace itself, you'll see that it's a different format than either Firefox or Chrome. And so we have yet a third kind of a third format of stack traces to parse. If you expand this and start looking at other browsers, start looking at other implementations of WebKit, look at different versions of Opera, old versions of these browsers, their formats will change. And so reasoning about a stack trace is very important in JavaScript because it's where almost all of our information comes from, but it's very difficult to do because they're all in a slightly different format. So I want to talk about a little library that's out there that's very cool briefly. It's called Stack Trace JS. And it's an open source project out on GitHub. And so it's trying to solve some of these problems. So if we just include that here in our demo, and we'll give this another try. So I'm going to add another attempt here at an error, a stack trace error. What stack trace JS introduces is it introduces a global function called print stack trace. And what print stack trace does is it attempts to do all kinds of really clever things to guess what a stack trace is based on the browser and the current environment. And then it normalizes that stack trace unless I screw it up. Oh, you're right. I did miss the markup. Thank you. See, you guys have my back. I love that about you. So now if we, well, I'm not even printing the right thing. So print stack trace will generate a stack, a normalized stack that you could do other things with. For what I'm going to do is I'm just going to stick it right back on the error so that it looks normal. And so now what I get with a stack trace JS version of the error is I get not just a set of strings, but I actually get an array of strings, things that I could actually program programmatically enumerate across and do things with. And if I look at it in different browsers, now they are still not 100% the same because not all the browsers have all the information, but they're closer. I could actually run these things through a similar algorithm and be able to reason about what went wrong. Stack trace JS is very cool. I encourage you to look at it. 
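A sketch of using it, assuming the older 0.x build of the library described above, which exposes a global printStackTrace function returning an array of normalized frame strings:

```js
try {
  undefinedFunction();                        // deliberately blow up
} catch (e) {
  var frames = printStackTrace({ e: e });     // normalize the stack for this browser
  e.stack = frames;                           // stick it back on the error, as in the demo
  frames.forEach(function (frame) {
    console.log(frame);                       // one entry per frame, same shape everywhere
  });
}
```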
There's a similar project called trace kit, which does some additional things with function wrapping, but they're both very, very neat projects. I want to talk about some other things that developers tend to do in JavaScript when it comes to errors. Because a lot of times you'll see an example in an old jQuery example on Stack Overflow where people throw anything they want because it's JavaScript, right? You can just do whatever you want. I could just throw something bad happened. Because I can just throw a string. I don't have to throw errors, right? But when you do that, where did I do that? Oh, I didn't save it. JS Air Throne. When I do that, it's not particularly valuable. Really, you're just exercising another path through your code through the try catch sequence that's very performance expensive to do, but then you don't even get any good information. You don't have anything at all. The browsers are remarkably consistent about not giving you anything at all, but I still don't think it's a good thing to do. The other part of the base JavaScript error object in browsers is the window on error function. So this is implemented in all browsers that you'll care to interact with, but it's a little different in all of them. So I want to play with that. So if you attach a function to window on error, any unhandled exceptions that come out of your code, the browser will kindly try and pass there to you. And you can get some information. But if we take a look at what we get in Firefox, I'm printing the arguments array, and so I'm printing out that we get three things. Well, four things if you really care about counting how many things you got. But you get three things. You get the name of the error, the file it came from, and a line number. That sounds like type error is undefined, script's JS line one. In a minified piece of code, it will not tell me very much about what things are happening. But if we go to say Internet Explorer and take a look at that, something amazing has happened here. Something truly remarkable. Internet Explorer is giving us not just a little bit more information than Firefox, but way more information than Firefox. This fourth property that we're getting passed in is the actual error object passed into your global error handler. Chrome has this as well. Chrome actually beat Internet Explorer out the gate with this, but being that this is kind of a Microsoft heavy conference, I wanted to give IE some kudos because that's awesome. That's way better than what you get with Firefox over here. But this is both amazing and terrible. It's amazing in that I now have access only in the very latest browsers. I have access to the stack trace from anywhere in my application. I just have to attach this one global error handler and I get so much information. But it's also terrible because we have been developing web browsers that run JavaScript for a really long time. And this is the first time we're starting to think about, hey, maybe we should actually pass the error to the global error handler. It's only now occurring to us to do this. Yeah. That's what I want to talk about. So that was my first demo. I like that one. It's a little demo that I like to call the game of throws. The future looks amazing. I'm so excited for a time when we can just attach to this global handler and get all of this information. But right now, chances are we have to deal with all kinds of browsers that probably don't support any of this. Probably we all still have to support IE10. It's not very old. 
Probably nine, eight, seven, six, five, five. Man, how deep into that rabbit hole do you want to go? Netscape Navigator. Yes. All right. So that's errors. So now let's talk about error paths. So this is a typical demonstration of what your application will do. So at some point, the native code, the browser decides, hey, I want to start your function. I'm going to call your main closure. And it calls spoo, which calls bar, which calls baz. And then at some point in execution, something terrible happens. And an exception is thrown. And it bounces back up that call change just like it would in any other language that I'm sure you're used to. But then it lands back on the native code. And we don't really want an unhandled exception to like crash the native code. In a server-side application, I heard a bunch of people argue that an exception should crash the main thread. Well, the main thread is Chrome or Firefox or Internet Explorer. I don't think anybody wants to allow us to build applications where we can crash the web browser. Generally, that's not a good idea. So instead, what does the native code do? It passes it back into JavaScript across this window on error function. So try and give us as the developers of this web application an opportunity to deal with it. So let's take a look at that quick. This should be a fairly trivial example. If we take a look at this example, I have a simple JavaScript function here, a couple of functions that I'm going to call through. I have a function that I've labeled the outer function. The outer function will call this thing, this iterator across each of a single element of array of fruits and blow up because apparently it does not like apples. And we'll just print it out. And on each stage of this execution, I'm capturing what does the error look like and letting it bubble back out. All the way out to window on error. Sound good? So if we take a look at what this actually runs as, we see that if we, this full stack trace at the point of the inner function, at the point the error happened, we actually know the full execution chain. So we don't need to allow the error to recurse back, which is, I'm sure, obvious, but I just want to point that out. So we see the full error here in the inner function. We see it passed out identically to the outer function. The outer function doesn't have anywhere to go, so it passes to the native code. We wind it back on window on error with nothing. And that's kind of terrible. So if we look at this stack trace, can you all see the stack trace? Let me zoom in a little bit more on that. Maybe a little less. So if we look at that stack trace, it's kind of weird. It doesn't tell us a whole lot about what's actually going on there. We see outer function and then some crazy characters that don't mean anything. And then another outer function. And then some anonymous thing and then some anonymous thing. So all this is really telling me is that somewhere in my code base there's something called outer function and that has something to do with this error. And that's really all it's told me. But it could tell us so much more. We could name some of our other functions. So for example, if I go in here and we take a look at this, I imagine that here, this line 39 is right here. It's where we're throwing it inside of this inner function. Well, why don't we just give this a name? We could call it like the fruit iterator function. And now if we run this, now we have some better information about it. 
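A small sketch of that naming trick: give the function expressions their own names so they show up in the stack trace instead of anonymous frames (the fn prefix used here is explained by the tip that follows):

```js
var fruits = ['apple'];

var outerFunction = function fnOuterFunction() {
  fruits.forEach(function fnFruitIterator(fruit) {
    throw new Error('I do not like ' + fruit);   // the trace now names fnFruitIterator
  });
};

try {
  outerFunction();
} catch (e) {
  console.log(e.stack);   // ...fnFruitIterator... fnOuterFunction... instead of anonymous entries
}
```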
I actually have a name attached to each line in the stack trace. The outer function is already named. But if I go out to line 56, 56 is where we're invoking outer function and we're invoking that from my closure here. So I could even name my closure and call this my main closure function. And now I get a lot more information. So this is really cool. You can name your functions today in JavaScript and you get a ton more contextual information. However, I do want to point out there are some not quite bugs, but implementation decisions that some old versions of Internet Explorer decided to make, where the active naming a function here exposes that as an object in its parent scope. And so for example, if I was to do something like, if I was to create an object and then I was going to put a function on that object and I named that function foo, there are two foos that actually got created. There's the one sitting on this object. There's also just a foo out here that Internet Explorer would have created because it leaked it out into its global scope. And there's a ton of articles about this that I can tweet out later if you're interested that go in very in-depth in it. A handy just rule of thumb is to just prefix the naming here, which I was doing before, just add fn. Just to prevent those global conflicts from happening. You should give something a name that's different from what you're assigning it to, but it's descriptive so you get information for your stack trace about it. So that's just a handy little tip. Name your functions but don't name them the same as the variable you're assigning them to. So I think that's all I want to do with that. But that was all pretty trivial, right? That was the same kind of errors that happened in any server-side language that you might be using. The interesting thing about JavaScript and the part that trips a lot of us up are callbacks. Because when a callback happens, it's some code that we wrote that's calling back into the native code. I'm calling add event listener because I want to respond when the user clicks on a button. Or I'm calling window.set timeout because I want to do something later. And then the native code says, great, thanks. That's awesome. I'll get back to you. When that event happens, it calls your function for you. But what happens when that function blows up? What happens when you didn't properly implement the button handler or you got an Ajax back from your server that you were not expecting to get? Well, what many of us will do is we'll write some code expecting that, hey, foo will catch that error for me, right? Because foo is the encapsulation of that. But the time of that error, foo doesn't exist anymore. It is no longer executing. And that error just goes straight back to window on error. And it's very confusing. So I want to show you that. So if we look here at, I don't want to say that. If we look here, I'm going to use this demo for a while because now we're starting to look a little bit like an application. I've created an application object. And my application object initializes. It binds up a button. And when the user clicks on a button, I call my click handler, which I've geniusly named onclick. And onclick, I improperly implemented to throw an error rather than doing something real. But I want to be a good developer. And so my application, I want to do some cool things when my application dies. 
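The shape of the application being described, and why the try/catch around initialization never sees the click handler's error, might look roughly like this (the button id and handler names are illustrative):

```js
var application = {
  initialize: function fnInitialize() {
    document.getElementById('trigger').addEventListener('click', this.onClick);
  },
  onClick: function fnOnClick() {
    throw new Error('improperly implemented click handler');
  },
  onError: function fnAppOnError(e) {
    console.log('application handler saw:', e.message);   // never reached for the click
  }
};

try {
  application.initialize();       // binding succeeds, so nothing is caught here
} catch (e) {
  application.onError(e);         // looks safe, but the click happens much later
}
// When the button is eventually clicked, the throw skips this catch entirely
// and lands on window.onerror instead.
```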
I want to, like, pop a promoter to my customer and say, hey, I'm sorry, you know, give me your email address and I'll shoot you a t-shirt and we'll fix it in a week or whatever. I want to do some interesting things. So I've implemented my own on-air handler. And because I want to catch everything, I just surround my entire application blocks. I want to try all of the things and I want to catch all of the things and pipe all of those through to my error handler. And then just in case, the purpose of this demo, we'll throw in a global handler as well. And I kick off the application and initialize it. So we go take a look at what this looks like. Here's my app. It doesn't do anything until you click a button. And I generate an error. And I totally don't do what I expected. I wanted to catch it in my application error handler. I wanted to do interesting things with it. But it didn't. Because I passed this on-click handler, I gave it to the native code. At the time when this click handler is executing, it's not in application anymore. It was a pointer. It was a reference back from the native code. And so it didn't do what I expected. It doesn't do what a lot of people expect because I've seen a lot of try-catches like this where you don't get the information that you want and you just end up on this global error handler. So now I want to talk a little bit about how do we deal with that problem. Because I feel that it is a problem in that there's a ton of context about how an event came to be bound that is important to know. So if we take a look at, let's see, here's where we're going to get into some very ugly code that I am, I don't know if embarrassed is the right word, but definitely not proud of. So let's see which one is first. So here's some code that I'm going to bring in. And what this is doing is I've heard it referred to as monkey patching, as duck punching. I don't know how well those translate into Norwegian. What we're going to do is we're going to change what the API of our native code is. And we're going to tell it to do something different. In this particular case, I'm going to rewrite what add event listener does. And I'm going to introduce another property on it. So in addition to passing the name of an event, like click, and the callback to execute, I'm also going to pass through a function that I want to be called when an error happens. So that I can choose how to do that. And then I do that with a bunch of JavaScript trickery that isn't all that important, other than I'm calling back through to the original add event listener, but I'm calling it with my function, my callback, not your callback. And my callback tries to execute your callback. And when your callback, how many times can I say callback in a single sentence? My callback calls your callback. When your callback doesn't do what your callback is supposed to do, it calls back into my callback where I pass it to the error callback that you passed through on the original callback. Was anybody counting? So that's what I'm doing here, is I'm trying to execute your function. I'm catching if it errors, and then I'm passing any errors to the global. So let's include that here. That's not what I'm doing. So that file is called listener with error. And what that allows me to do is I just extended the API of add event listener, and I pass my on error function right through with everything else. And now if we load our application, I catch the error. 
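Roughly what that duck-punched addEventListener could look like — a sketch, not the speaker's actual listener-with-error file. It assumes a modern browser where addEventListener lives on EventTarget.prototype, and it carries a trade-off of its own (my caveat, not the speaker's): because the listener actually registered is the wrapper, removeEventListener with the original callback no longer matches.

```js
(function () {
  var original = EventTarget.prototype.addEventListener;

  // New signature: (type, callback, onError, options). The wrapper runs the
  // real callback inside try/catch and routes failures to onError, falling
  // back to window.onerror.
  EventTarget.prototype.addEventListener = function (type, callback, onError, options) {
    function fnWrappedCallback(event) {
      try {
        return callback.call(this, event);
      } catch (err) {
        if (typeof onError === 'function') {
          onError(err);
        } else if (typeof window.onerror === 'function') {
          window.onerror(err.message, null, null, null, err);
        }
      }
    }
    return original.call(this, type, fnWrappedCallback, options);
  };
})();

// Usage: the third argument is now our error callback.
document.getElementById('myButton').addEventListener('click', function fnOnClick() {
  throw new Error('boom');
}, function fnOnClickError(err) {
  console.log('caught in my error callback:', err.message);
});
```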
I'm able to catch it in my error handler to not pass back out into the native code. But you'll notice that there's some cruft here, right? The stack trace still isn't everything. There's some garbage in that stack trace from how I duck punched that function. The error did come from onclick as we expected, but then there's this other crap that I had built where I had wrapped up your callback. You really don't want to see that. But then where's all the stuff before that? Where's the stuff before the event happened? I want to know how did that application come to be bound? Where's initialize in this? Where is the main closure? How do I trace back to the beginning of time for this stack trace? I think that's a very interesting problem. I think there's a lot of opportunity there. So I wrote this other thing, which kind of takes the same idea and just goes a little bit farther. That's not it. And so what this is, is it's the same kind of thing. I'm duck punching add event listener, and I'm adding a third parameter. But this time I'm doing something. The act of binding, I do a few things. I actually throw an error at the time of binding, and then I grab its stack trace, just because I want a copy of it. I'm going to save it for later. I'm also going to grab some of the time that you bound the event. And then when I go through to try and execute your function, if it breaks, now I have two stack traces. I have the stack trace after the event broke, and I can reference back into my closure and say, give me the stack trace that existed before we bound the event. And if we do that, I'm in the wrong thing. I didn't bring it through. Sorry. Man, totally ruined it. Async listener. So now when we do it, check out this stack trace. I think this is cool. I don't know about you guys, but I think this is amazing that I can tell so much more about my application and how it died from this. I can see that the application died and on click. I still have a little bit of garbage in there from how I'm doing things with this add event listener callback. But then I know that 13, 42 milliseconds passed between when the callback executed versus when it was bound. I can tell you how long did the native code sit there waiting for this event to happen. And that event came to be bound with some more of my garbage from here I did it, but initialize bound the event, which was called from main, which was executed from our main entry closure. And so now I have a full chain of events that I can look at and reason about my code base here. I can see exactly what happened the entire way. I can see that, hey, on click died, garbage, garbage, garbage, called by initialize. And I can see both sides of this. And this is kind of trivial in this little application, but it's anybody built large scale JavaScript applications where it might have hundreds or thousands of JavaScript files but over a code base. It can be very important to see some of this. But there's more callback functions than just add event listener. There's other things that we do. So maybe my application wants to do something like four seconds after load, I want to do something. I want to, I should probably implement an after load function, huh? Function after load. We're going to, oh, I don't know, throw a new error. You probably don't actually want to do that four seconds after load, but this is an error. An error talk. A timed error. So four seconds after load, I blow up for a different reason. 
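A sketch of that "async listener" idea: grab a stack trace and a timestamp at bind time, close over them, and stitch them onto the runtime error if the callback ever blows up. The names and the exact report format are mine, not the talk's.

```js
(function () {
  var original = EventTarget.prototype.addEventListener;

  EventTarget.prototype.addEventListener = function (type, callback, onError, options) {
    // Capture the world *before* the event: a stack trace and a timestamp
    // taken at binding time, closed over for later.
    var bindStack;
    try { throw new Error('capturing bind-time stack'); } catch (e) { bindStack = e.stack; }
    var boundAt = Date.now();

    function fnWrappedCallback(event) {
      try {
        return callback.call(this, event);
      } catch (err) {
        // Stitch the two halves together: the stack after the event broke,
        // plus the stack that existed when the event was bound.
        var report =
          err.stack +
          '\n--- event fired ' + (Date.now() - boundAt) + 'ms after it was bound at ---\n' +
          bindStack;
        if (typeof onError === 'function') {
          onError(err, report);
        }
      }
    }
    return original.call(this, type, fnWrappedCallback, options);
  };
})();
```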
But I want to understand not just the add event listener, I want to do all of them. So if we look at, you know, I took the same idea and I just went, you know, one step farther. And so rather than just duck tight or duck punching add event listener, kind of figured out a way to duck punch any function. And so in this case, I'm overriding set timeout as well. And event, add event listener. And how did I do that? Callback index. Oh, yep, sorry. Sorry, my bad. What I did here is because I wanted to wrap this at such a low level, I just wanted to insert this into any project I wanted to work on. And I didn't want to have to change the API of like add event listener. I didn't want to, because oftentimes we'll use jQuery, or we'll use knockout, or we'll use backbone or Angular or something. And you're operating at a much higher level of abstraction than actually manipulating base host functions. And so you can't change that API without like throwing out a ton of work. But wouldn't it be great if like we could leverage some of the infrastructure that's already there? What if we could just overuse window.onair from any function I wanted? So in this particular version, this also duck types or duck punches the base level host functions. But rather than taking an onair callback, we just call directly into a function on window on air. But I don't let the native code do it. I do it. And I have full context. So I'm going to pass the name. I'm going to pass the line number. I'm going to pass the column number. I'm going to file name. And I'm going to pass the error object itself. And in this way, I can totally beat out, oh, man, I have to implement it again. That was async error. And now I don't have to even pass that in. And now if I run this guy, I've implemented proper window on air for Firefox. Because I've not hit the native code at all. I've built out my own asynchronous stack trace the way I want to build it. And I've passed it to the global air handler, which I'm already expecting to exist and is already guaranteed to be there in all browsers. But I'm not letting the native code with its limited implementation worry about it. Also because I've called it, and I haven't relied on the browser to call it. I have to actually go to it. You'll see that it is, it has the same error object in all browsers. I don't know why it doesn't have those other things. Now I actually have the same implementation, the same details in all of the browsers because I've taken control of it. Now I know that that is some terribly ugly code. And I fully recognize that. But I think this is interesting. And I think this is valuable. And if we can find a way to get here that doesn't have some of the drawbacks, I think this could totally change how we go about debugging and understanding our web applications. And when we can debug and understand our web applications, that makes me as the developer and the support person really happy because it makes my customer happy. Because that awesome thing that we built together, like it didn't get ruined by something terrible happening. And so that's what I had to show about async paths. But I want to leave you guys with a counterpoint. Is that this talk was all about how to get stack traces and how to get better stack traces. But I've developed a lot of JavaScript. And stack traces aren't everything. A lot of times the stack trace doesn't give you anything of value at all. Here's a real one I scraped this out of a project I was working on. I have no idea what's happening here. 
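And a hedged sketch of the more general version: one helper that can duck-punch any host function taking a callback at a known position, and that reports failures by calling window.onerror directly so every browser ends up with the same arguments. wrapCallbackFunction and its argument order are invented for this sketch; err.fileName and err.lineNumber are non-standard Firefox extras, hence the fallbacks.

```js
// One generic wrapper: given any host object, a function name, and the
// position of its callback argument, replace the function with a guarded
// version that reports failures straight to window.onerror ourselves.
function wrapCallbackFunction(host, name, callbackIndex) {
  var original = host[name];
  host[name] = function () {
    var args = Array.prototype.slice.call(arguments);
    var callback = args[callbackIndex];
    if (typeof callback === 'function') {
      args[callbackIndex] = function fnGuardedCallback() {
        try {
          return callback.apply(this, arguments);
        } catch (err) {
          // err.fileName / err.lineNumber are non-standard (Firefox only),
          // hence the fallbacks; the error object itself always goes along.
          if (typeof window.onerror === 'function') {
            window.onerror(err.message, err.fileName || '', err.lineNumber || 0, err.columnNumber || 0, err);
          }
        }
      };
    }
    return original.apply(this, args);
  };
}

wrapCallbackFunction(window, 'setTimeout', 0);
wrapCallbackFunction(EventTarget.prototype, 'addEventListener', 1);
```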
Somewhere in here I screwed something up. I know it's me screwing up. I'm sure jQuery is fine. Somewhere I did something terrible. But I have no idea what it was. Here's another example from Angular. And I'm sure it's my fault. But I have no idea what I did wrong. And so the stack trace by itself isn't really enough for us to reason about our applications. So I've been thinking a lot about how can you capture more context. And this is specific to your application. So I'm kind of asking you to think about your applications. How do you know what your user was doing? What information can you record about them to know, like, what did they click on? What did they go? What page did they go to? What state was my application in? And how did it get there? And what information would it make sense for me to record about the state of my application so that when it blows up, you know something about it? And then what was happening in the overall environment? Maybe the user has, like, a really crappy Chrome extension installed on their browser and it's, like, interfering with your application. Or maybe it's just you're getting weird Ajax timing and your calls aren't coming back in the order that you would have expected them to. And so I'm working on a little project called TrackJS where we're trying to solve some of this. And we're trying to build a way to track what the user network and consoles are doing as part of the application so that when an error happens, we can try and give you some more context. This is me. Here's my contact information. I'm going to hang out and talk about JavaScript and hacking and some very, very ugly code. If you'd like to chat with me now or later, that's cool. Here's how to get a hold of me. Thank you very much for your time. Thank you.
JavaScript applications keep getting bigger, more complex, and harder to debug. Without stacktraces, how are we expected to find, decipher, and fix our bugs? "Stacktrace or GTFO!" Getting the Stacktrace is on us. Only we can design the code to capture stacktraces effectively. Let's explore some popular libraries like stacktrace.js and tracekit, some techniques for catching exceptions without messy try/catches everywhere, and what's coming next with expanded error objects. Let's talk about finding and fixing our errors and stop this proliferation of a broken JavaScript web.
10.5446/51004 (DOI)
Okay, people, welcome. This is a talk tested on real hipsters. We're going to do a bit of hipster-gramming today. We're going to be using sort of progressive forward-leaning frameworks, a couple of them. I'm Anders Noros. I work as a chief technologist for Tereq here in Oslo. And this talk is about really two things. It's about service stack.net, which we're going to be using for building the backend for our application, and backbone.js, which we're going to be using for building the front-end. Now, what we're going to be building is a single-page web app. And you probably have used those kinds of apps around the web. They're becoming hugely popular. Like, whenever you go on Twitter, you're using one of those. And what we're seeing is that for many of the clients I'm working with is that they want better user experiences, more snappy user experiences. And the way to get that and to get that really good feel with it is to use lots of JavaScript and build something that runs mostly in the browser. So today I'm going to show you how to do that and how to properly architect such an application. So there's going to be quite a lot of code. I was thinking about doing this as an entire live coding session. But there's a fair amount of code here. So if I did that, I probably wouldn't have time to talk. And that's already been done. So I figured instead of doing lots of live programming, we're going to look at lots of code. We're going to fill in a little bits and pieces around. And we're going to build a quite nifty app. That's reminiscent of something that you actually would build. So we're going to move a bit past Hello World. So let's just get cracking. So the first thing we're going to be using is service stack.net. So just to be sure, has anyone used service stack before? Yeah, a couple of guys. For you others, service stack is basically WCF done right. So it's a framework for building services on the.NET platform. And that's it. It also has a couple of other components. But in essence, that's what it's all about. Now, for our application today, we are going to be using that to create the back end. And I figured we'll just start... Come on. We're going to start by building a service to handle contacts. So the application we're building is a contact manager. And using service stack for something like that is really, really simple. What you're looking at here is the actual service. So we have a few using statements up here. What we're using here is service interface and the web host dot endpoints. And we're building a rest service. So service stack has like tons of different interfaces and abstract classes you can implement that will help you along the way, depending on how much boilerplate you want in place for you. Now, since we're building a plain old rest service here, we're using... I'm using the rest service base abstract class. And I'm telling that that I'm using the contact DTO. Now, the contact DTO looks like this. It's about as plain as C sharp can get. And the usage of DTO is a very central concept in service stacks design. So it's built around DTOs going in and out of your services, which is a very good design pattern to follow. If you doubt that, you should read Martin Fowler's book on the topic. So service stack uses those. And since this is a rest service, I have like all the HTTP verbs as events coming down here. So whenever I receive a get request, this method is going to be invoked. And something is going on inside here. We'll get back to that later. 
Whenever I receive a post request to the endpoint, this method is going to be called. Whenever I receive a put request, this is called. And on delete, this is called. So there are like four methods. Really, really simple. So this is the service. In addition to that, we need to host the service within our application. And for that, we use a application host. Again, we just inherit from the app host based abstract class. We override the constructor to give our service a name and just tell the service which assembly it will find its services in, the app host, where the services are located. Then we, I also do this override the configuration thing, which is something that you don't have to do. But here, I need to do a couple of things with the configuration of service stack. So I'm just overriding that method. Just changing the default to a metacamal case names for the JSON code to get this to work with backbone, which we'll be looking at in a minute. And just setting up some routes for my app. That's it. That's everything that I need to have in place to run the service. And if I start this now, this is where all the magic happens, I get this, which looks pretty much what whatever you get when you build a WCF service. But here I have like tons of different endpoints. I have an XML endpoint. I have a JSON endpoint. I have a JSV endpoint. I have a CSV endpoint. I get so 1.1 and so 1.2 endpoints for the service. This plain little thing built with just three classes. And it's a fully functional service here. So this is really the beauty of service stack. Service stack makes it extremely simple to build these kinds of services and get them out there real quickly. And now I'm running this within a ASP.net environment. I didn't even have to do that. Service stack can be self-hosting. So I could have this running as a single XE file that I would just start up where it would run its own HTTP server inside. But I could just as easily run this as a Windows service if I was running this in Windows or as a Unix daemon as I would on the Mac. So this works on any platform. And remember, this is built to work with Mono. So you can actually run this on Linux boxes as well, which can cut your costs in your production environment quite drastically. And this is a very important point. Because when we're moving into building these kinds of rich JavaScript clients, we're doing one thing and that is pushing more functionality from server out to the client. And that's where backbone comes into the picture. Now is anyone familiar with backbone.js? Quite a few people. So other JavaScript embassy frameworks, just to check. So you've been using something. So surprising. Backbone is the most popular framework in the room. Good. So we've seen a transition towards more rich JavaScript clients, meaning that we push more of the work out into the browser. And that's a good thing because most of you guys' laptops is probably more powerful than the servers that I run. I run small little Linux boxes, virtual ones that don't have too much memory. They're not fast. So they really don't have anything. Your laptops are more powerful than that. And what's the sense in doing everything at the back end as we've been doing for quite long when we can do more and use your power and your computing power to get things done? The trade-off and the benefit for you is that you get a better use experience as well. And backbone is a framework for building those kinds of apps. So what about all of these other things? 
So I was thinking that lots of you guys would have used knockout, which seems to be very popular within the.NET community. No one used knockout here? Now you remember. So why not use knockout? Well, knockout is a great framework. It's a bit different from what backbone is. You have these other things here. You have like Ember, which is another of my favorite frameworks, which is highly opinionated. So it really leads you down this golden path. You should build your app this way. And knockout does much of the same. Knockout subscribes to the MVVVM pattern as the other ones are more pure MVC like Ember. And you have things like cappuccino. You also have spine, which is probably the most similar to backbone. Doeit has taken a much more opinionated stance on how you should design your application. So what is backbone then? Well, for the last like, how long has the web been around now? It's nearly 20 years. During that period, we have gotten lots of great frameworks for building server side apps. And that market is very mature now. Finally, we've gotten a proper MVC implementation on the.NET platform as well. And as that has happened, the shift has been towards doing more on the client. Now in client side JavaScript programming, there have been emergence of very good frameworks as well for doing lots of manipulation of the DOM, like jQuery is brilliant for that. But there hasn't really been much until lately to structure your applications and get a good function in architecture in place. And as our client side apps grow larger, that becomes increasingly important. Because what we've been doing when we've been creating these sorts of rich apps lately is that we've been using all sorts of tricks to handle application states in the client. We've been doing things like having hidden divs in the DOM, just placing our data in there and copying things in and out. We've been having tons of Ajax callbacks to the server, usually nested within each other and having success or handling in there, many levels deep. And once your application grows and your codebase grows, that turns into quite heavy technical depth. And that's something that you don't want to live with because it slows you down. It makes your application error prone. And in the end, you don't really know what your application is doing at the different points. It's really hard to introduce new developers into codebase that's designed that way. So these frameworks and Backbone as well really help you with getting that architecture in place. Now, what makes Backbone different from any of these other more opinionated frameworks is that Backbone is more like framework for building your own application framework. So it doesn't really mandate how you should do things like binding, model binding to forms and things like that. In fact, Backbone doesn't do that at all. It leaves it up to you to do that. So Backbone is like a real simple but efficient framework that addresses the core issues here and leaves it up to you to do the rest of the work. Now, there are quite a few plugins for Backbone to achieve things like model binding, which isn't there from the get go, but you have the choice. And Backbone doesn't even force you to do HTML stuff. I've seen this used with Canvas, SVG, because you define your views and what your views do and what they represent is really up to you in your application. So I've learned that it's popular to be singing in your talks. I did that yesterday. How many of you guys were at NDC 2009? Wow. Quite a few people. 
How many of you saw my opening show that year? And how many of you remember my own take on Nordic Monarchers OPP? So you all know that song, OPP, Nordic Monarcher, and the old hip-hop heads in the room. So it basically goes, you're down with OPP and then the crowd goes, yeah, you know me. So I was thinking I'm going to be as cool as I'm all. So you're down with MVC? You're down with MVC? Yeah, there are a couple of guys who know MVC in the room, but do you really know MVC? This is the pattern, what it looks like. And most of the people doing MVC these days know MVC from working with a web framework. So be it ASP.NET MVC, be it Rails, be it Play on the Java platform, anything like that. That's the MVC frameworks we work with today. You know, guys, that's not MVC. That's a pattern called Model 2 that was described by Joshua Block sometime around 1999 in the Enterprise Java heyday. And it's a pattern that was devised for servlets and JSP programming. MVC, yes, it was described by Trigve, who gave a talk yesterday. I heard that was a real good talk. Didn't want to attend. Was it as good as I've heard? Yeah, great. I'm going to catch that in the video later. It's this pattern. And MVC is heavily event based. So you, whenever something happens, whether it's to your model or in your view, there are events going off. And that's represented by the dashed lines here, whereas the other lines are concrete references. And events is something that JavaScript does incredibly well. So what we're seeing with these new frameworks that spring up is that they're shifting back towards more traditional Trigve Ranskav style MVC. Except for backbone, though, because backbone isn't an MVC framework in that sense. It uses entirely different names on things. In backbone, you don't have models, views and controllers. You have models, collections, routers and views. And views are not views as you would expect them to. So views are sort of more like controllers. But then again, they're also renderers. So they can be looked upon as presenters, which is taken from the MVP pattern, the model view presenter. And then you also have the routers, which also share some of the controller responsibility. And your models are models. But your collections are quite interesting, because that's one of the powerful features of backbone when it comes to creating apps. Because when you're doing MVC style apps on the server, you're usually persisting your models into a database, right? In the web browser, you wouldn't be doing that. You could be persisting it to local storage in the browser. But at some point, you probably want to store this on the server as well. And then you're persisting your objects into a REST API, usually, which is similar, but then again different. So the collections in backbone really represent some sort of local collection of objects. So it's a store for your models. And it's a store that also has events. So you can get events raised whenever something happens to your model. And this is powerful for tying your application together. So building an application like this is really an exercise in growing your application architecture by like four layers. And that might seem really daunting at first. And you're going to spend quite a lot of time getting all this stuff in place. So when you're building these kinds of apps, it's not really like you're bootstrapping anything. If you want to do that, you're better off with knockout. 
But once you have this in place, it's going to be real helpful for you to keep a good structure on your code and let it live over time and grow organically. So just to explain what goes on in the life cycle of this stuff, I actually did a build point slide. And this is the first time in ten years I've done that. So like every single web app, it all starts with a request. So the request is handled by the MVC router on the server side. And based on what type of request that is, it's usually a get for the index page. The router calls the appropriate control reaction. Then a view template is rendered and it is all passed back to the client. Then backbone, the backbone initialization code needs to be run to bootstrap your entire application. What happens then is that you probably bring some data back. We're going to look at that later. To populate your application with data from the get go. And then you start the client side router. So there is a router running in the browser now. And whenever the hash part of the URL changes, that router is going to invoke a route method based on the parameters in the URL. And then one or more views is going to be rendered and the user sees your application and is really, really happy. So there's quite a lot going on, but it's not all that difficult. So this brings us to where we start looking at the real application here. So we've seen the back end service, right, built with service stack using minimal mechanics. Now we're going to look at the ASP.net page. So this is my default page. This is plain old ASP.net. And there's nothing in there. There is really no HTML markup except from getting a layout framework in place from application. You guys in the back can read this, okay? Yeah? A little larger. We can always do that. So like 16 points, would that be good? Like that? Yeah. No problem. You're welcome. Apart from that, it's really just including a whole bunch of JavaScript files. But what we'll be using here is that we're using jQuery, obviously. We're using underscore, which is something that Backbone depends on. Then we're using Backbone. And I'm also going to be using an add-on for Backbone called Backbone support, which is developer for bot in Boston. And I'm going to be using handlebars for templating. And a little preloader I wrote myself to load and compile those templates. Normally, you would probably do that compilation server side. But I wanted to keep this as minimal as possible. And so these are all the dependencies. And then our app comes down here. So we have an app. We have those models and stuff I just talked about. It looks like this. So we have our application models. We have our collections. We have our routers. We have our templates. And we have our views. And then we have our app. So I want to start by looking at the model. So this is the model code. It's basically three lines of JavaScript code. And what this does is that it defines a contact class. And Backbone is using class-based JavaScript, which might be a bit unfamiliar at first. But luckily, it's using underscore mechanisms for that. And it's really, really easy to write those classes. And the way you do that is that Backbone has model class, which you extend using the extend function. And then you just pass in a hash of your implementation. Now, there's really not much going on in there. 
The only thing I need to do is that I need to provide a URL route to tell Backbone where my service for this model is, where is the service endpoint at the back, which was the service endpoint that we saw here. The other thing we have is collections. And collections are closely tied to models. You usually have a collection per model. And just like the model, we have, we provide a reference to which model object this collection is for, and then the URL to the service. And finally, there's the router, which is a bit larger. What I do here is that I define a hash of routes for my application. So if there are no parameters, nothing there. I'm assuming that this is the index. And if we go to context slash new, we create a new one. This is very familiar for anyone who's worked with an MVC framework before, right? It's just the same thing. So whenever the router isn't worked on any of these routes, it's going to call one of these functions representing that. And as said earlier, the router has some of the responsibilities that you would expect a controller to have. And this is where that comes into play. These are really controller actions, if you like. And usually, a controller action does a couple of things. The first thing it does is to handle the data. And that is already available to me here. I'm going to show where it comes from in a minute. And then it creates a new view. And then it just asks Backbone to swap in this view. Show me that view. And this swap function I'm calling here comes from Backbone support. So I'm just going to recommend that if you're going to be using Backbone, you want to use Backbone support as well. Backbone support is this little extension library for Backbone that fixes a bit of the stuff that's painful with Backbone. Because of the way this works behind scenes, usually you would use a view show, view method in Backbone. But that method doesn't do its own garbage collection. So you would probably have some memory leaks going on, and especially large applications. And that is fixed in Backbone support. So I recommend using that. So I mentioned this, that we have this collection coming in. And that comes from a bootstrapping code, which is here, which is the code that initializes the entire client-side application. It looks like this. What I'm doing here is that I'm defining a namespace for my application to have places to put my different JavaScript classes in. So I have a context app namespace. Within there, I have models, collections, views, and routers. And then I have an initialized function. Here, I'm just preloading and compiling all the templates. So we don't really need to go into the detail of that. But then I have a create my contacts instance using some data that is passed on from the server. And that's to have data already available in the application. So that when the user first loads your application, you have it pre-populated with data there. So you don't have to load the application, then go back to the server, grab some data, and then it all starts. So you really skip a request back to the server. And doing something like that is very, very simple. If you look at my ASP.net code here, you can see that I have a little script tag towards the end of my page, which is of type application JSON, and I have an ID on there, bootstrap data. And then I just go and grab the bootstrap data property from the code behind file, which is this. To keep this extremely hipster compliant, I'm using Redis as the back end store. Anyone familiar with Redis? 
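For reference, roughly what those three pieces might look like — a simplified sketch, not the speaker's actual files. The class names, routes, and view classes are illustrative, and swap() comes from the backbone-support extension (its SwappingRouter), not from core Backbone; adjust the namespace to your version of the library.

```js
// Model: all Backbone really needs is where the REST endpoint lives.
var Contact = Backbone.Model.extend({
  urlRoot: '/contacts'
});

// Collection: which model it holds and the endpoint for the whole set.
var Contacts = Backbone.Collection.extend({
  model: Contact,
  url: '/contacts'
});

// Router: declarative routes mapped to functions -- the "controller actions"
// of the client side. Support.SwappingRouter and swap() come from
// backbone-support.
var AppRouter = Support.SwappingRouter.extend({
  routes: {
    '': 'index',
    'contacts/new': 'newContact',
    'contacts/:id/edit': 'edit'
  },

  initialize: function (options) {
    this.el = options.el;                 // where swapped views get rendered
    this.collection = options.collection;
  },

  index: function () {
    this.swap(new ContactListView({ collection: this.collection }));
  },

  newContact: function () {
    this.swap(new ContactFormView({ model: new Contact(), collection: this.collection }));
  },

  edit: function (id) {
    this.swap(new ContactFormView({ model: this.collection.get(id) }));
  }
});
```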
A couple of people. Redis is basically a distributed key value database. It's one of those new fancy no-sequel databases that are around, and it's extremely good. And it's very, very nice to work with. You can look here that I'm just creating a new client manager. Then I'm executing it within the context of an entity. And then I just have this little link expression in here that calls get all, and then I call to JSON, which is an extension method from service stack.txt. So you can use that within any project to get JSON data into the bootstrap data property. So I just print that out. Let's just go over here, and I'm going to go back to root URL. So this is what the application looks like. It's a contact manager. If I go into the page source here and look at this, you can see that down here, I just got a bunch of JSON data, which is the stuff being passed down from the server. And if we look at our application.js here, you can see that I'm using on the client, I'm just using jQuery here to grab the content of that script tag, as you would to grab the content of just about any tag in your HTML DOM. And then I'm using JSON Pulse to get the data structure out there. I just passed that into the collection constructor, and backbone is smart enough to know how to handle that stuff. So if you look at this, I just sell backbone that I want to collection property to have this assigned. And it does that behind the scenes for me. I could go in here and override the initialized function and do all that myself, but I don't really feel the need to because it's already been done. And then we go on. We just check if the router is started. That's done by checking the backbone.history started property, if that is true. The router all the render runs and there's no need to start it. Otherwise, fire it up. And then our application is running. And the next thing that would happen then is that the router would be invoked. So for this page, as you can see up here in the URL, we're hitting the blank URL, connecting this to the index page. So that would cause this function, the index function to be invoked. What this does is that it creates a new context index instance and it pauses on the collection down there. Swaps the view in. Now, let's head on over and look at the context index view. This is a composite view. Are any of you guys familiar with the composite view pattern? It's not a design pattern that stems from the enterprise Java heyday back in the late 90s. But it's basically a good pattern to build your view from many subviews. So you have a hierarchy of small little views doing one thing and you compose them together to create your entire application view. It's basically what a portal does. And backbone support has a composite view class, which is a... This extension of the backbone view class, which gives you lots of help by managing that structure and again, doing that sort of garbage collection that you otherwise would need to do yourself. To see a lot of you guys in the back are reading code now and that's good. That's good. We're going to be reading code. So views, as I said initially, are basically renderers. They're presenters. So the view's responsibility is to render something. And that happens within the render function. That is the only function that backbone expects to be on a view. And since this is like the entire framework for my application, what puts pieces, all of this together, there are some subviews in here. 
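A sketch of that bootstrap sequence on the client: read the JSON the server printed into a non-executing script tag, seed the collection with it, and start routing exactly once. The ContactsApp namespace and the element ids are assumptions for illustration.

```js
window.ContactsApp = {
  initialize: function () {
    // The server emits its contacts as JSON inside
    // <script type="application/json" id="bootstrap-data">...</script>,
    // so the first render needs no extra round trip.
    var seed = JSON.parse($('#bootstrap-data').html());
    this.contacts = new Contacts(seed);

    this.router = new AppRouter({ el: $('#detail'), collection: this.contacts });

    // Backbone.History.started is set once routing is running; calling
    // start() a second time throws, so guard it.
    if (!Backbone.History.started) {
      Backbone.history.start();
    }
  }
};

$(function () { ContactsApp.initialize(); });
```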
So the first thing I do is that I render the template for my master page, so to speak, the entire layout of this thing, which happens in this function down here. What I'm doing there is that I'm just setting this view's element. So the dollar element thing is the DOM element that represents this view. At this point, that is a element disconnected from the browser DOM entirely. So it only exists locally here. And it's going to be handled by the backbone infrastructure later. And I just set the HTML content of that to whatever is in my preloaded context index template. And that is just plain old HTML code that looks like this. So this is the layout for my app. Then I move on to render the contact list. And here I'm going to be using a subview to compose this together. So I have a contact list view. That view works on the contact collection. So I just pass my contact collection in there. And I render that as a child to myself. And if we go into the context list view, it's going to look like this. So again, the render method renders a template which looks like this. Again, it's just plain old HTML. And then it iterates through the collection of contact models. Now for each contact that is in that model, I'm going to create a contact list item view. So again, a little view going in there. But this time around, I'm going to pass the model in instead of the collection. Because that item view works on single items instead of entire collections of stuff. So again, the same pattern. I render a child. And the contact list item looks like this. So this is a handlebars template. Now, are anyone familiar with handlebars template language? Yeah, up there. Anyone familiar with moustache? Which is the one handlebars is built on? Yeah. Good. It's a templating framework that doesn't let you put any logic at all virtually into your templates. You can do some conditionals. You can do some iteration but you can't really put logic in there. Which is a good thing. Because if you look at the old classic old school templating languages like ASP or JSP or PHP or any other three letter acronym you can think of, that has led to many of those applications where you actually write an app in between the markup. How many of you guys have ever written an app like that? I for sure have. It's only a couple of weeks since I lost it. And I did it on purpose though. And it was a Java app. Yeah. So handlebars really just takes a hash of different properties of different values in and then you use these double moustaches to put those into your HTML code. So it prints this in place here. So these are just placeholders. And that's just about what happens in these context, in the use rendering like this stuff. And of course the entire framework around this. And if I go in and I want to create a new contact, I can press this button and I get up a little form or a side here. And I see Bouddhila in the front row here. So I'll add her. And that's just about as much as I know about you. So I'm adding you like that. Now we were actually on a couple of routes here. The first one when I press this is that we came into the contacts slash new route. So let's look at what happened. Again, everything starts in the router. So the contacts new route was invoked. It has a new function. It renders yet another new contact view, which is this little form here. Again, that is a little JavaScript class. It actually extends the index view to get the same layout. And everything I have done here is override one of my own methods, which is the detail pane. 
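A sketch of that parent/child arrangement with backbone-support's CompositeView: the list view renders its own markup, then one small item view per model, and renderChild keeps track of the children for clean-up. Class names, the markup, and ContactsApp.templates (standing in for the talk's preloaded, precompiled templates) are assumptions.

```js
// One small view per contact; it only knows how to render itself.
var ContactListItemView = Backbone.View.extend({
  tagName: 'li',
  render: function () {
    this.$el.html(ContactsApp.templates.contactListItem(this.model.toJSON()));
    return this;
  }
});

// The list view composes the items. CompositeView (backbone-support)
// remembers every child rendered through renderChild, so swapping the page
// away tears the children down and avoids leaked event bindings.
var ContactListView = Support.CompositeView.extend({
  initialize: function () {
    // Keep the list in sync: any change to the collection re-renders it.
    this.collection.on('add', this.render, this);
    this.collection.on('remove', this.render, this);
  },

  render: function () {
    this.$el.html('<ul class="contacts"></ul>');
    this.collection.each(function (contact) {
      var item = new ContactListItemView({ model: contact });
      this.renderChild(item);
      this.$('.contacts').append(item.el);
    }, this);
    return this;
  }
});
```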
And then just renders the contact form, passes in the model and stuff. And that view class looks like this. Now we're going to see a bit more of that controller logic coming into play. So views have lots of controller logic. And that kind of bakes your noodle with the name views. So I was having a discussion with Dan North about this yesterday. And he, like halfway through, started calling these renders. So think of this as half controller named views. Makes sense, doesn't it? So there are a couple of interesting things going on in here. The first thing is that for this little partial view, I'm going to be handling events. So I'm going to be handling the form submit event. So I'm just setting that up. For the submit event, I want my save function to be invoked. And also, if we click the delete contact link, which is not visible here, because I'm reusing this for the show contact as well here. If we go into edit here, I'm going to show the delete button. It's the same view, just rendered differently with different features in there. And then I just go ahead, render this. Same fashion as we did with all the other views. But whenever I click the done button, the submit button on the form, we have the save event fired. Now, as with just about anything you do in JavaScript, the first thing you do is that you prevent the default event from firing. Because you're telling the browser that, okay, I'm a responsible adult. I know what I'm doing. I'm taking over from here. And then you actually have to do your own binding. And for those of you who are familiar with knockout, this is going to be different. This is something that you miss. Luckily, you have extensions like something called backbone forms that is very similar to a Rails framework called Formtastic if there's anyone doing Rails stuff in the room that just creates and binds the forms for you. But here I'm doing my own binding. And that is actually quite easy. All I'm doing is that I'm using jQuery to go grab the data out of the form that's done here. So I'm just referencing these inputs by name. And then I'm calling the set function on my model specifying which property I want to set on the model. Then I call the model save function, which is one of the places where backbone really excels. And that is the integration with the backend. Now, right here, I'm just using the most simple form of this. But I could also go in here and specify which fields I want to include and a whole lot of different stuff. But what this does is that backbone invokes my backend service for me. So if I head on over here to the database and look at the contacts, let's have this in history. So let's get contact A. You can see that Bodle is now in the database. So she's been persisted. All of that service stack magic that is so earlier has actually been invoked in the background by backbone just because I called the save method here. So it's really, really smooth integration with the backend. And backbone actually figures out for me whether this is going to be a post to create a new entity or if it's going to be a put to update something existing. And when the callback from server comes in, which I hope to be a success, so I'm only checking for that, the save function is going to be invoked. And what I do then is that I just add this to the model. And when I do that, I'll head on back to the contact list view again and look at not that one. Sorry, I changed my code last minute. Here it is. 
In the context view, I'm actually also subscribing to the add and destroy events on my collection. So whenever I add something to the collection, that add event is going to be raised. And I'm hooking that up to my render function. So whenever the collection changes, for whatever reason, I'm going to re-render my entire list of contacts here on the left-hand side. So as I said, this view here is used for two purposes, is used for the edit mode, which you see here, but it's also used for creating new contacts. And there are a couple of differences. You see down here, I have a link, I'd rather not create a contact than done, whereas if I hit this, edit, I have the delete button as well, and the text changes to I'd rather not edit this contact. And all of that stuff actually happens within the template. So I have this form fields template, which is this real repetitive thing, which repeating everything down, printing out the properties from the model using handle bars. And down towards the end here, I'm showing you the little tiny bit of logic that you can do with handle bars, because it's not a whole lot. You can have conditionals. So I'm basically checking if there is an ID property on my model. If there is an ID there, already, I'm going to know that this is an existing model, I mean edit mode. And then I render the destroy button. If not, I'm also going to change the edit this or create a text. So that's it. It was all it took to create that reusable form across. And it's used from the new view here. And if you look at the edit route, you're going to see that I used the same class. So it should probably name this differently from new contact. And when I'm editing these, I'm also doing another thing. That is, the first thing I do is that I whenever I get the ID for the contact in, I go to my collection of contacts, I grab that out. Then I do a fetch from the server to ensure that I have the entire data set. Because I could have chosen to use just a partial data set when I bootstrap this from the server, only include the names, for instance, and keep the rest of the data out of the way. So then I just do contact fetch. Again, this is backbone features to do a get request back to the server. And as soon as that calls back, I create new view instance and I swap that view in. So this is where common pattern creating these kinds of Java applications is that you're going to do some async calls. You're going to have lots of them. These are very async in nature. In fact, that's one of the things that spine does, which is a take on backbone that is more opinionated. And their opinion is that everything should be asynchronous. So it forces you into using this pattern. And this is actually a good pattern. So the last thing that you can do is that I can actually go in here and delete contacts. And this person here I don't really know. So I'm going to go in here, hit delete, and it's gone. What happens there is that I just call the model destroy method and that takes care of everything. It removes it from the collection. So the collection removed event is going to be fired, causing this list on the left-hand side to be rendered again. It's going to do a HTTP delete request back to my server, removing this from the database. And that's about it. Now you've seen this a couple of times now. So you're probably wondering what this thing appears. So this is basically a filter. So I can write inquiries like this and it updates straight away. 
And this is where we suddenly start to rely on the techniques that we're familiar with from creating more traditional sort of web apps. This all happens in the contact list. And if you look up here at my events, I'm hooking the key up event on the contact query, which is this little text field here. And I'm connecting that to my filter function down here. So whenever the user releases this key, I'm going to use jQuery to grab the value out of the text field. And then I'm going to go grab the list of things here displayed in the UI. And just iterate through each one. The comparison to see if the text contains the string entered into the filter box and then hide it or show it. This is nothing to do with backbone. Nothing at all except for the event wiring. This is all jQuery. So backbone really doesn't push anything on you. You have to choose your own tools to use within. And for something like this, you probably use jQuery to do this a hundred times before. And it feels really good to just keep that convention, stick with that convention. Because that's a language you and your dev team already have. You know jQuery. And backbone is going to become the same thing. You're going to build yourself a little language consisting of these four little words. Models, collections, views, and routers. Because that's really everything that backbone is about. It has really powerful event binding and declarative event binding and declarative routing, which is one of the core features in there. It also has really powerful synchronization features for hooking this up to the back end. You saw that there was virtually no code in there to do that. It's much easier than your old dollar.ajax, something, stuff that you've been doing for a while probably. And it's basically just a framework for you to build your own application framework on top of. That means that backbone is probably something that you should use for big apps. And whenever you're going to build like this little proof of concept app where you need to get something up and running much faster, you're probably better off choosing something like Ember, which has lots of features in place for you already. And you will get going with much faster. So this has been like a real whirlwind tour around a backbone and the service stack back end backed, back end backed. So it's a strange thing. App. And one of the things I've tried to do here is to show you something that looks pretty much like a real world app. And I've used backbone and this sort of architecture for a couple of real, large projects now. And my experience with it is that if you follow these conventions, you're going to have an app that is very easy to grow and it's going to be easy to maintain over time. So to do this the traditional way, are we going to have any questions from the audience? Probably. There's probably new to lots of you guys. Yeah, Rob? I was curious. As far as the URLs go, if you decided you didn't want to use push state or a library, you can just go back and see. Yeah. Yeah, it's a very good question. I'm going to repeat that. For the URLs, yeah, whenever I click into this, it's using a feature that is present in modern browsers called push state. And if you didn't want to use that, how would you go about that? Now, one thing that backbone does is that as any other good JavaScript library is that it provides tons of backwards compatibility for browsers that don't support push state. 
So it has lots of nifty features in there to actually make this possible in things like IE6 event. So it supports the push state concept in browsers that don't even support push state. But then again, it's really built around push state. So you're going to have these hash URLs. That's just the way the convention is. So it's built around that. You can't really do your own any different routing. At least I've never seen that done. If you put it in history.start, you say push state true, it'll take hash, it will? Ah. Yeah. I wasn't actually wasn't aware of that. So if I do. Yeah. Wow. I wasn't thinking about that. But then again, it's using push state. Because I screwed up my URLs. Yeah. I'm not going to change that around. Yeah. Yeah, because now you're just using push state. I wasn't even thinking about that. But I'd like to keep it this way because I probably want this to be backwards compatible with other browsers. Yeah. And anyone else who wants to teach me something? It's fine. It's good. Good knowing this. No? Okay. We're pretty much on time. So you have like three minutes to run down and grab something to eat before everyone else shows up. So thank you guys for attending. I hope you. And you hope you all find some time to play around with backbone and switch that all your WCF for service stack. Thank you.
In this talk you'll learn how to structure web applications by using the good old Model-View-Controller pattern in the browser rather than on the server. You'll learn how to build models with key-value binding and custom events, collections with rich APIs and views with declarative event handling. You'll also learn how to connect it all to REST interfaces built with the Service Stack API. After attending this tutorial you'll be able to build modern web applications with super snappy user interfaces using technologies you're already familiar with.
10.5446/51009 (DOI)
I was not doing all the text stuff anymore. And I was missing that. So I left there and I joined a trading firm called DRW. And I joined a very small team writing trading software. And I had culture shock. It is the only way I can describe it. Cos I was fairly convinced that all these things that I knew and I had figured out and learnt and whatever, and had been teaching, I thought that these were all really good ways to write software. And I'm doing a talk later on today that goes much more into what I learnt there in a theme I'm calling patterns of effective delivery. Cos what I found was that these guys were able to deliver like really, really good quality, robust production ready software into a live environment, into a live trading environment, tens of times a day, tens of times an hour. They could just go straight into production. And they're sitting in with traders and they're kind of writing these systems. And they were breaking all the rules. And I was really upset by that. Cos I like rules. Or rather I like understanding what's going on. I didn't know what was going on. I didn't get why what they were doing worked. And so I've spent the last two years kind of researching this and trying to document it and understand what they've been doing. And I've come up with this suite of patterns of effective delivery. And as I've done this, I've realised that there's one pattern that kind of underpins all of these others and they all sort of emerge from that. And that's this idea of uncertainty. And that what these teams were able to do, that other teams I'd seen weren't able to do, was embrace uncertainty and understand the nature of uncertainty and work with it rather than what we usually do which is resist uncertainty. And so what I'm going to try and do with this talk is kind of explain to you a bit about what uncertainty is like, what it feels like and why we're spectacularly bad at managing it and why I think that even after I've told you that I think we'll still struggle with it. So who knows what's going to happen in the next hour. So I was looking at process and looking at how people do things. And process, I think process is a way of managing that uncertainty. It's a way of getting some degree of certainty and predictability and that kind of thing. And it's a way of managing risk. And risk is just a business word that we associate with fear. Because if you say, oh, I fear this thing, people point at you and laugh because you're weak. But if you say, oh, I see risk here, they go, oh, you are a sensible business person. So it's the same thing. So it kind of pans out like this. So with apologies to George Lucas, fear leads to risk. Risk leads to process and process leads to hate and Gantt charts and suffering. And in fact, I discovered a newborn baby is hardwired to fear only two things. One of them is sudden loud noises and the other one is Gantt charts. The other one isn't Gantt charts, but it's an interesting thing to go and Google. So what's the other thing that a newborn baby is hardwired to fear? What's interesting about that statement is that there are only two things.
Every other thing in life that we fear, we learn to fear. Think about that for a minute. All the things you fear, all the things you have anxiety about, you learn to do that. You could unlearn that if you want, but that's a different talk. So I want to focus on this word risk. So risk is an interesting thing. Risk is multivariate. I'll come on to why risk is multivariate in a little while. I have a very simple one-dimensional model of risk mostly, and actually it's more than that. I want to go back now. I want to go right back to 2001. In 2001, a bunch of techies met in a log cabin in Snowbird in Utah. They wrote the Agile Manifesto. Who hasn't seen the Agile Manifesto? Excellent. Who has seen the Agile Manifesto? Fantastic. Who's not putting their hands up this morning? Brilliant. The Agile Manifesto is a work of genius. It's one of these things that's really small and fits in your head and you can carry it around with you. It has this preface. It says, we are uncovering better ways of developing software. So just unpack that. We are uncovering. We haven't done it. We're still uncovering it. Better ways of developing software by doing it and helping others to do it. Through this work, we have come to value these things. We've come to value individuals and interactions over processes and tools. We think we value processes and tools, but we think individuals and interactions are more useful. We value work and software over comprehensive documentation. You should have documentation, but that's not the point. The software is the point. Customer collaboration is more important than contract negotiation. Yes, you need contracts. You need boundaries. You need all that certainty. But the actual collaboration aspect is more important. Again, having a plan is important, but responding to change is more important. We embedded that into our DNA, into our psyche and we said, we can take this forward and we can be agile. Then we came up with methodologies. We came up with XP and Scrum and all this other stuff that goes around with fluffy edges like BDD and some of those other things. I had a bit of a ha moment last year and it was a really sad a ha moment. So I'm going to have to share it with you so we can move on and be happy, but I need to share my sadness with you. This is what I realized. By about, well, last year, but certainly now, we've kind of gone the other way. So I don't see teams talking about how to make individuals happy or how to figure out how to be more productive. I see them arguing about whether we're doing Scrum or Kanban or Scrumban or Scrumbut or XP or Pure XP or, what's Pure XP? Or whether we're going to do BDD or Scrum like their alternatives. I don't know. The tooling, well, I've got to apologise. I've got a mayor call for here. BDD is probably responsible for more tiny little tools than lots of things. But they're getting these massive religious debates about what CI tool you're going to use, what deployment tool you're going to use. This suddenly burns up a load of our energy. What else? Comprehensive documentation over working software. This whole obsession we've got with executable specification. It's still a specification. It's not the software. It's not delivering the value to the business. It's a useful thing. It's a thing we need to be aware of, but we're valuing it over the working software. We're getting obsessing about the documentation side of things. What about contract negotiation? I have a customer collaboration. Surely we're not doing that. 
What are we doing when we commit to a delivery and a sprint? What are we doing in our sprint planning sessions or our iteration planning sessions? What are we doing when we commit to delivering a bunch of things and don't deliver them and then beat ourselves up and then sit around all glumly in the retrospective saying, how can we go better rather than aren't we great? This is fantastic. We're going as fast as we can be going because we're in a system and therefore we are currently the output of that system. If we want faster output, we need a different system. Finally, this whole thing about responding to change. How can you respond to change when you have a backlog of 600 stories? I'll tell you you can respond to change. You can spend hours and days grooming your backlog. What does that mean? Delete your backlog. You don't need your backlog. It's drag. It's slowing you down. This is what I see. All this process stuff, it seems to me, is because we would rather be wrong than uncertain. That's a really unsettling thing to discover. We would rather be wrong than uncertain. We will take something we know definitely to be false rather than say, well, we just don't know. I'll give you an example. I'm a Christian. I became a Christian a few years ago. Whilst researching Christianity to prove to myself it was a load of bunk, so don't do that because they might catch you as well. One of the things I discovered while I was researching Christianity and some other faiths as well was just how horrific some things have been done in the name of Christianity and in the name of loads of other religions as well. It's this idea that faith, which is a simple small thing, C.S. Lewis wrote a fantastic book called Mere Christianity which describes what Christianity is. No, we put religion around it. Religion is a human thing. It's a man-made thing. Religion is all the rules and the structure and the hierarchy and all those kind of things that allow us to say, I'm a Methodist or I'm a Baptist or I'm a Catholic or I'm a... We like to have denominations. We like to have these little separations. It turns out, when you read a bit of scripture, there's another great book I read called The End of Religion. Jesus in the Bible is a religious agitator. The one thing he does again and again is bashes on the Pharisees who are like the established church of the day. If he was around now, he'd be bashing on the Catholics and bashing on the Anglican church and bashing on all the big churches because that's not the point he says. The point is, you have a relationship with God and that's it. Everything else is detail. The church says, you've got your priests and your bishops and your archbishops and then you've got your... I'm particularly upset with the Catholic church when I've done all this research. So they've got this guy called the Pope. The Catholic church believes the Pope is infallible. In the Bible, rule one, everyone's fallible. No exceptions, everyone. One guy ever wasn't fallible, that's Jesus. As long as the Pope isn't Jesus, he's fallible. But no, they have this rule that the Pope is infallible. He's the pontiff. I'm thinking, that can't be right. So similarly, these complex questions become simplistic answers. So things that are complex, there are things that are necessarily hard to solve or things that you can't know. For instance, I'll give you another Christian example. The Trinity. So there's this idea of you've got God, the Father, the Son and the Spirit. 
Are those three things or are they one thing or are they both three things and one thing? To a lot of people, it doesn't matter. The point is, you can't know. The answer isn't in Scripture and that's the only place it could be. So what happens isn't a bunch of people going along and being Christians and saying, we're just going to have to live with that uncertainty. They go, well, we are going to draw a line down here and we're going to say it's three things. We are going to draw a line down here and say it's one thing. Well, then we must go to war with you. Well, we're Christians. We're Christians. I know, let's fight. What? If you decide that it's one thing or decide that it's three things, you are wrong. You are guaranteed to be wrong because the one thing you can't be is you can't know which one of these is right. So by making that decision, you are taking a thing that's wrong rather than living with the uncertainty. Complex questions become simplistic answers. Buddhism has a fantastic model for this. They call them co-ends. Who's heard of co-ends? Right, so a few of you. So co-ends are things like, what's the sound of one hand clapping? Okay. If you read any Terry Pratchett books, it's clur. Now, the sound of one hand clapping, the point about a co-end isn't that you go, oh, I'm just going to look up the answer in the book of co-ends answers. It's that whilst you ponder the question, you have a moment of revelation, a moment of enlightenment, and the Buddhists call it sattori. So the point of the question isn't to have an answer to the question, it's to ponder it and to grow. So I was pondering the question, what's the sound of one hand clapping? The point is that the clap sound is both hands, but you don't get a partial clap for only one hand. Neither one hand can make any part of a clap sound, both hands make a clap sound. And it occurred to me, this is just like pairing. This is just like pair programming. There's a dynamic that happens when two people sit and try and solve something that doesn't happen even partially when one person sits in there. It's something about articulating a problem or sharing a thing or that kind of stuff. Two people pairing is like two hands clapping. There's some metaphor of clap that happens when two people pair programming or pair solving anything that simply doesn't happen elsewhere. And I had this moment of sattori. Oh, it's like two hands. Wow. Again, back to Christianity, it has these things in the Bible called mysteries. And they're underlined. It says this is a mystery. It's not like, oh, what do we think this is a mystery? It says this is a mystery. Judging people. So whether you go to hell is a mystery. Did you know that? No one is allowed to know whether you're going to go to hell. Only Jesus is judged. He decides who goes to hell and who doesn't. So anyone who says, well, I know that someone says he's going to hell or I know that someone says he's going to heaven is lying. They're wrong. You can't know. And so what we do instead is we introduce this idea of doctrine. And so interpretation then becomes dogma. And I think the saddest example of this is the Spanish Inquisition. Ha ha, no one was expecting that. Sorry. No, so the Catholic Church blessed them. They invented the crime of heresy. And heresy is the crime of not believing what the Pope believes. It's basically it. So if you decide that your interpretation of the Bible is different from the Pope's and the Church's interpretation of the Bible, we burn you. 
That's what we do or we stick you on a spike. That's how understanding and tolerant and loving we are. That's the way we roll. And you look at that and you go, when did the wheels come off? What part of love your neighbor and love your enemy and love God and love, love, love, love, love, love is and we're going to put you on a spike? Because you don't agree with the guy that we decided is always right, which means we have to be wrong. And this is what we do. We build up these constructs. So we resist uncertainty. So, okay, enough theology back to software delivery. So what do we do? We resist uncertainty of scope. How do we resist uncertainty of scope? We create this illusion of certainty with things like planning and estimation and scoping and our story backlogs. The epitome of this that I saw, which I don't know, it caused me to write an article a few years ago called The Perils of Estimation. It was a spreadsheet and the spreadsheet for this project and it was used on several projects. It became the spreadsheet. It had different columns and so for each story you would have an estimated, a high, medium and low estimate. You have a variable for risk, a variable for volatility, how likely it was to change. A variable for, I can't remember what the other ones were. There were seven columns, okay, for every single story. On one project, the way they've done two weeks of planning, they had 400 line items in this. Imagine the effort that's gone into this. Now let's just go right back to kind of 1970s standish report stuff. Let's assume that two-thirds of that aren't going to be useful. Okay, only one-third of that list of stuff is ever going to be delivered, or rather ever going to be used as features. Another third roughly is going to be used but in a completely different form and the other third is just going to be deleted. It's never going to be used. And some other stuff is going to come in in the meantime. How much extra effort and work went into creating this massive thing? It's an illusion of certainty. It's a thing we do to give us a sense of security. We resist uncertainty of technology. So we have blueprints. We have white-coated architects handing down tool chains and standardised tools and standardised development stacks. Because again, we want to resist this uncertainty. We resist uncertainty of effort. This is fantastic. Who's read the book Slack? Anyone? Read the book Slack. Rachel's read the book Slack. Rachel's read all the books. Read Slack. It's fantastic. The point of this is, and especially read the goal, Eli Goldrath, the goal, is I've seen it in the bookstore here. Going by it, it will change how you think about software delivery, I promise you. The point is that in a globally optimal system, you will have lots of local non-optimisations. Or rather, let me invert that. If you locally optimise everything, you will end up with all sorts of bottlenecks in the system. And it's really, really hard to do anything with those because everyone is working at maximum capacity. So you necessarily need areas of Slack and areas of low effort, if you like, activity in a system for it to be optimally conditioned for flow. Now, if you don't know how to measure the throughput of a system, you'll never see that because all you're doing is measuring the local effort. And so we end up with metrics like cost per use. We buy a really expensive licence for, say, I don't know, Oracle database or something like that, a tool kit. And we say, well, this cost so many hundred thousand dollars a year. 
And so if we divide it by the number of projects we can foist it on, it becomes cheaper. It's still the same amount of money. You still spent it, you might as well have burnt it because every single project you're putting on it, if it's not appropriate, not only have you spent the money, you've slowed down the project, win. And we do this because we resist the uncertainty of this effort and this investment thing. And we resist uncertainty of structure. We are, man. This is where churches come from. Sorry, back to the audience. We're like this hierarchical churches come from. I'm looking at, I'm reading about Islam at the moment as well. And Mohammed, it turns out, fairly early on, says, don't have structure. He says, there's no hierarchy in Islam. Jesus, all the way through, says, oh, there's no hierarchy in Christianity. So what do we have? We have imams and we have archbishops and we have cardinals and all this kind of nonsense, yeah? Because we need some certainty of structure. And particularly hierarchical structures. Why do you think we have hierarchical structures? OK. I've got a theory. My theory is this. We want someone above us telling us what to do because that gives us certainty. And they want someone above them telling them what to do because that gives them certainty. Or another way to look at it is somewhere to pass the buck when it all goes wrong. OK. But at some point up that hierarchy, you can see it thinning out above you and you know you don't know where you're going. And so you start to think, right, now what I can't do is tell these guys that I don't know what I'm doing because there's too many of them. And they'll be terrified. So I'll just act as though I know exactly where we're going forward, everybody. Follow me. And so we end up with these hierarchies. In the same way, the whole sort of waterfall gated delivery model came out of a desire for certainty. Well, if we have these gates, these various different points, we will know that something is true at this point. No, you'll know that everyone's terrified of giving you bad news. So we'll rub it under what we'll have is a series of rugs under which we put bad news until the end when it fails, but everyone's left by then. So it's OK. So we resist uncertainty of the future. OK. The future is uncertain. A famous baseball player called Yogi Berra, he wasn't very well educated, but he was a brilliant baseball player, and he used to come out with these accidentally very, very profound statements. Like, when you come to a fork in the road, take it. And he famously said, I don't like to make predictions especially about the future, which I just love. So now, let's have a look at what uncertainty looks like. We have a model of risk, and our model of risk looks like this. A risk is a big, messy space, and it happens on two axes. You have an axis of impact, which is how bad it will be when something goes wrong, and likelihood, which is the probability of something going wrong. Does that sound reasonable? Right. Think about your software process. Think about how we do stuff, and look at which of those we're trying to optimise for, we're trying to manage. So our mental model of likelihood is a probability where scientists, OK, it's a number, it's a real number in the range 0, 1. And somewhere along there is the likelihood of a bad thing happening, and we want to make it as close to 0 as we can, because then the bad thing won't happen. Does that sound reasonable? And that's what we do. 
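To put the two-axis model into numbers, the sketch below treats exposure as likelihood multiplied by impact. That formula is a common risk-management convention, not something stated in the talk, and the figures are invented.

```typescript
// Illustrative only: a common way to quantify a risk is
// exposure = likelihood * impact. All figures below are invented.
interface Risk {
  name: string;
  likelihood: number; // probability in [0, 1]
  impact: number;     // cost if it happens, in some currency unit
}

const exposure = (r: Risk): number => r.likelihood * r.impact;

const badDeploy: Risk = { name: "botched release", likelihood: 0.1, impact: 100_000 };

// Strategy A: pile on process to make the bad thing less likely.
const moreProcess: Risk = { ...badDeploy, likelihood: 0.02 };

// Strategy B: accept that it will happen sometimes, but make it cheap to
// recover from (small releases, fast rollback), attacking the impact axis.
const cheapRecovery: Risk = { ...badDeploy, impact: 2_000 };

console.log(exposure(badDeploy));     // 10000
console.log(exposure(moreProcess));   // 2000
console.log(exposure(cheapRecovery)); // 200, and likelihood never had to reach zero
```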
A lot of our risk mitigation is about minimising likelihood. Then we have this mental model of impact. What will happen if the bad thing goes wrong? And our mental model of impact is this. Infinity. When anything goes wrong, it'll be a disaster. People will die. People won't die. So we optimise all of our software process around minimising likelihood, rather than minimising impact, or even looking at impact. An awful lot of the ways that we can embrace uncertainty is about flipping that axis, because the thing you can't do, you can't ever make that likelihood 0. Going forward, assuming that because it's small, it may as well be 0, is a recipe for disaster. That's that lying to ourselves thing again. We would rather believe the lie that there is zero chance of this thing happening, than embracing the fact that since there is a non-zero probability, it will happen at some point, and we don't know when. Thinking about how we might change our behaviour because of that. So this is our model of risk. What would embracing uncertainty look like then? What would it look like if we went forward and said, I want to embrace uncertainty, I want to accept that there is uncertainty, and weave that into how I think about things. Kent Beck, when he first wrote down Extreme Programming, when he first documented it, his strap line for his book was Embrace Change. I think he missed a trick there. I think what he was talking about was embracing uncertainty. A lot of the resistance I hear to things like XP is, we know what this project is about. This project isn't going to stop being about this halfway through, so we don't need to embrace change. Right, fine, but you can't know everything that's going to happen going forward. So there is uncertainty there, and so all these kind of practices do apply. And it's actually about embracing uncertainty. So let's look at this. If we now go back to our idea of scope, how would you embrace uncertainty of scope? What are some things you could do? Interactive part. We're Norwegian. Carry on talking. Scrap most of the backlog. Who wants to burn this man? What could you do instead? If you scrap most of the backlog, what could you do? Say something about what you want to be better in some way for the users. Right, you could say what you want to be better for the users. There's a lovely model called rolling wave planning. Rolling wave planning says we know roughly what we want to achieve with this spend, with this project, with this piece of work. We want to reduce operating costs. We want to make this thing more reliable. We want to enable online transactions. We want to encrypt our passwords, LinkedIn.com. What is anyone on the planet not storing passwords in Clare Text these days? I saw a tweet this morning that said I wonder if the director of security for LinkedIn has updated his profile this morning. If you haven't heard, LinkedIn.com was hacked yesterday. Your password is in Clare Text out on the internet somewhere. If you have a password for any other system that's the same, go change it. That's quite important. The idea of rolling wave planning is that you have these big rocks, boulders, that are the things that are the point of this project. As some of those things become important, you chip a bit off so you have smaller lumps, little rocks. In the near term, which is maybe a week or a couple of weeks out, you then break those things down into manageable, deliverable chunks. You're doing this continually. 
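The rolling wave idea can be pictured as a very small data structure, as in the sketch below; the type and field names are invented for illustration, and this is only the shape of the plan, not a tool or a prescribed format.

```typescript
// Illustrative shape of a rolling-wave plan: detail only where it is imminent.
// All names here are invented for the example.
interface RollingWavePlan {
  // Big rocks: the reasons the work is funded at all.
  goals: string[];
  // Medium rocks: the next few themes, in current (changeable) priority order.
  upcomingThemes: string[];
  // Small, deliverable chunks for roughly the next week or two, and no more.
  nearTerm: { story: string; inProgress: boolean }[];
}

const plan: RollingWavePlan = {
  goals: ["enable online transactions", "reduce operating costs"],
  upcomingThemes: ["self-service password reset", "audit reporting"],
  nearTerm: [
    { story: "hash stored passwords", inProgress: true },
    { story: "spike: evaluate password-hashing libraries", inProgress: false },
  ],
};

// Re-planning is cheap: chip a piece off a theme when, and only when, it
// becomes the next most important thing.
plan.nearTerm.push({ story: "rate-limit login attempts", inProgress: false });
```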
Your backlog is the next half a dozen things you're interested in. Then maybe the next two or three big-ish themes in order of current priority, because it might change tomorrow. Then going further out is, oh, and here's the other stuff you want to achieve. It turns out you can have a surprising amount of certainty in terms of delivery, in terms of direction, with just that. That's a super lightweight way of doing things. The team I was working on at DRW, they have a whiteboard. Every Monday, what they do with the whiteboard is they go like this. Then they write up the themes for the week. They just chat and they write them and maybe three or four things they're going to try and do. Then they have a corkboard, which is the work in progress index cards. They move stuff across the corkboard. They have a weekly planning session. It takes them 10 minutes. They have a 10 minutes of, what are we going to try and do this week? It's not just the developers, it's traders as well. They're saying, what kind of stuff do you want to see? What should we experiment with? What should we explore? That stays on the board for the week. As they're coming up with things to do, they stick them on cards. That's where they have the little stand-up is around the cards, around the corkboard wall. That's their process. That's all of the planning they do. They're measured by how successful the software is that they produce. How about that? The software they put into production makes money or doesn't make money. The stuff that makes money, they make it better so it makes more money. The stuff that doesn't make money, they either make it make money or they throw it away. They've got a pretty good metric for whether software is successful or not because they're trading. That's how they can embrace uncertainty of scope. Embracing uncertainty of technology, we've got, there was a pattern I talked about yesterday and I'll be mentioning again later on today called spike and stabilise. The idea of spike and stabilise and in engineering, they call it concurrent set-based engineering, which is a much bigger word. But concurrent set-based engineering is something like this, is I'm Boeing and I want to write a fly-by-wire system. What I would do is I will take two or three vendors and I will engage all of them to create me a fly-by-wire system in isolation. They're not allowed to talk to each other. I'll pay all three of them. It's not like the winner gets the gig. I'll pay all three of them. Then at some point I will make a decision. I will exercise an option to go with one of those three systems. At that point I then disengage the other two and just stay with that one. Concurrent set-based engineering is wasteful. It's not efficient, but it's very effective. It means I get to where I want to be, which is having a really robust fly-by-wire system much faster than going through a series of experiments and evaluations and committees and all that kind of nonsense. Spike and stabilise is the same thing. Try loads of different things. Try lots of right software. Don't worry about making it robust and test driven and production quality and all that stuff. Get it out there and see if it seems like it's going to help. If it does seem like it's going to help, then make it robust, then stabilise it, then give it the love. You can embrace the uncertainty of technology. You can accept that you don't know whether a particular technical solution is going to be the right one. It's okay not to know. 
It's not okay to both not know and pretend you do. We've got a word for that. Embracing uncertainty of effort. I would ask you to look at theory of constraints and Eli Goldrath's book The Goal, which is a lovely book. It's a book about a guy who's in a factory and is failing and is in a marriage and that's not going so well either. He has a school teacher who he bumps into. A school teacher introduced into theory of constraints through a series of conversations. It's a lovely narrative arc over the story. I had a 90 degree shift. It was brilliant. Where you start thinking rather than in terms of effort and activity, you're thinking in terms of throughput, results. Why does it take this long to do this thing? If you break down the steps of this process, you discover that maybe 10% of the time you're doing stuff and the other 90% of the time you're waiting for stuff to happen. You're waiting for handoffs or you're waiting for things. What we do is we optimize the 10%. Now imagine you can optimize that 10% infinitely well. You still have 90% left. We obsess about this stuff rather than chopping down this stuff, which is a much easier target, but we don't see it. It's behind the scenes because this is where the activity is happening. We can embrace uncertainty of effort. We can say we know in order to get a globally optimal system, there will be areas of slack and there will be areas of waiting for things and there will be areas of buffers and those kind of things. That's okay. If we start obsessing about that slack, we can get into all sorts of trouble. If instead we say, let's measure the throughput of this system. Let's see how much we're investing in this system, how much we're getting out of this system and what the cost of operating this system is. If those three things seem to balance, the system is fine. This goes back to a lot of the self-organizing stuff. If you weren't in Roy Osherow's session just now, you missed a lovely Scrum Master's lament to the tune of Adele. It was rather lovely. I kid you not. It'll end up on Vimeo at some point. He's talking about self-organizing teams. The point about self-organizing teams is they can see the stuff that needs to be done if you give them a vision. If you say to them, the way you contribute to this bigger system is by doing this thing. If you can do this thing a bit better, the whole system will benefit because of this. You go, okay, we can go figure out what that means. That's a useful thing to do. Rather than saying, well, how much percentage are you utilized? What's your percentage utilization? How many of you are using a license for this thing that we have a site license for because I want to reduce my cost per license? Yeah. Then we embrace uncertainty of structure. This is where we get into this idea of generalizing specialists. Again, in the teams I'm working in now, we don't have testers. We don't have analysts. We don't have programmers. We have a bunch of people. All those bunch of people are wearing those different hats at different times. We'll do our own build and deployment. We'll do our own programming. Sometimes we'll do our own trading. A couple of the programmers know more about trading and how the trades work than some of the traders. Likewise, the traders are becoming suspiciously technical. What they'll do is they'll work up an idea they've got in Excel. They'll say, we think there might be something interesting here. Hand it off to the programmers and they go, yeah. 
Let's just carry on with this in Excel for a bit and see if we can... Right, well, now I'm going to turn that into an application. How about that? Ka-ching, money. The innovation comes from all over the place. Again, you don't get this siloed thing where, as an analyst, I'm allowed to think analyst thoughts. As a tester, I'm allowed to think tester thoughts. It's like I'm in a team and I'm trying to make the team go faster. If I have an idea, I don't need to be able to solve it if I can spot a thing that could help the team, that's okay. What I want to do is give you a couple of tools, if you like, that I've encountered that help me embrace uncertainty. I hope you'll find them useful. Real options. A few years ago, a chap called Chris Matz, he was a business analyst at ThoughtWorks a few years ago. With me, I was a developer there. I was talking about this thing called behaviour-driven development. He realised that the thing I was trying to teach TDD with applied to analysis as well. We were going, oh, that's really cool. We should do that. He carried on thinking about this. He took the idea of financial options and applied them to any decision you make. This is the complicated bit. Does anyone know what financial options are? Excellent. Could you explain to these guys what financial options are? No, shaking your head. It's not that complicated. A future is a contract to transact at some point in the future. We might agree that it's now June. We might agree that in August, I'm going to buy so many Krona from you and I'm going to give you so many pounds. We decide that now. In August, we're going to do that transaction. It may be that the value of Krona has gone down, in which case I win, and it may lose. Maybe the value of Krona has gone up, in which case I win. That's a future. It's a contract to transact at some point in the future. There's uncertainty associated with that. An option is the right, but not the obligation to do one of those. In other words, an option is like an insurance. It says, I'm going to buy the right to buy so many Krona for so many pounds in August if I choose to. That's an option. In August, I've got two choices. If all these Krona have gone up in value, I should do this and I do the transaction, and you have to transact with me. That's what the option contract says. If the Krona's value has gone down and the pound has gone up, I'm fine because my pounds went up. I take this option and I tear it up. I don't exercise it. That's an option. The point about an option is options have value. At any point you can plot on a graph, using all sorts of complicated mathematical models, the value of an option's contract over time. That's the first thing about an option. It has value and the value is a function of time. The second thing about an option is it expires. There is some point at which the option is now worthless. It's now a contract that happened in the past or didn't happen in the past. That's how options work. What Chris Matz and a Dutch guy called Olaf Marston did is they applied this idea to every decision and they said, well look, options have value. Any option, not financial options, real options, any choice you can make has value. If it's the choice when to buy my servers. I could choose to buy my servers now. What does that mean? That means I've sunk some cost now that I could have kept in the bank for a bit. It also means I have my servers on their way. I now know my servers are three weeks away because that's how long it takes them to deliver servers. 
I've bought an amount of certainty of when my servers are going to be arriving. What have I also done? There's the opportunity cost. There's money that I just spent on servers. I'm now not able to spend on some other stuff. There's a bunch of factors in that decision to buy servers. Cloud versus data centre. Do I buy my own servers and put them in a data centre? That's called capital expense. I bought stuff. Or instead do I rent space in the cloud? That's now operating expenses, like a monthly rental. Again, there are different decisions associated with that. All these options have value, and that value changes over time. If you think about my server purchase decision, as I get towards when my project is due, that option is only going to be useful up until if I've got this three-week lead time. If I don't buy servers at least three weeks before the project ends, I'll be late. I won't have the servers. Whatever else I've done in terms of software delivery, I won't have the servers. Options expire. They have a value that changes over time, and they expire. What he said was, in which case, then commit deliberately. In other words, know why it is. Never commit early unless you know why. We commit early. Why do we commit early? Because we fear uncertainty. Committing when we have insufficient information, i.e. when we've got a fairly high likelihood of being wrong, is more comfortable to us than living with that uncertainty. Knowing that the uncertainty is the better place to be, we don't have a good reason to exercise an option, so let's not yet. We know that's the right thing to do. We know that committing early is a wrong thing to do because it eliminates a bunch of options that may be useful. We still do that. We do it all over the place. We do it in our sprint planning. Especially anyone who's doing four or six-week sprints. You just close down the option of all the things you could do for the next six weeks. Then, bang. Fantastic. Well done. That's making the assumption that you're going to learn nothing over the next six weeks. If that's also true, that's really sad. I don't believe you. I believe you'll learn tons of stuff over the next six weeks, but you won't be able to action any of that without causing pain and suffering and meetings. So commit deliberately. Don't commit early unless you know why. Buying hardware, buying licenses, any kind of cost decision is almost always an early commitment, an early investment. Technical decisions. You can make technical decisions such that you leave options open. Bob Martin this morning was saying one of his criteria for good architecture is that it allows you to defer decisions about tooling and about the tool chain you use and about the technologies you use. You can defer those decisions. Good architectures isolate little subsystems so that you can make independent decisions within those subsystems. You're not coupling decisions to each other. In other words, exercising an option here doesn't cause another option over here. Sorry, it doesn't close off another option over here. So real options is a model. Deliberate discovery. This is something I wrote about a couple of years ago. I wrote an article called The Perils of Estimation and basically saying I think that estimation as a process isn't useful. I want video saying that. I'm now, that's going to be on the internet. I think estimation, certainly in the way we do it on projects, is pointless. Actually that's a bit mean. I think that by doing estimation you will learn stuff. 
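To make the real options model concrete: an option has a value that changes over time, it expires, and you exercise it only when you have a reason to. The sketch below is an invented illustration of those three properties applied to the server-purchase example; the dates, costs, and the crude value curve are assumptions, not part of the published real options material.

```typescript
// Illustrative sketch of a "real option": the right, but not the obligation,
// to commit to a decision later. Names, dates and costs are invented.
interface RealOption {
  decision: string;
  expires: Date;                        // last responsible moment; worthless afterwards
  valueIfDeferred(now: Date): number;   // crude stand-in for an option-value curve
}

const buyServersNow: RealOption = {
  decision: "order physical servers (3-week lead time)",
  // If go-live is 1 September and lead time is 3 weeks, the option to keep
  // deferring the purchase expires around 11 August.
  expires: new Date("2012-08-11"),
  valueIfDeferred(now) {
    // Deferring keeps cash and alternatives open, so the option is worth more
    // the earlier we still are; past expiry it is worth nothing.
    const daysLeft = (this.expires.getTime() - now.getTime()) / 86_400_000;
    return daysLeft > 0 ? daysLeft * 1_000 : 0;
  },
};

// "Commit deliberately": exercise the option only when there is a reason to.
function decide(option: RealOption, now: Date, reasonToCommit?: string): string {
  if (!reasonToCommit && option.valueIfDeferred(now) > 0) {
    return `defer "${option.decision}": no reason to commit yet`;
  }
  return `commit to "${option.decision}" because ${reasonToCommit ?? "the option has expired"}`;
}

console.log(decide(buyServersNow, new Date("2012-06-07")));
console.log(decide(buyServersNow, new Date("2012-08-10"), "lead time now equals time remaining"));
```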
So by doing the process of estimation and planning and all those kind of things you'll discover things and those things might be important. You might discover that actually there's a whole bunch of technology decisions you haven't thought about. There's a whole bunch of interfaces with third party systems that suddenly come up while you're discussing the detail of a story and planning. So actually there's enormous value in having those sessions, getting everyone together and doing the thing we call planning and estimation. But because what we're trying to do is create a big backlog, what we do, any of those really important discoveries are accidental. We do accidental discovery. Whilst doing the objective of the session, which has come up with 400 stories with seven data points on each story, we accidentally discover that there's some architectural decision we need to make. We accidentally discover because the business sponsors are in the room that when the guys are talking about where they're going to put the data centre, the business guys talk about latency and they say, well if you have the data centre over here, it's going to take too long to get messages to the exchange over here and so that's bad. Oh crap, we didn't even think of that. Right, now what do we need to do? We need to go and do stuff. But those things, there's an Australian agile coach I was working with. He called them oh shit moments. So I'm British so I call them oh crap moments. So he says, you know those moments that even when you're done all your agile and your BDD and your Scrum and your whatever else, and you get in towards the end and you go, oh crap, we didn't plan for that capacity. Oh crap, we didn't think that all these things would come out of testing. Oh crap, dda, dda, dda, dda, all these things. He says, what I want is to pull those old crap moments back. I want to make those things happen earlier because I want the actual release in the path to production to be really, really boring. So deliberate discovery says this, what if instead of pretending that we've made the likelihood come down to zero, we assume the following statement, some unexpected bad things will happen. I want you to write that and put it on a wall above your desk. I want to have posters of this. I'm going to unpack this, some. Some is a non-zero amount. On your project, on the next project you do, in fact on the current project you do, I'm going to go with three. Maybe two, but not none. You don't get to choose none. You get to choose a number bigger than none of things that are going to happen. So a non-zero number of bad things are going to happen. Unexpected is this, you cannot plan for them. It's what Donald Rumsfeld calls the unknown unknowns. So all of the planning, all of the contingency you could possibly do, this thing will still jump out and bite you. Oh, doesn't that suck? So a non-zero amount of things that you cannot possibly plan for are going to happen. Bad things means that they will adversely affect the delivery of your project. They're not a thing that you can suddenly sidestep. They will get you. So let's do this again. Some unexpected bad things will happen. What would you do differently if you were to assume that? What could you do differently? This is where I was going with the whole perils of estimation thing. Yes, get all those stakeholders in a room, because that's really important. But don't do this story thing. Do whatever you can think of to do. 
There's tons of exercises and games and things you can do, group activities you can do, that are going to help you discover this stuff. This is deliberate discovery. So assume your second-order ignorant. Do you know what I mean by second-order ignorant? Second-order ignorant is you don't know that you don't know. First-order ignorant is I don't know how to do something. So if I'm five years old and my mum drives me everywhere, I don't know that I can't drive. Because I don't care that I can't drive. It's not even in my world that mum operates the car. I get in the car with mum in it and I go places. When I get out, I get out of the car with mum in it. When I come home, the car turns up with mum in it. It's fantastic. It's like they were made for each other. At some point, I'm about maybe 17 and I want to take my girlfriend on a date. I'm very, very aware that I can't drive. I'm now first-order ignorant. I now know that I don't know how to drive. Second-order ignorant is I don't know what I don't know. In other words, you don't know what unexpected things are going to happen. That's what I said about the unexpected thing you can't predict. What you can't do is make a list of all the things it might be. Or rather, you can and the thing it is won't be on that list. Make as long a list as you like, it still won't be on that list. Your second-order ignorant of the thing that's going to bite you, of the non-zero number of things. Are you uncomfortable yet? This is reality. This is what actually happens. This is why we're so wrong-footed by it all the time. Now let's assume that you can actively reduce your ignorance. The way you do that is you get together and you think of all the axes. It's a group activity. The great thing with group think is they will come up with stuff on that list that you wouldn't have. Aha! There's your second-order right there. Brilliant. So, trading, market conditions, maybe a thing that one of the business guys, one of the traders comes up with, that as a techie I might not have come up with. We end up with all these different axes, all these different vectors along which we might be variously ignorant. Then we could maybe put a number against those or a low, medium, high of how comfortable we are with that part of the domain. Maybe the technology stack, maybe how we're going to integrate with our third parties. Maybe one of the things that I was doing was connecting to exchanges, financial trading exchanges. Every single financial exchange when you connect to them is subtly and evenly different from all the others. Writing software that connects to an exchange is a black art. I tried it once and I sucked at it. I spent a long time trying to be good at it and I sucked at it. We've got guys at the other who are really, really good at it and they scare me. But you can actively reduce your ignorance. If you know that you're going to be connecting to a new exchange, you know right there there's going to be dragons. Let's do that thing early. Let's do that really uncomfortable thing early because then we'll have a sense of certainty around it. Then there's this idea of double loop learning. I call Chris Argeris. He talks about this. He says double loop learning. Single loop learning is your lean plan-do-check-adapt cycle. The idea is you decide that you're going to do something. That's your plan. You say we're going to start automating our builds. What do we want to make better? Let's choose a thing we want to make better. We want to make our mean time between deployments. 
We want to bring that down. We want to make it quicker to deploy stuff. Let's start automating builds because that takes a load of time. The plan stuff is you baseline. You say we're going to measure how long it takes us to get stuff into production at the moment. What's our mean time? Then do is you then do this thing, like you maybe automate your build. Check is you then see now what kind of impact that has. Did it make it better? Adapt is now what? That was really cool. Let's do another cycle of that. Let's do it again. Let's automate the next thing. Actually, that made things worse bizarrely because the kind of thing we have has so much uncertainty in it that we're not ready to automate it yet. That was a dumb thing to do. We've introduced more issues. Let's back it out. That was cool and we're done now. What's the next problem? That's your app. Plan, do, check, adapt is a common mantra, if you like. By the way, what most people do when they're doing any kind of process improvement is they just go do, do, do, do, do, do. That's what they do. They don't do any planning or checking or adapting. They just do stuff and they keep on trying stuff. Unless you baseline where you are and measure things, you can't tell whether you're improving, but that's one of the topic. Double loop learning says, okay, well, we've got this cycle and we've got this way that we're moving forward. Let's step back and see if that cycle is the right cycle. Let's see if we can learn about how we're learning. Are there more effective ways we can learn? For instance, using deliberate discovery rather than story-based planning is a double-loop learning exercise. It says, let's look at using the time with those people in a room and see if there's a more effective way we could use that time to learn what we learned to reduce our uncertainty, reduce our ignorance on this delivery. So here's why you won't believe me. I don't think you'll believe me. I don't think you'll believe me because you're hardwired not to believe me. The first way you're hardwired not to believe me is I think what attribution bias. Attribution bias says this. It says, when a bad thing happens to Dan, it's because Dan's stupid. Dan should have seen it coming. Come on, Dan. When a bad thing happens to me, well, it could have happened to anyone. How could I have seen that coming? That's called attribution bias. When bad things happen to other people, well, they should have seen it coming. When bad things happen to me, well, no one could have predicted that. Surely that was just bad luck. That's attribution bias. We are all wired to do this. It's how we protect our egos. There's a book by a lady called Cordelia Fine called A Mind of Its Own. There's a whole load of stuff in there about this. It's wonderful. It's a really great read. Confirmation bias is the next one. Confirmation bias says, I will seek out data that confirms my position and I will delete data that doesn't confirm my position. Again, we are all wired to do this. There's a great behavioural psychology experiment that demonstrates this where you take two radically opposing positions, two people with radical groups, radically opposing positions, give them the same data and see what happens. The example that Cordelia Fine uses in her book is you take pro-life and pro-choice, pro and anti-abortion, which is a really, really volatile topic, proponents, and you give them infant mortality data. Both groups look at the same data and go, see? See? I'm right. They go, look at the data. 
See, this proves it. This proves you should be pro-choice. This proves you should be pro-life. The data, they will interpret the data in such a way that it confirms their belief. Again, Chris Argrist has this model called the Ladder of Inference, which I'll just mention. Go and Google it. It's a very, very powerful model. Haven't got time to talk about it now. But yes, a confirmation bias says, I will seek out confirming data. If what I want to believe is that this project will succeed, I really want it to succeed, I will only, I will roast in everything. More importantly, if I want my boss' approval, I will roast in the message I give to him or her. I like being approved of. I like being liked. I'd rather be liked than, I don't know, be right, be honest, have integrity. Oh dear, there I go. So, but my favourite one is bias bias. Bias bias is the reason that 84% of men are more than averagely sensitive lovers. Okay. And why, in excess of 70%, I think it is actually in the 80s again. I'm going to go with men again, are better than average drivers. Okay. We know this can't possibly be true, because that's what average means. Yeah. But what happens is you decide that, on any scale, you kind of, again, it's about protecting your ego. Well, I reckon I'm better than average driver. I wonder what an average driver would do. I would, obviously, I'd be far more considerate than average driver. I know my stopping distance is better than an average driver. I'll let people out more than average. Yeah, I'm higher than average, higher than average. Everyone's higher than average. Okay. My favourite thing about bias bias is it applies to bias bias. So having told you all that, you're all sitting there going, yeah, but I'm not as biased as the next guy. So I'm okay. Okay. And this is this whole uncertainty thing. You're going to be really uncomfortable with that. So I'm just going to finish with this. I'm going to say, yeah, there's craving for certainty. This is why you're not going to believe me, because what you're going to do is you're going to put mental constructs in place that give you the illusion of certainty. Okay. I've been talking for the last two years about this idea of embracing uncertainty, and I'm still desperately uncomfortable with it. Chris Matz, he's built this whole model of real options for embracing uncertainty, and it scares the bejesus out of me. It's an astonishingly powerful way to think, but it messes with your head. Okay. So this is the TLDL. This is the Too Long Didn't Read. Expect the unexpected. Okay. Expect the unexpected because it's going to happen, and in fact, I would go further. I'd say expect the unexpected ball. Right? Even now I've said expect the unexpected, you can't go off and make a list of unexpecteds. You don't get off that lightly. Okay? You've got to live with this. Anticipate ignorance. Assume that there is stuff of which you are ignorant, and assume that there's stuff you can do about it. And then, I guess my party message is embrace uncertainty. It's inevitable. The one thing you have absolute certainty of is uncertainty. The one thing you can be sure of is a non-zero number of bad things that you can't predict will happen to you on your next project. What are you going to do about that? Thank you. APPLAUSE
Agile software development was born ten years ago, with a gathering of industry luminaries in Snowbird, Utah. They were frustrated that so much ceremony and effort was going into so little success, in failed project after failed project, across the software industry. They had each enjoyed amazing successes in their own right, and realised their approaches were more similar than different, so they met to agree on a common set of principles. Which we promptly abandoned. The problem is that Agile calls for us to embrace uncertainty, and we are desperately uncomfortable with uncertainty. So much so that we will replace it with anything, even things we know don’t work. We really do prefer the Devil we know. Over the last year or so Dan has been studying and talking about patterns of effective software delivery. In this talk he explains why Embracing Uncertainty is the most fundamental effectiveness pattern of all, and offers advice to help make uncertainty less scary. He is pretty sure he won’t succeed.
10.5446/51010 (DOI)
[unintelligible] I want to use the company that I worked at, LMAX, as an example, and use that to explore the approach that we took during that engagement, and use that to explain how it all fits together, how the process works, and the sorts of things that you need to think about. I know that the recommendation for giving presentations is to have one idea that you hope your audience will take away. I've actually got three ideas that I hope you will take away, so please stay wide awake, and I'll tell you what they are so you can look out for them. The first one is what I've just described. I want to give you an overview of the process and the approach to it. The second one is, really fundamentally, where this process came from to my mind, and that's the application of the scientific method to computing. It's all about feedback loops and trying to establish, and build on, what you know to be working, what you know to be good. The third one is again what I've alluded to. What's really possible when you start with a clean sheet of paper and apply these sorts of techniques fairly rigorously? So here's my agenda. I've started doing the context setting. I want to do a little bit of the theory just as a lead-in to what we mean when we talk about continuous delivery. Then, using this example case, I'll walk you through the process that we developed in this engagement, and show you some of the supporting tools and techniques that we used to achieve that. I should say I've got about an hour and a half of content fitting into an hour, so I'll try and leave questions to the end. If there's anything really urgent, if there's something that's really not clear, wave anyway and I'll stop and take the question. But I want to try and flow through as quickly as we can. So, what is LMAX? LMAX stands for the London Multi-Asset Exchange. It's a retail trading facility, and I was engaged by an old friend of mine who was the CTO at the point of inception of this. And what it is, it's a financial institution. It's trying to bring retail trading to anybody that can sign up on the internet and have the same kind of experience that financial institutions have when they trade on large exchanges. That had a number of challenges in terms of high performance computing and all sorts of things. But the reason why I think this is relevant to the presentation that I'm giving today is really just that I want to paint you a picture. This is a fully-fledged enterprise system. This has all of the properties of a full-blown exchange, some of the properties of a bank, all of the properties of a web company. So we were doing clearing and trading and matching and financial reconciliation, all of those things. This is a big, complicated system. It's not a toy system that's just doing crude things to a database somewhere. It's more complicated than that. So what's continuous delivery?
So if you read the agile manifesto, it's the first statement. And really it's what it's all about. What is software? When is software important? It's nothing until it's in the hands of its users. Software development process is all about delivery. It's all about getting software into the hands of its users so that they can do something useful with it, make some money, get some advantage, have some fun, whatever it is. It's kind of the logical extension of continuous integration. So continuous integration has been around for quite a long time now. Can I just have a show of hands of people that are familiar with the concepts of continuous integration? So keeping the build working all of the time. It's really the extension of that into more of the process. So particularly at the back end, where in my experience as a consultant over the years and in various other engagements, the deployment process was often a source of fairly serious errors that I spent too many late nights sitting up trying to patch stuff together, because for some reason we'd screwed it up and the software wasn't deploying correctly on the day when we were trying to deploy it. So trying to get rid of that cost and trying to make it smoother and simpler and less error prone and less nerve-wracking process to get the software into the hands of our users. Another way of looking at it is kind of a holistic approach to development. Actually, if you look at this as a whole process, it's about all of it. It's kind of from how the business capture requirements, how that feeds into the development process, how the developers create software that's usable and how they get it into the hands of their users. It's the whole value chain really of the software development process that it covers. I'm going to be talking more about the back end of this because that's generally the bit that interests people in this kind of audience and me too, because I'm a techie. But it is more than that. There are things at the front end about what's a good way to capture requirements and how that feeds into the process that's also relevant. A fundamental idea here is that every time that you make a change to anything that's going to affect your system in production, whether it's the source code, the configuration of the production environment, the tailoring of the configuration of your application for a particular deployment target, whatever it might be, you're giving birth to a release candidate and from then on, the rest of the process is about evaluating that release candidate to see whether it's fit to succeed to get put into production. I've already made this point. You're not done until the software is in the hand of the users. Finish is not the point at which you throw it over the wall to the testing team. It's not the point at which the testing team throws it over the wall to the operations team. It's when it's in the hands of users and doing real work that you're done. So why? The dream that everybody's looking for is that you want the shortest possible path from a business idea to getting that software into the hands of the customer. Now, we all know that the process of software development and requirements collection and all that stuff is complex. So there's a lot of things that we need to do. We need to apply rigor and careful thought into this. Still, our goal should be to minimize the path of getting changes, useful software, into the hands of a customer. A very, very useful and important metric is the cycle time. 
If you can imagine the smallest possible change to your code, application, configuration, whatever it might be, how long would it take you to get that into the hands of users? As a consultant, I've worked in organisations where the minimum cycle time was six months. Any change would take six months to get into the hands of users. That's a ridiculous state of affairs. At Elmax, we didn't deploy all that frequently. Mostly we deployed every two weeks, on iteration cycles, but our cycle time was two hours. The minimum time in which we could get the smallest possible change to our system into the hands of the users, having been fully tested, was two hours. Mostly, software isn't like that. How do we address that? We need to start applying lean thinking. This is a theme. This is not an accident. I mentioned the importance, for me, of the scientific method. Lean, lean manufacturing and lean production and those sorts of ideas, was consciously lifted from the application of the scientific method to industrial and commercial processes. In taking lean thinking and applying it to software development, we're doing the same. We're building on the scientific method. You need to be able to form a theory, design an experiment, carry out the experiment, evaluate the results and iterate. Fundamentally, that's what the scientific method is about. Fundamentally, that's what lean is about. Fundamentally, that's what continuous delivery is all about. Most agile methods are, to be honest. It's all about feedback cycles. It's all about trying to find out where we've gone wrong; the assumption is that, as fallible human beings, we're very prone to making mistakes. The best way for us, as fallible human beings, to avoid making mistakes is to make small changes, look at those changes, evaluate them, figure out how we screwed up, fix it and carry on. It's all about establishing feedback cycles. I like to think of the software development process as a layer of onion skins of larger and larger cycle feedback loops. At the tiny cycle, if you're used to doing test driven development, then it's write a test, run it to see it fail, write the code to make it pass, see it pass, refactor, move on. That's kind of the smallest feedback cycle. That's kind of the unit-test-and-code feedback cycle. Outside of that, though, there's more to it than just the unit test. You need to specify the behavior of the system somehow. Unit tests don't always do that. They're so focused on the detail that they tend to be a little bit too close to the solution level to effectively express the business level intent of the solution sometimes. Outside of that, you want to be able to do the same kind of thing, but on a slightly grander scale: you want to be able to specify a feature of the system, assert that it's delivered effectively, and evaluate that. Outside of that, the ultimate feedback loop that I was talking about, you want to be able to have an idea in your business, get that into the hands of the users, see how they react and change and modify. This idea of interlocking feedback loops is very important and fundamental to the way that the process works. As I said at the outset, I'm going to be talking principally about that end of the process. I'm going to be talking mostly about build and release because that's mostly what we're focused on. But please don't forget the other things.
They're part of the process and part of the flow of information, and keeping those feedback loops as short as possible and as effective as possible is a continual effort on the part of practitioners of this kind of process. This is a shameless plug for my book, which I'm very proud to mention won the Jolt Excellence Award last year. I wrote it along with an ex-colleague of mine, Jez Humble. It's available in all good bookshops, including one downstairs. So, the principles of continuous delivery. If you want to achieve this sort of feedback loop, you need to create a repeatable, reliable process for releasing software. That means that you can't afford manual works of art where Joe knows precisely how to configure this particular server to get it into the right state. Because Joe might be on holiday the day that that server goes down and you need to replace it, or he might leave and get a better paid job somewhere else because you've been treating him so badly. And to get over that, you need to automate almost everything. One of the practices of continuous delivery is kind of small scale, incremental automation. As I describe the system that we developed at Elmax, it's going to look very big and complicated. It's going to look like a complex process. That's from kind of this end of the telescope. Actually, we started off very simple. Our first iteration, our first two week iteration, we built the smallest possible feature that we could think of and we delivered it into a production-like environment, and people were able to use that feature. That's an important fundamental goal. If you want to keep that feedback loop, you've got to include the business in the feedback loop as well as everybody else. You can't afford to build your continuous integration, your continuous delivery system for 10 weeks before you start delivering value. You're going to make people really nervous if you start doing that kind of thing. If you want to automate everything, if you want a repeatable, reliable process, you've got to version control almost everything. Depending on how seriously you take this, it gets to the point where it's everything; in the ideal world, who would not want to be able to be absolutely certain that every last bit and byte in their system as a whole was the one that they intended? Any one of those bits can screw you up, any one of those bits can fail your system, and you don't know which ones. So you want to be certain that it's the one that you intended, the one that you tested, the one that you evaluated fairly thoroughly through the life of this process. A very weird thing about software development is that if something's painful, if it's difficult, if it's hard, do it more often, not less often, because that's the way to make it simple. If your releases are horribly fraught, painful processes... a little sidebar: a few years ago, when I was a consultant, I worked for one customer that used to do releases at the weekends and their release process was horrible. It was complicated, it was split up across various teams, people would come in at different scheduled times through the day, and there was lots of handing over of walls of documentation about how to do things. It took them the entire weekend to do any release, and then they made a change and it took longer than the weekend, and then they were stuck, because they couldn't release their software: they had no window in which to release their software.
So if it hurts, do it more often. If you're releasing once every 12 months, start releasing once every 6 months. If you're releasing once every 6 months, start releasing it and so on. There are many successful organisations with big, complicated pieces of software that release on every commit. Every time you make any change, the system is evaluated by an automated set of tools and if it passes all of those evaluations, it's automatically deployed into production. That's not a prerequisite of continuous delivery, you can choose when to release but the philosophy is correct. At any point, if a release candidate passes through all of the evaluation stages in the deployment pipeline, it should be releaseable. You don't necessarily have to release it but you should be comfortable and confident that if you want it you can. Like good processes, I'm going to say it's good and I'm biased, but like good processes, it has lots of positive side effects, a bit like TDD. If you do TDD, well my experience has been that it drives better design. If you apply these principles, the principles of continuous delivery on a broader context to software development, you end up with better design and better processes to releasing your software. That means that it's important to be rigorous about things. It's important not to sweep things under the carpet, so focusing on quality. If something looks strange, looking to it, our system at Elmax is a highly asynchronous, highly performant trading system. Those systems are quite hard to test to set up the cases to evaluate and so on. We would have intermittent test failures. We spent a while being rubbish and not paying sufficient attention to them and just ignoring them. Every single time that we dug into them, there was an underlying cause, some of which were genuine production problems. They were highlighting real problems in the code. It wasn't just a test artifact. A few of them were just test artifacts. It's important to focus on quality and if anything looks anomalous, if anything looks wrong, if any code looks a bit more untidy than it should be, fix it. Work on it. I was watching Uncle Bob yesterday talking and he said you should always leave a code base in a better state than you entered it. I think that's a very important philosophy. This is a whole team thing for this process to work. It's not about the developers doing the right thing. It's not about the operations people doing the right thing. It's not about the business doing the right thing. It's not about the testing group doing the right thing. Everybody is involved in a release. Everybody has a role. Everybody needs to work together. Everybody needs the right sort of insight into the process and visibility of the process and so on. It's very important, as I said. It doesn't really make sense to do this in a big bang. It's an incremental gradual improvement to get to a point where it works effectively. I left Elmax about two months ago. My bet is that they've made five or ten changes since then to the process just to refine it. Because we were always doing that, that was just the nature of the organisation. We would always be looking to improve things on an iteration by iteration basis. Some of the practices of continuous delivery. An important one, build binaries once. 
If you're going to evaluate your release candidates, put them through a battery of automated tests, which is part of the process, you want to be fairly sure that the release candidate that you deploy into the production was the one that you tested. If you recompile to a particular target on deployment, maybe there's a different version of the compiler. Maybe you're using different versions of the libraries that you're linking to. It's not the same as the one that you tested. Part of the output of the process is you should build binaries once and then if those binaries are successful, those are the ones that you test in all the subsequent stages and those are the ones if they're successful that you're releasing to production. Use precisely the same mechanism to deploy into every environment. If you're deploying to the developer workstation, it should be fundamentally the same mechanisms that deploy the application as into your distributed cloud-based production environment. There are differences to those environments, but you need to cope with those differently. I'll talk more about that as we go on. Smoke test of the deployment. This is again about the feedback loops. You don't want to deploy the whole thing and then say, does it work? You want to know that each stage of the deployment works. If you're laying down the operating system, you want to just verify that that's worked before you lay down the database, before you lay down the application code, before you lay down the configuration. At each stage, just increase that confidence that everything's going to work well. That has the added benefit that if things don't work well, you know where to start looking and so it shortens your debug cycle. If anything fails, stop the line. If during the process, which I'm going to describe the deployment pipeline, if as the release candidate is flowing through anything fails, you throw that release candidate away. You don't go back and try and fix that release candidate. You go back to the head and fix it along with all the other changes and they flow through again. I'll cover that in more detail as we go through. OK, so this is the description of the Elmets continuous delivery process. This is fairly typical and this is pretty close to the process that's described in the book. So we worked as developer pairs and we would make changes locally and this is kind of a classic continuous integration cycle. When the developers were happy and the build was OK, they'd commit their changes to the team's source code repository. There'd be a build management system that was monitoring the source code control system and when it saw new versions added there, it would pull those down and it would run what we call the commit build. The commit build is the same to a 99% level as any continuous integration build. The slight difference is that when it succeeds, it stores the binaries in an artifact repository so that it can reuse those and ultimately deploy them into production if this release candidate is successful. Just like any other continuous integration process, if the build fails at the commit point, the developers are just kind of sitting there waiting and they're going to fix it there and then, it's very important for the commit build to be as fast as possible. Lots of people talk about a 90 seconds kind of being about the longest acceptable build. I can see that, it's certainly optimal at that sort of level. 
Our build on this project, as I said, it's a very big project and our build, we built everything kind of monolithically because it made the configuration management problem a bit easier. Our build was about five minutes, so it's a bit longer than that, but it does mean that you need to focus on keeping it that fast. You need to make sure that the tests that you write in the commit build are largely not touching disk, not touching real messaging systems, not deploying to real web servers, anything like that. They're just local, in-process things that will run very fast, and you can run tens of thousands of tests in that amount of time if you stick by those sorts of rules. However, you also want to be able to catch the vast majority of errors at this stage. The whole process is a branch-prediction optimization, and really what we're doing at this point is betting that if it passes this build, it's likely to pass the rest. You want it to give you a high degree of confidence that any release candidate that passes this stage is going to be successful for the rest, or at least works. There are some added tests that we'd generally add in at this level. Usually a simple smoke test of the application at some level that just says that it works correctly. If you're using something like Spring, which has the nasty property of not really knowing whether it works until you exercise it, there are some tests that will test the Spring configuration or the Guice wiring or whatever, the dependency injection, to make sure that that's correct as well. There are some static analysis tests, so we would fail the build if the test coverage went too low. We would fail if we broke any architectural rules: if we tried to access the database from the web tier or something stupid like that, the build wouldn't pass. If it all works, then these guys move on, they'll start working on something new, and the rest of the stuff that I'm going to describe is happening in the background. It's very important though to the process that they're keeping an eye, they're aware of what's going on through the rest of the pipeline. As I said, they've kind of made a gamble. They're gambling that the fact that the commit build passed means that this release candidate is going to succeed. But they might lose the gamble, and if they do lose the gamble, it's their job to drop what they're doing and fix the problem, to address the problem that they caused with the commit. One of the reasons for that is because this ends up, as you'll see, being a fairly expensive resource and it's a whole team resource, and you don't want it tied up and being broken all the time because of changes. But another important thing is the sooner they get to fixing it, the faster they're going to be at fixing it, because they're going to have the context, they're going to understand what they changed. It's going to be a relatively small number of things that they changed, and so they'll know what's impacted the problem. Worst case, they can take a step back and look at the problem offline. The next stage in the process is acceptance testing. I'm a huge fan of TDD in particular and unit testing in general. But I've worked on projects where we only did unit testing, and on the sorts of projects that I was working on, that wasn't enough. Unit tests assert that the code does what the programmer thinks the code should do. It doesn't really assert that the software does what the business thinks it ought to do.
Acceptance testing is about that. So another name for this is functional testing. I like the term acceptance testing because it focuses on what we do. If it passes this, it means it's functionally correct. It's doing the right things. This is monitoring not the version control system, but the artifact repository. This is going to be a resource. These are going to be slower running tests. They're going to be running against the whole system and they're going to take a while. If this was looking at the version control system and taking every build, it would be falling gradually further and further behind. So what it does, it leaps, it leaps frogs over. Each time the acceptance test environment becomes free, it looks into the artifact repository for the most recent successful commit build. And it will evaluate that on the assumption that all the ones previous to it, all of those changes are in there, so it's evaluating those changes. So it can jump over. That means that you lose the direct tie to who committed what. So that's a problem that you have to fix. You have to address. You have to track the collection of people that may have contributed to this failure. Modern build systems are doing that more frequently. When we started doing this, you had to roll your own and do much more of it yourself. But the modern build systems, Hudson, cruise control, Team City, those sorts of things, they're doing this much more effectively now. If the acceptance test deems this release candidate to be good, if all of the tests passed, it tags the release candidate with a tag saying you've passed your acceptance tests. I'm a huge fan of automated testing and I talked about repeatability. That's not all of it though. I think that there is no place in using people for regression testing. I think that that should be automated. It's a repeatable process. It's deadly dull and to be honest, people are useless at doing boring, repetitive, technically complicated tasks. There's not much more that I can imagine that's boring, repetitive and technically complex than manually regression testing software. Right, automate that. What people are brilliant at is exploration, pattern matching, just touchy feely things. You want people to interact with the system and you want them to be able to say the colours don't quite work on that or this doesn't line up right. Or when I press these 15 stupid keys or put 32K of input into my password field, it blows up, you want people to try and do the stupid thing. So exploratory testing and usability testing are very important facets. I think having manual testing in the process is an important step for most applications, certainly for the one that we were working on which had a significant web user interface as part of it. So we have manual testers and they would pull release candidates once they passed acceptance tests out of the artifactory repository. There was no point in them looking before then because before then maybe the system didn't even start up. It would be a waste of their time. So once it's passed acceptance tests, it's free for them to go and have a look. They would pull those down, they'd use the same deployment tools, we put a user interface on top of the deployment tools that were used on an automated basis for the acceptance test. I'm going to go into a bit more detail of those shortly. And they would use those tools to deploy it to a test environment and interact with the system. 
There was no human intervention other than they didn't have to call on somebody else, they didn't have to call on an operations person or a developer to help them deploy the application. They chose the version that they wanted, where they wanted to deploy it and press the button and it worked. That's an important facet. It changes, when you give people tools like that, it changes their relationship to the software and the way in which they do it. We saw all sorts of things where the testers would treat things differently. We had one nasty problem that we couldn't figure out where it was and they did a binary chop of release versions between versions to kind of home in on which it was. It took them ages because there were a lot of versions. This bug had crept in over a long period of time. It changes the relationship. Demonstrations, we could demonstrate code within about 40 minutes was the duration of the acceptance test bill. Within about 40 minutes we could demonstrate any feature to anybody because we could deploy it in 30 seconds to any test environment we liked and people could look at it. So the business liked this sort of behaviour as well. If they're happy that this release candidate is good, they had a feature on the deployment tool that they could mark the release candidate and that would add a different tag to the release candidate in the Artifactory Pository. You can see in the Artifactory Pository it's collecting information about the life cycle of the release candidate. What we're looking for, by the time we want the decision to release it to production, we want to look for a release candidate that has the full set of tags that's passed, automated unit testing, acceptance testing, manual testing, performance testing and so on. For us, Elmax performance was a critical thing. We were a low latency trading environment. Our turnaround, edge of our estate, a message coming in and going out again was an average turnaround was 1 millisecond. So it was important that performance was a critical part of our business. We were dealing with high frequency traders. We evaluated performance at two levels. We did component level performance testing, which is essentially the equivalent of unit testing for performance. For performance critical aspects of the system, we would write a dedicated performance test that just microbenchmarked that bit of the code. If we did something stupid and made it go slower, it would just shout at us and we would go and look and we could fix it. We also did whole system performance tests where we would start up the whole system in as production like an environment as we could afford. The performance critical bits of that were identical to our production environment, so it's kind of like a thin slice of our production environment for performance testing. And we would run whole system tests with load tests and destructive tests and all sorts of things. This was a great environment for doing all sorts of things. If we had a nasty event in production, we could replay what happened in production in this environment and see what went on and debug it and those sorts of things. Again, this is pulling release candidates out of the Artifactory Pository and tagging them when it's successful as it goes through. So it's accruing this information about the lifecycle. For us, we were controlled by the UK's financial services authority. 
So part of their regulation is to have a separation between the people that can develop the software and the people that look after it in production. So we have a staging environment in which we evaluate the production release, where we could test this. Generally, because we'd already tested the functionality and the performance of the system, what we were really looking at in the staging environment was the migration of the system. Ours was an intensely stateful system and, as part of the deployment process, we had to migrate the state, the production state, into the new version of the production system. So we would take an anonymised cut of the production data for the release that we were going to do. We would do the release, evaluate that it all looked sensible as far as we could. If that was good, then we were fairly confident that, as well as everything else that we tested, the migration of the production data was also working. If that all worked, then we would deploy the system to the production environment, if that was the candidate that we chose. The stuff that I was talking about, the FSA authorisation, this is where that was secured. Actually, it was a bit more complicated than this. There was a separate hop where you had to have three authorised people that would say, yes, this release candidate has passed all of its testing. We will migrate that into the production artifact repository, and then somebody else could release from there. That's another nice little sidebar. When we started describing this to our regulators, they were very nervous because it doesn't fit into the normal operations. By the time we finished describing it to them, they were recommending what we were doing as best practice to everybody else, because this is actually perfect for regulators. It's hard to convince them because they're not used to seeing stuff like this. But if you think about it, what's better than a fully audited system? If you think about what's happening here, we've got an automated system, an automated process from actually further up the value chain, in terms of requirements capture and so on, that feeds through, so we could see who specified a particular requirement, who prioritised it in our story management system. We could track that through because we tagged each commit with a story number, and so we could track the developers that worked on that story. We could track those commits and we could see that they had passed all of the commit tests, because it made it into the artifact repository and again that was tagged. We could see the acceptance tests that it had passed, the performance tests that it had passed, who looked at it in the manual testing environment, who looked at it in the staging environment, who'd authorised it to move into that. We've got a complete audit trail from soup to nuts of every change in the system. The other thing that I haven't really talked on here, I've talked mostly about stories because I'm a developer and that's mostly where my focus sits. It's not all that this is about. Any change to the system would go through the same process. We used to update the Java version, for example. We would update the version of Java on a regular basis, pretty much when a new version came out, we stayed with the new version. That's simple. We just put that in, checked that into the version control system. We had a manifest arrangement where each build had a manifest of the dependencies that it had. We'd say which version it depended on.
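As an illustration of that manifest arrangement, here is a hypothetical sketch of what such a per-build manifest might look like; the fields, names and version numbers are invented for the example, not the project's actual format:

```typescript
// Hypothetical shape of a per-build dependency manifest (all names/versions invented).
interface BuildManifest {
  revision: string;                      // VCS revision the build was produced from
  dependencies: Record<string, string>;  // pinned versions of everything the build relies on
}

const manifest: BuildManifest = {
  revision: "r48211",
  dependencies: {
    jdk: "1.6.0_21",            // bumping this is "just another commit":
    database: "11.2.0.1",       // the change flows through the same pipeline
    webServer: "4.0.8",
    messagingBroker: "2.3.1",
  },
};

// The pipeline treats a manifest change like any other change: rebuild the binaries once,
// rerun every stage, and throw the release candidate away if anything fails.
```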
That change would run through the deployment pipeline, and if it failed, we'd know that there was a problem with that version of Java. We did the same with the database, the same with the operating system, the same with the web server, the same with the messaging system. Any change to the system, any change to the configuration of the system, would flow through this process and be evaluated in increasingly production-like circumstances until it got to production. Right, thoughts, any questions at that point? [Audience question, inaudible.] That's a very hard thing to say. As I said at the outset, we started with kind of the bare bones, the absolute minimum. We had essentially a very, very trivially simple commit build and an acceptance test build that ran one test at the end of the first iteration. At the point when I left, and I was there for nearly five years, we were still changing it. We were still modifying it, enhancing it. So this is an investment. It's not something that you do and then forget about. You live with it and you cope with it. Keeping the acceptance tests running, keeping the performance tests running, is expensive. They do break, and when you've got a big complex system like this, it's an expensive exercise to go back and fix them sometimes. Nobody that worked on that project thought that it was money poorly spent, though, but it is a heavy investment. If I was to guess, I would say that small, single-digit percentages of the total development effort, less than that, actually, went into the tools and stuff. The acceptance testing was more than that. I would say that we probably spent 5% to 10% of the development effort on testing and keeping the tests going. Something in that order. It might be a bit more than that, to be honest. We didn't keep statistics on it. Sorry, there was another question at the back: how do you handle database configuration and migration? We did those too. There's a good book, Refactoring Databases, or something like that. Forgive me, I've forgotten the exact title, but it's a good book. It's by Ambler and Sadalage; it's very good. Effectively, what you're doing is trying to make additive changes and build up deltas. For each change, part of our commit build was a test that checked the data migration: we would test that the database looked like we expected. We had some kind of markers. If it didn't, it would fail, and the way that you fixed that test was by putting a delta patch in that did the migration. We had a test that kept us honest to make sure that we did it. We built up these migrations. At deployment time, we kept a record of the revision level of the last delta patch that we had applied. We applied all of the ones that were later than that and then updated it to the newest one. I recommend the book to you. It's quite good. The whole thing is referred to as a deployment pipeline. That's a term that I coined. The reason I called it a pipeline was not because it's kind of a linear thing. It's because I'm a technology geek and it reminded me of processor pipelining. It's a branch prediction algorithm. What you're doing, as I said, at the commit stage, you're betting on the fact that mostly, if it's passed the commit, everything's going to be okay. You can move on and you can work. These guys, they wait their five minutes for the build to pass. They move on, they're working on something new. They're keeping half an eye on what's going through the rest of the process. If the build breaks, they've lost their gamble.
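To make that database answer concrete, a minimal sketch of the delta-patch idea; the version table, the SQL and the API here are invented for illustration, not the project's real code:

```typescript
// Apply, in order, every migration delta newer than the revision recorded in the database.
// (Table names and the version-tracking scheme are invented; assumes a single-row version table.)
interface Db {
  query(sql: string, params?: unknown[]): Promise<any[]>;
  execute(sql: string, params?: unknown[]): Promise<void>;
}

interface Delta {
  version: number;
  apply(db: Db): Promise<void>;
}

const deltas: Delta[] = [
  { version: 1, apply: db => db.execute("ALTER TABLE accounts ADD COLUMN status VARCHAR(16)") },
  { version: 2, apply: db => db.execute("CREATE INDEX idx_orders_account ON orders(account_id)") },
  // ...one additive delta per schema change, kept in version control alongside the code
];

async function migrate(db: Db): Promise<void> {
  const rows = await db.query("SELECT version FROM schema_version");
  const current: number = rows.length ? rows[0].version : 0;

  const pending = deltas.filter(d => d.version > current).sort((a, b) => a.version - b.version);
  for (const delta of pending) {
    await delta.apply(db);                                              // additive change only
    await db.execute("UPDATE schema_version SET version = ?", [delta.version]);
  }
}
```

At deployment time only the deltas newer than the recorded revision run, which is the "apply everything later than that, then update it to the newest" behaviour described in the answer above.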
They've now got to drop what it is that they're doing, go and fix the acceptance test or the performance test that broke the build or revert the change or whatever it takes to make it good to keep the build working. If it passes, they've won and everybody's happy. For a branch prediction process, that's all very well, but I mentioned in passing that the acceptance test build took about 40 minutes. By the time that I left, there was something in the order of 15,000 acceptance tests, something in the order of 25,000 unit tests, and the 15,000 acceptance tests, if you ran them all serially end to end, probably would have taken more than 24 hours. I don't know, but lots of time anyway. It was a bit more complicated than that. The way that it actually worked from my slightly simplified picture was that when we got to the acceptance test stage, when a build, a release candidate made it to the Artifacture repository, the acceptance test environment, when it became free, it would look for the newest release candidate and it would deploy that to an acceptance test environment. This was a lightweight copy of the production environment. The more production-like you can afford, the better, but in our case, our production environment was kind of a cluster of 100 servers or something, and this was about five or six. We then had a whole bunch of test hosts to farm out work and to parallelise the work. An important aspect of this is kind of the isolation of the test case. I said our system was a trading exchange, and if you think about it at its root, the kind of dimensions of containment of a case for trading are really the market that you're trading in and the user's account that you're trading in, at least it was in ours. In most problem domains, there are things like that that will give you isolation in a multi-user system. In our case, what we did is that every test kind of started off by first creating a user and creating a market so that it could play in isolation from all of the test cases just that user's holdings, just that market place. We could set it up into exactly the state that we wanted. It had the amusing side benefit that one of the really efficient parts of our system was the ability to create users and market places, and we had a number of external third parties that they were testing, and they said, we think it didn't work, we created this user and it responded too fast. But it gave us the ability to run these tests in parallel. It meant that the application instance that was running in this environment over here was a little bit different in terms of its profile to the production environment because it tended to have lots and lots of markets, but that was something that we thought was acceptable, an acceptable compromise to give us the test isolation that we wanted. That's an important facet. One of the unpleasant, one of the anti-patterns that I've seen many times in functional testing and one of the reasons why many people say that functional testing is hard is because they try and maintain a consistent data set across all of functional tests. It's just not worth the effort. It's best to find a way of isolating each test case. It's a much, much simpler approach to the problem. So our test, we ended up, we got a build grid of about, when I left it was about 35 servers. It was kind of an in-house cloud and we could allocate these differently. The host of the acceptance testing would report the results back. It would collate the data back from these servers. 
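Going back to the isolation point above, a minimal sketch of the per-test setup: every test creates its own user and its own market, with aliased names so the test can keep talking about "US dollar". The admin API here is an invented stand-in, not the real system's interface:

```typescript
// Each test gets a freshly created user and market so it can run in parallel with
// every other test; names are aliased so the test can say "US dollar" while the
// system sees a unique instrument name underneath.
interface AdminApi {
  registerUser(name: string): Promise<string>;
  createInstrument(name: string): Promise<string>;
}

const aliases = new Map<string, string>();

function unique(name: string): string {
  if (!aliases.has(name)) {
    aliases.set(name, `${name}-${Date.now()}-${Math.floor(Math.random() * 1e6)}`);
  }
  return aliases.get(name)!;
}

async function setUpIsolatedCase(admin: AdminApi) {
  const user = await admin.registerUser(unique("trader"));
  const market = await admin.createInstrument(unique("US dollar"));
  return { user, market };   // the test trades only against its own holdings and its own market
}
```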
We wrote a little application that would farm out work to these servers so that it would run in parallel. This is a little animation of the application. We called it Remaro and it divided the tests up into a series of different groups. These are time travel tests. These are tests that need a dedicated environment. They can't use the shared environment because they're going to change time, and so they would affect other tests. These are parallel tests and at this stage, it looks like there are more of these than there are of those. That's just the state it happens to be in at the beginning. Most of the tests were parallel tests. Most of the tests you could run in parallel, and they're running in that group. Then the sequential tests. These were tests that we would run one at a time. They needed a particular version of the environment because they would do things like selectively fail bits of the system, to test that our system was robust and failover would work and disaster recovery would work and so on. You can see these things. Every now and again, it reconfigures itself as the test profile changes and it'll move tests around. Because when we started, this was a relatively immature discipline, we did a lot of creating our own tools for this stuff. A lot of the stuff that you see, we wrote from scratch ourselves. The situation has moved on. There's a lot of stuff in the open source community and commercial offerings that cover some of this space now. Probably if we were starting this project now, we would have used a bunch of those. Nearly all of the stuff I'm going to talk about, we wrote our own for. Another problem, as I mentioned before, is that when you want insight into the build pipeline, you want to be able to see what's going on across the board. This is what we call big feedback. The top bar is the commit build. You can see who did the last commit, the comments, the history of the last three commits over on the side, some links to some of the useful tools. There are some subsequent stages. We did some static analysis after the fact and so on. Then we've got the acceptance test build and this is the Remaro server, the thing that we just saw, the work allocation process to parallelise the tests. Then we've got some branches where we were running different versions of the application and reports from those. Then some kind of supplementary projects and some performance testing reports here. You can see there's a failure over there in the history that's just reported. This is what it looked like when things were going well, which was kind of most of the time. When things weren't quite so well, this is what it looked like. We had big monitors at the end of the work benches where the teams worked. It would be displaying this all of the time, as well as some other graphs, some other important feedback information from the application and from production. It was also available as a web application that you could look at from your development system. You could click through. This was effectively just screen scraping on top of Hudson. If you actually wanted to go and see why this test failed, you would click on there and it would go through to the Hudson instance and look at the logs, or the Remaro instance and look at the logs, or whatever else that you wanted, and you would navigate through. Again, the first version of this was really trivial. One of the guys knocked it up one Friday evening, the first version of this, just as a simple highlight.
Over time, we evolved it and added features to it as we wanted. Another problem, where you've got the acceptance test build and there are multiple people that could have contributed to a failure, is determining who and why, and whether this is an intermittent problem or whether you just introduced something. We had a woman on our team who, for a brief while, played the team's conscience, and she used to nag us to be honest and stay on top of fixing the tests. She would do this regular analysis of the breakdown and stuff. Her name was Trish, and one of the guys automated her and created Auto Trish, which did an analysis of all this information and crunched it down. This is build history across here and you can see something fairly nasty happened at this point where there's a whole bunch of tests failing, and the tests have flavours so you could say roughly what functional areas they covered. That will give you some hints as to what we're talking about. It's also got a blame thing. You can't really read it, but it's got the names of the people that could have been responsible, whose commits contributed to the failure at that point. It's also got a Trish index, which is how often this has failed in the past. This one looks maybe a bit suspect because it failed once there and then passed and then failed, so maybe that's intermittent. There are some others down here, so maybe it wasn't. Maybe it was just a breakage there and a breakage there. But you would see patterns like that and this was kind of a useful tool. Generally what would happen is that we'd have the big feedback at the end of the benches. We'd commit and we'd be working away and something would start going wrong. We'd start seeing some red on the board, and so we'd drill down and look at Auto Trish and it would start pointing the finger of blame and we'd go, oh, that was me. I'll go and find out what I did wrong and why I screwed it up. Acceptance testing. What's the right language for acceptance testing? Is it FIT? Is it JUnit? JavaScript? None of those. It's the domain language. It's the problem domain. If you're expressing the tests that you're trying to assert the behaviour of your system with in the language of the problem domain, that's the most durable language you can use. That's not going to change. If the requirement is when I register my credit card I can subsequently fund my account, that's the requirement. It doesn't actually matter whether that's delivered using a web user interface or through named pipes. Not that named pipes are used very much anymore. Whatever the underlying technology is, it doesn't matter. That's irrelevant. It's the business level feature that's important. Our system had a number of different ways of interacting with it. There was a proprietary API through which you could talk. Our own user interface talked through that. We also publicised it and you could write your own bots to trade on our system through the Elmax API. We also had a FIX API, which, for those of you not in the finance industry, is the Financial Information eXchange protocol, a common protocol for exchanging trading information. There were a number of other communications gateways to and from our system using a variety of different technologies. If we were to do the straightforward thing and write a bunch of tests that talk directly to each of those technical interfaces of the system, and then we change one of those technical interfaces, we're going to break a bunch of tests.
In order to fix those, if we've implemented things that way, we've got to go and visit each one of those test cases and fix each of them. Like most problems in software, it's better if you raise the level of abstraction. If instead we had an interface that represented the interactions that we want to perform, and the underlying guts of how we communicate that intent via the FIX API or any other protocol is hidden away from the tests, that gives us a little bit more leverage: in this case, if our interface changes and the tests break, we just fix that point, that intermediate point, and it fixes all of the tests; we've got one place to fix. That's a step forward. There's a good pattern called window driver. Effectively, you're writing a device driver for each of those points of interaction. That's true of your user interface as much as any other protocol of interaction with the system. We would write a layer of insulation between our tests and the system that abstracted the interactions, the behaviour, and tried to represent those in semantics that were as close to the problem domain as we could get. We actually took that a step further. If you start looking at that, there's a bunch of common things. I mentioned previously that the common important concepts in our application for test isolation were marketplaces and users. We wanted each test to create its own marketplace and its own user. If you had a test that created marketplace US dollar, then the next time you tried to run that test, it would try to create marketplace US dollar, and that already exists. You need to alias the name. The system underneath would create something called US dollar 123756243. The test would talk in terms of US dollar. We'd have an aliasing layer. There are concepts like that that make this easier. What this leads you in the direction of is a domain specific language for testing. This is a really important concept, because this really starts to address the problem that you will hear lots of people complain about: the difficulty of functional testing. Actually, this makes it much easier. This starts to get to a really valuable approach. You start to move away from thinking of these things as tests. What they really are are executable specifications of the behaviour of the system. There's a good book by Gojko, I'm not sure I'm pronouncing his name correctly, Gojko Adzic, called Specification by Example that covers some of this stuff. I also touch on it briefly in Continuous Delivery, but his book goes into more depth on it. This is an example of the DSL we created. Calling it a DSL is a bit fancy actually, because we ran it in a unit test. Really, it's just Java code that's kind of formulated in a way that makes it readable. But if you were somebody that understood the problem domain of trading, this would make sense to you. We would identify the channel here, the trading UI in this case, and then the operation that we wanted to perform and some parameters. We used strings as parameters because we started off with different versions of the language, different parsers for the language for the DSL. This one, we're showing the deal ticket for an instrument called Instrument. We're placing an order, which is a limit order. The detail, what that means if you're not familiar with it, doesn't really matter. It's a bid, which means I want to buy four at the price of 10. That's fairly straightforward. Then we look for a feedback message. It's kind of high-level. Anybody that understands the business proposition would understand the intent of this test.
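The talk describes these as plain Java formulated to read like the problem domain; here is a rough sketch of the same shape in TypeScript. The method names, the string-style parameters and the instrument are approximations of what is described, not the real Elmax DSL:

```typescript
// Sketch of an executable specification in the language of the trading domain.
// "tradingUi" is one channel driver; the same intent could go via FIX or the public API.
interface Channel {
  perform(operation: string, ...args: string[]): Promise<void>;
}

async function limitBidShowsConfirmation(tradingUi: Channel) {
  await tradingUi.perform("showDealTicket", "instrument: EURUSD");
  await tradingUi.perform("placeOrder", "instrument: EURUSD",
                          "type: limit", "side: bid", "quantity: 4", "price: 10");
  await tradingUi.perform("checkFeedbackMessage", "Order placed");
}
```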
Our goal wasn't to make these tests writable by the business. Actually, we did have a few instances where business people wrote this, but that wasn't our aim. We still neededówanted people with the right sort of analytical skills, the technical competence to think about these. But we didn't want it to be very code-y. We tended to frown upon loops and variables in these tests. We wanted them to be essentially just scripts like this because it made them reasonable, parsable, and so on. That's another version of the test. This is a very similar test, but in this case, it's using a different channel, the Fix API. As I said, beforehand, before each of the tests, we'd create the instrument that was a synonym for the market in which we're trading, and we'd create a user, and then we'd operate the test. Underneath, there was actually quite a lot more that we could do. Another thing that the DSL was doing is it's overriding a lot of the features. It's giving us some standard default behaviours. If we wanted to be very, very precise about the order that we were placing, we could specify more detail. We could say there's a trigger, so a point at which we want to do some other behaviour if the price in the market reaches this. There's a stop if my order's going against me. I want to take my profit or eliminate my losses, and so on. So I could be much more specific, but I would give sensible defaults as well. So if in the majority of cases that we're placing order, you were just trying to get a market into a decent shape and you weren't too bothered about the detail, you could just do the minimal parameters into the place order instruction, and it would do the right things. The DSL also hid the complexities of interacting with a fully asynchronous system. It would make things look more synchronous, which is important from a testing point of view, because if you're logging in and then you're going to try and place an order, you want to be sure that you've logged in before you start placing the order, otherwise the place order's just going to fail just because there's a race condition. So the test layer, the DSL, would stage the interactions. It would wait for the right results from the system before it moved on to the next stage, if that's what was happening, if it was doing something that was asynchronous. This is a more complicated version. This is the fixed place order, and you could specify quite a lot more parameters in that. So on. Right, sorry. And again, a lot more defaults. So at its basic, if I wanted to place an order through either fix or the public API, I could just say, place this order on this instrument for this quantity and price. That's the kind of minimum that I would need to do, and that would work. And that would work through any channel. As I was leaving, we were starting working on generalising the channel. So instead of doing what I was showing earlier, where you were saying, trading you are here, you had an annotation where you could say, this test runs against the fix API, the trading API, the proprietary API, and the administration user interface. And then you'd have the same test case, and it would run it against each of those channels, because the DSL abstracted the business operation so that place order was the same wherever you were placing it, through whichever channel you were placing it. That's a very powerful idea. I think there's a lot more to be said about approaches to acceptance testing like this. I'm running out of time very quickly. 
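A minimal sketch of that channel idea: one domain-level interface, a thin driver per protocol behind it, and a specification that can be pointed at any of them. All names here are invented for illustration:

```typescript
// Domain-level operations, independent of the transport that carries them.
interface TradingChannel {
  placeOrder(instrument: string, side: "bid" | "ask", quantity: number, price: number): Promise<void>;
  expectFeedback(message: string): Promise<void>;
}

// One thin driver per protocol; only the drivers know about FIX tags, the proprietary
// API or which buttons to press on the web UI. If a protocol changes, only its driver
// changes, and the specifications above it stay untouched.
class FixDriver implements TradingChannel {
  async placeOrder(instrument: string, side: "bid" | "ask", quantity: number, price: number) {
    // translate the intent into a FIX new-order message and send it (details omitted)
  }
  async expectFeedback(message: string) {
    // wait for the matching response, making the asynchronous flow look synchronous to the test
  }
}

// The same specification can then be run against every channel.
async function placeLimitBid(channel: TradingChannel) {
  await channel.placeOrder("EURUSD", "bid", 4, 10);
  await channel.expectFeedback("Order placed");
}
```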
I want to talk briefly about deployment, because that's another thorny problem. If you want to be able to achieve the sorts of things that we've been talking about this morning, you need to be able to automate the deployment process, and that can be complicated. We developed our own deployment tools. Again, we were rolling our own because it was a relatively immature area at the time. If we were starting now, we'd probably pick up something like Chef or Puppet to do it, but we built our own. We called it Scotty, as in, beam me up, Scotty. Effectively, the way that Scotty worked was that the Scotty server sat on top of the artifact repository, and it provided a state machine on top of the artifact repository. Each environment that ran a version of the application ran an agent, and the agent would regularly check in with the Scotty server to see if there was any work for it to do. The Scotty server would reply with instructions like start, or stop, or nothing to do right now, or deploy. It was actually more complicated than that, because of the different environments. Remember, the deployed production environment is 100-odd machines, but for a manual tester we could deploy the whole system onto one machine with our deployment tool. For a manual tester, maybe they just run it locally. They just run it on one machine, and the QA team would deploy to a single system. For acceptance testing, we wanted a bit more sophistication, so we had more machines. In production, we got lots of machines, and in performance tests, we got a few again. The way that we achieved that was that each of the places that hosted the system had a collection of roles, and the roles said what aspects of the system that host was fulfilling. On a QA machine, it was pretty much all of the roles on the one machine. In production, there was pretty much one role per node, and in acceptance testing, we kind of doubled some up: the ones that were performance critical had their own machine, as they did in the production environment, and the ones that we didn't care so much about, we doubled up for a low footprint. In production, it was actually a bit more complicated than that, because as well as the production system, we had a disaster recovery system, which was a cut-down version of production off-site in another location. So, somebody that wanted to release the system would go to the Scotty console. This is an old version, it got a bit prettier than this later, but I didn't get a screenshot before I left. This shows you a number of different release candidates that have made it through testing. These are available for release, there's a summary of the approvals each has had, and you can choose an action, so you could just say upgrade to that, and you just click on the button and it would deploy that system to the target environment. This is actually a console showing the status of the performance test environment. This is one environment, but 15 nodes, and there's a statement of the roles that each of the nodes is playing, and the health status, the last time that the Scotty agent checked in with the central server, and the current operation that the Scotty agent is performing. This is a similar picture, but for a single QA environment, so a test environment. So this has got all of the roles on one machine, and a summary of all of the health checks from that one machine in one place.
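A minimal sketch of the agent-and-server shape described for Scotty: an agent reports the roles its host plays, polls for the next instruction, and acts on it. The instruction set and API are assumptions based on the description, not the real tool:

```typescript
// Each host runs an agent that regularly checks in with the deployment server.
// The server answers with the next instruction for that host, given the roles it plays.
type Instruction =
  | { kind: "nothing" }
  | { kind: "stop" }
  | { kind: "start" }
  | { kind: "deploy"; releaseCandidate: string };  // e.g. an artifact repository coordinate

interface DeploymentServer {
  nextInstruction(host: string, roles: string[]): Promise<Instruction>;
  report(host: string, status: string): Promise<void>;
}

async function agentLoop(server: DeploymentServer, host: string, roles: string[]) {
  for (;;) {
    const instruction = await server.nextInstruction(host, roles);
    switch (instruction.kind) {
      case "deploy":
        // fetch the binaries built once at the commit stage and run the same
        // deployment scripts that are used in every other environment
        await server.report(host, `deploying ${instruction.releaseCandidate}`);
        break;
      case "start":
      case "stop":
        await server.report(host, instruction.kind);
        break;
      case "nothing":
        break;
    }
    await new Promise(resolve => setTimeout(resolve, 30_000)); // check in again shortly
  }
}
```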
The important point here is to reiterate that whether we were deploying the application on a development machine or in production, fundamentally it was the same code that was deploying it. Actually, we didn't tend to use the UI console for deploying to a development machine; we tended to call Ant targets that went through to the scripts that did the deployment, but other than that it's the same. That's all that the Scotty agent did: it just called the same Ant targets and they went through to the scripts. So that meant that by the time we released the software into production, we'd rehearsed deploying that version of the software, with that version of the release tools, in that configuration, probably 10 or 15 times before it got to production. So we know that the deployment system works, as well as knowing that the software works. We know that the configuration works as well as the software. Everything was version controlled, everything was asserted. We were releasing about every two weeks. One final point: we were running for about, we were running for 13 months before we had our first regression bug in production, live. And that was just a stupid thing. We had other kinds of bugs, we had things that we missed, things that we didn't think of, but they were the sorts of failures, failures of intent, rather than failures of execution. We kind of almost eliminated failures of execution through this process. I'm very sorry I've run out of time for questions, but I'm happy to take questions offline if anybody's interested. Thank you very much for your time.
Dave Farley is co-author of the book "Continuous Delivery" which describes the use of high levels of automation and collaboration in the delivery process to ensure high quality software and a reduction in errors and late nights. This talk introduces the ideas of Continuous Delivery as a practical everyday process, using some of the techniques and technologies from a real world project as an example.
10.5446/51011 (DOI)
Good afternoon. How you guys doing? Good afternoon. How you guys doing? Good afternoon. How you guys doing? All right, excellent. Me too. Okay, let's talk about Canvas. So in this talk, I'm going to show you HTML5 Canvas. I'm going to show you a bunch of demos and a bunch of code. So let's get started. I have a degree in mechanical engineering and I tell you that because I want you to know I love to build things. When I was a kid, I loved to take things apart and try to rebuild them, which I wasn't very good at. So when I grew up and I became an adult, I went to school for mechanical engineering and I got out of school and I went to work for Boeing in Seattle in a computer-aided design, computer-aided manufacturing group where I immediately began doing C programming. So I've never worked as a mechanical engineer a single day in my life. My whole professional career, I've been a software developer. And as I mentioned, I love to build things. And mechanical engineering is a lot of fun, but there's a lot of physical constraints when you build things as a mechanical engineer. In fact, much of your time as a mechanical engineer is spent figuring out what those constraints are and how to work around them. What I love about software development is that basically anything I can think up in my head, I can make a peer on the screen before me, given enough time and enough effort. At least that's the way things used to be when I started programming back in the late 1980s. Fast forward to the year 2000 and the web came along. And all of a sudden we went back 40 years. We went back to essentially mainframe development. And you could still create anything on screen that you could think up in your head as long as anything you thought up in your head was a form, right? Other than that, you were very restricted. So this is my view of the web developer's world for what I call the lost decade or the dark ages of software development. This picture is taken from a movie. Anybody know what it is? No? Wizard of Oz. The most watched movie ever. Fortunately now we have HTML5. And now we can once again, anything we can dream up in our heads, we can make a peer on screen. And in fact, one of the most powerful ways to do that is with HTML5 Canvas. This presentation is entirely HTML5. The slides are HTML5. My demos are inserted in the slides. I downloaded a bunch of HTML, CSS, and JavaScript from this URL right here. Now it's been about a year and a half ago since I downloaded the code. And I've been tweaking ever since. So I have a lot of things that I've added to this on my own. But there's all kinds of similar stuff out now. There's slide deck and all other kinds of stuff you can get. I just wrote a book that just got published a few weeks ago. Available on amazon.com or at find bookstores everywhere. I also have a companion website, corehtml5canvas.com. And I'm going to show you some demos from that website today. Actually, it's a local copy of the website running on my Mac. But you can go here and run featured examples from the book. And you can also run all the examples in the book. You can download all the code for the book if you want. You can also follow me on Twitter if you want. That's my handle on Twitter. So this is pretty much the first thing I wrote when I started playing around with Canvas a couple of years ago. And by the way, before we start jumping into code here, I should mention that personally, I think there's a lot of misconceptions about Canvas. 
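For reference, the clock face just described comes down to a handful of standard Canvas 2D calls; a minimal sketch, where the element id, sizes and colours are arbitrary choices, not the book's actual code:

```typescript
// Draw a simple clock face: a filled circle for the face, a hub, and two hands.
const canvas = document.getElementById("clock") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const cx = canvas.width / 2, cy = canvas.height / 2, radius = 80;

ctx.beginPath();
ctx.arc(cx, cy, radius, 0, Math.PI * 2);   // the face
ctx.fillStyle = "ivory";
ctx.fill();
ctx.stroke();

ctx.beginPath();
ctx.arc(cx, cy, 5, 0, Math.PI * 2);        // the hub in the middle
ctx.fillStyle = "black";
ctx.fill();

function hand(angle: number, length: number) {  // one hand, drawn from the centre outwards
  ctx.beginPath();
  ctx.moveTo(cx, cy);
  ctx.lineTo(cx + Math.cos(angle) * length, cy + Math.sin(angle) * length);
  ctx.stroke();
}

const now = new Date();
// Hour and minute hands (the hour hand ignores the minutes' contribution for simplicity).
hand(((now.getHours() % 12) / 12) * 2 * Math.PI - Math.PI / 2, radius * 0.5);
hand((now.getMinutes() / 60) * 2 * Math.PI - Math.PI / 2, radius * 0.8);
```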
Canvas is one of the most well supported HTML5 features there are. Canvas originated in 2004. It originally came from where? Does anybody know where Canvas came from? It's a fruit. No? Apple. Okay. So Apple originally developed Canvas, which they use in the dashboard widgets on Mac OS X. So it's the same technology. That ultimately became HTML5 Canvas. So when I started with Canvas, I remember back to the days when I started programming with X11. I don't know if any of you have ever done that before. But I did X11 programming a long time ago. And they had a clock that looked like this. So this was one of the first things I did was the clock. And I draw a filled circle. For the face of the clock, I draw another filled circle in the middle. I draw the hands and the numerals around the edge, all with graphics calls from the Canvas API, which we'll talk about here in just a few minutes. So here is the Canvas API. This is the whole thing, the whole Canvas API. There's two properties and three methods. And that's it. We have the width and the height of the Canvas. And then we have these three methods. You can get the context. You can call toDataURL to get a data URL for the bitmap that the Canvas represents. So Canvas is bitmap graphics. This is not vector graphics like SVG. There's also no DOM inside Canvas like there is inside SVG. There's also a toBlob method. So you can create a blob from a Canvas. The most important part of the API is the getContext method. This returns a graphics context. And inside that is the real Canvas API for doing 2D work. And we'll talk about the methods that are in the context and how you use them as we go through the talk. So the graphics context supports four basic pieces of functionality. First of all, you can draw and paint with it. So you can stroke and fill paths with Canvas. We'll talk about paths here in a little bit. You can also draw text on screen. Canvas just recently underwent a pretty substantial update to the specification. And the text support is greatly improved in the latest rev of the spec. Of course, none of that has been implemented in any browsers yet, but it'll be coming soon. Canvas also gives you support for images. So we can draw images. We can grab all the bits of an image. And then we can iterate over those bits and do something to them. So we can write image filters and things like that. In fact, we'll see that a little bit later in this talk. And finally, Canvas lets us do video processing. So you can use the Canvas element and the video element together to do some really interesting stuff. And we'll see some demos of that near the end of this talk. Okay, let's start with the drawing capabilities in Canvas. So here I have a simple example. Every time I click on the draw link, I'm going to draw a thousand lines. When I first put this presentation together, it's been nearly two years now, a year and a half ago. I drew a hundred lines. But now Canvas kicks ass, so I'm doing a thousand lines. By the way, you may have heard that Canvas is slow. How many people have heard that? Okay, that's true. How many people have heard that hardware accelerated Canvas rocks? Okay, that's also true. So in the old days, before we had hardware acceleration, Canvas could be pretty slow. Nowadays, we have hardware acceleration in most browsers. Chrome just came out with hardware acceleration in version 18. And it makes a huge, huge difference. There's a trend towards going to CSS to do things like games and animations. Don't do that.
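Here is a minimal sketch of the canvas-element API described above, assuming an HTML page that contains a canvas element with id "clock"; the element id, colors, and sizes are illustrative choices rather than details from the talk.

```javascript
// A minimal sketch, assuming <canvas id="clock" width="300" height="300"></canvas>.
const canvas = document.getElementById('clock');

console.log(canvas.width, canvas.height);      // the two properties

const context = canvas.getContext('2d');       // the 2D graphics context

// Draw a clock-like face: an outer filled circle and a smaller hub.
context.fillStyle = 'cornflowerblue';
context.beginPath();
context.arc(150, 150, 140, 0, Math.PI * 2);    // outer face
context.fill();

context.fillStyle = 'white';
context.beginPath();
context.arc(150, 150, 8, 0, Math.PI * 2);      // hub in the middle
context.fill();

// The canvas is a bitmap, so it can be exported:
const dataUrl = canvas.toDataURL();            // data URL for the bitmap
canvas.toBlob(blob => console.log(blob.size)); // or a Blob, asynchronously
```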
Okay, CSS got hardware acceleration first. And so it was faster than Canvas to do animations initially, but Canvas is catching up. Canvas is much more powerful than CSS for doing animations. And you can actually do exactly what you want to do without having to totally screw up the way CSS works. So every time I click on this link, I'm drawing a thousand lines. And let's just take a look at the code for this application. Here's my HTML and my CSS. So I'm going to put a shadow on my Canvas. I don't know if you noticed that. There it is. I draw a border around the edge. And then I have the Canvas element. Notice I have an identifier. We're going to use that in our JavaScript to get a hold of the Canvas and then get a hold of the context and then draw our lines. The content that's inside the Canvas element is fallback content. The browser is only going to display that if it does not support HTML5 Canvas. And here's a JavaScript. So what I'm going to do is I'm going to use GetElementById to get the Canvas. Remember that the identifier of the Canvas was just Canvas. So I get the context. And now when you click on that link, I'm going to iterate 100 times. I suppose I should update my slide here a thousand times. And I'm going to start a path with a beginPath method. And I'm going to move to a random location. I'm going to draw a line to another random location. And then I'm going to set the stroke style to a random color. And then finally I'm going to call stroke down here to actually make the line appear. So the way you do graphics in Canvas is very similar to other 2D graphic systems. You stroke a path, you define a path, and then you either fill or stroke the path or both. So what I'm doing here is I'm beginning a path, I'm defining a path, setting my stroke style, and then I'm stroking that path. If I take the call to stroke out, nothing will ever be displayed. I'll just create a bunch of paths that are never, never visible. Is that okay? Does that make sense? Here's the code for this. Is this okay? Does it make sense? Are you sure this makes sense? Because I'm going to give you a quiz. So what if I do that and I do that? Now, what's going to happen? Am I going to have any visible difference? Let me reload. So I clicked and I'm waiting. I'm still waiting. Maybe a thousand is too much. Let's go to a hundred. Two things are kind of curious here. First of all, I had to go from a thousand back to a hundred to get anything to appear. And secondly, all the lines are the same color. Why is that? Of course, you guys know, right? Let's take a look. So what I'm doing is I'm starting a single path and then I'm adding a bunch of stuff to the single path. And then when I'm done, I'm going to stroke the whole thing. Okay? I'm going to use the last stroke style that I set when I fall out of this loop. So they're all going to be drawn with the same style. Does that make sense? Okay. Okay. So we can draw lines. We can also draw circles and rectangles and curves. But here's something a little more interesting than drawing graphics primitives. So now I'm doing a transformation. When I hit the spacebar, I'll stop. If I hit the spacebar again, I go the other way. Kind of cool, huh? I noticed the first time I was giving this demo, I accidentally left my fingers on the spacebar too long and I noticed a great crowd pleaser. Isn't that awesome? Look at that. Let's see CSS do that. I don't think so. Okay. So how do I do this? Well, once again, I'm going to get the canvas element. 
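A sketch of the random-lines example along the lines described above, assuming a canvas element with id "canvas"; the randomColor helper and the drawRandomLines wrapper are made up here for illustration.

```javascript
// Assumes <canvas id="canvas" width="600" height="400"></canvas> in the page.
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');

function randomColor() {
  // Build an rgb() string from three random 0-255 components.
  return 'rgb(' + [0, 0, 0].map(() => Math.floor(Math.random() * 256)).join(',') + ')';
}

function drawRandomLines(count) {
  for (let i = 0; i < count; i++) {
    context.beginPath();                               // start a fresh path each time
    context.moveTo(Math.random() * canvas.width, Math.random() * canvas.height);
    context.lineTo(Math.random() * canvas.width, Math.random() * canvas.height);
    context.strokeStyle = randomColor();
    context.stroke();                                  // without stroke(), nothing appears
  }
}

drawRandomLines(1000);

// Moving beginPath() and stroke() outside the loop would build one huge path and
// stroke it once, so every line would get the last strokeStyle that was set.
```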
Again I named it canvas, but you can name it whatever you want. I'm going to get the context for my canvas and then I'm going to translate the coordinate system to the middle of my canvas. Now normally with canvas, the origin of the coordinate system is at the upper left-hand corner of the canvas. So what I'm going to do is I'm going to translate to the middle of the canvas and then I have a little animation here. This is my animation. I'm going to, every time I go through the animation, I'm going to clear out my canvas. I have to offset because my origin is at the middle. I have to go back up to the upper left-hand corner and erase the whole rectangle. And then I'm going to rotate the context, which rotates the entire coordinate system, and then I'm going to draw my text. I just draw my text at zero, zero. I fill it and I stroke it at zero, zero every time. Zero, zero is in the middle of the page. Is this making sense? Okay. So that was cool. Here's something a little bit cooler. So now I'm doing the same thing. I'm rotating this. Oops. Yeah, carried away. But I'm also scaling at the same time. And of course this just required one very minor change to the last code that I showed you. And that is I just add this scale method here. So inside of my animation, I'm going to clear out the whole canvas. I'm going to rotate through whatever the current angle is. I'm going to scale in both x and y directions. And then I'm going to draw the text. Nobody else leaves this talk. Okay. That's it. I'm just kidding. You can leave if you want. I may leave myself if things get too bad. Okay. All right. So here's something a little bit cooler than the last demo, which was a little bit cooler than the demo before that, which is the way things are going to go. So what I've done here is I have some icons. And I'm just drawing these icons with graphics, primitives, and canvas. But I'm actually just drawing them right over here from top to bottom. And what I'm doing is I'm transforming the coordinate system before I draw those icons. So I have kind of a sheer effect. So it kind of looks, at least to me, like these icons are floating above the canvas. And I could actually draw under the icons, which I can. So I can click on this guy and I can create a rectangle. I can click on this guy. Sorry about that. Gotta be careful with two fingers. So I can, that, I can draw a circle. I can also erase stuff. So that bottom icon there is an eraser. And so now I can erase stuff. In fact, I can erase part of what's under there. That's pretty cool, huh? So here's how I did that. Here's how I made those icons sheer and made them appear to float above the page. There's actually a little bit more to it than this. I actually have two canvases underneath that. But you don't want to know the details about that. What you want to know the details about is how I do the transform. So what we looked at in the last demo was we looked at three methods. Translate, rotate, and scale. So we have translate. We can move the coordinate system. We have rotate. We can, we can turn it. And we have scale. We can zoom in or out. Those three functions are a specialization of this transform method. This transform method takes six arguments representing indices in a matrix. And each of these values means something. To be honest, I forget exactly what they each mean, but there's a good Canvas book out there that you can buy that will tell you. And no, I don't remember everything I write. 
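A sketch of the rotating and scaling text animation, assuming a canvas with id "canvas"; requestAnimationFrame is used here as the animation driver, and save/restore keeps each frame's transform temporary, both of which are choices of this sketch rather than details confirmed by the talk.

```javascript
// Assumes <canvas id="canvas" width="600" height="400"></canvas>.
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');

context.translate(canvas.width / 2, canvas.height / 2); // move the origin to the middle
context.font = '48px sans-serif';
context.textAlign = 'center';
context.fillStyle = 'cornflowerblue';
context.strokeStyle = 'navy';

let angle = 0;
let scaleFactor = 1;

function animate() {
  // Erase everything; the origin is now the middle, so offset back to the corner.
  context.clearRect(-canvas.width / 2, -canvas.height / 2, canvas.width, canvas.height);

  context.save();                       // keep the rotation/scale temporary
  context.rotate(angle);
  context.scale(scaleFactor, scaleFactor);
  context.fillText('HTML5 Canvas', 0, 0);   // 0,0 is the middle of the canvas
  context.strokeText('HTML5 Canvas', 0, 0);
  context.restore();

  angle += 0.02;
  scaleFactor = 1 + 0.5 * Math.sin(angle);
  requestAnimationFrame(animate);
}

animate();
```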
So, anyway, you can transform with the transform method or you can transform things by doing scale, rotate, and translate. One thing you can't do with scale, rotate, and translate alone is do shear, right, if you think about it. No amount of rotating, translating, or scaling is going to produce shear, right? For that, you need the more general transform method where these six values have meaning. So, okay. Is that okay? Anybody have any questions or comments? I know I'm in Norway, but I thought I would ask anyway. No? Okay. Let's look at gradients and patterns. That's kind of cool, huh? So, sometimes I just sit at home for hours just watching this. Especially when I've had a couple glasses of wine. All right. So, here's how you do gradients. You can create a linear gradient. You can also create a radial gradient in Canvas. And then you add color stops to your gradient, okay? For zero, I'm going to start with light green, and at the end, I want it to be red. So if you go back here, you can see that it's green on the left. It's all right? I'm red on the right? It's all right? I'm colorblind, so this one's tough for me. I think green is on the left. So, gotta hate stop lights, but that's another story. Anyway, so you can have as many of these color stops as you want. This first value has to be a value between zero and one. And then the second value is the color that you want, and you're going to get a smooth transition between those colors. And then you just take that gradient and set it to the fill style of your context. So I set the fill style to the gradient. I set the stroke style to blue. I begin a path. I draw an arc that has centered here with this radius, with this start, and end angle, and I draw it clockwise. And then I stroke and fill that path. I don't know if you can tell, but the line around here is blue, and that's the stroke style. For patterns, I have an image, which is my red ball here. I'm going to create a pattern with that image, and I'm going to tell it I want that pattern to repeat in both the x and y directions. So you can say I want it to repeat in both x and y, which is just repeat. You can do repeat-x, which is just the x, repeat-y, which is just the y. I think you can have none too, which puzzles me, but I don't know, it's there. And if I'm moving left, then I'm going to set the value for the fill style to that pattern. So notice that I'm only going to show that pattern if I'm sweeping to the left, if I'm sweeping to the right, I don't show it. Okay. Is that okay? Questions, comments? No? Going once, going twice? No. Okay. I have to come up with some questions of my own. Okay. So here I have three orange balls, which makes me somewhat of an oddity. Sorry. I'm going to get some red cards for that. I know it. This is like soccer, isn't it? I thought about that yesterday. Anyway, sorry for the tangent. So I have these three orange balls animated on top of some text. And when you draw one thing on top of another in Canvas, you can tell Canvas how to compose those two things. So the default is source over. So we refer to the balls, the orange balls as the source, and the text underneath as the destination. So whatever you draw first is the destination. Whatever you draw on top is the source. So by default, we get source over, which doesn't need much explanation. We could also do destination over. Can you imagine what that's going to do? Okay, that makes sense, right? How about source atop? This one's kind of cool. Let's see. What else? Lighter? That's kind of cool. 
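A sketch of the gradient and pattern fills just described, assuming a canvas with id "canvas" and an image file named redball.png next to the page; the file name and the geometry are placeholders.

```javascript
// Assumes <canvas id="canvas" width="600" height="400"></canvas>.
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');

// Gradient-filled, blue-stroked circle: green on the left, red on the right.
const gradient = context.createLinearGradient(0, 0, canvas.width, 0);
gradient.addColorStop(0, 'lightgreen');   // offsets run from 0 to 1
gradient.addColorStop(1, 'red');

context.fillStyle = gradient;
context.strokeStyle = 'blue';
context.beginPath();
context.arc(150, 150, 100, 0, Math.PI * 2, false);
context.stroke();
context.fill();

// Repeating image pattern (assumed file name).
const ball = new Image();
ball.src = 'redball.png';
ball.onload = () => {
  // Also valid: 'repeat-x', 'repeat-y', 'no-repeat'.
  const pattern = context.createPattern(ball, 'repeat');
  context.fillStyle = pattern;
  context.fillRect(0, 0, canvas.width, canvas.height);
};
```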
Spotlight effect. Probably shouldn't show you these. Destination in, that's an interesting one, huh? There's some disagreement between browser vendors over five of these properties. So they're implemented differently in one browser than they are in another. So some of these properties you probably shouldn't use because you can't use them portably yet. There's a big disagreement between Opera and Firefox versus the WebKit guys as to exactly how composition should work. And so some of these are in limbo at the moment. So here's how we control that compositing operation. We have a global composite operation attribute of the graphics context. And this value, I don't show you where this value comes from in this code, but that's the value that I get out of this list box. I assume you don't want to see that boring code. So I'm setting that global composite operation to whatever I clicked on in the list box. And then here I'm drawing those three orange circles. adjustPosition just adjusts the position to move it a little bit in the animation. And then I begin a path. I create an arc and I stroke and fill the arc. This begin path, create a path, stroke and fill is something you'll do a lot in Canvas. Another interesting thing here is to save and restore. So a lot of times you want to make changes to the graphics context attributes like I am here with a global composite operation. But a lot of times you want those changes to be temporary. I may want to change the composite operation when somebody drops a painting on top of some text or something, but I don't want that to stick forever. I want to go back to the original operation when I'm done. So what this does is it actually takes the state of the context, pushes it onto a stack, and now you can change the context all you want. And when you do restore, it pops that context off the stack and resets the graphics context to whatever you put on the stack. In other words, whatever I do between here and here to that is temporary. Does that make sense? Okay. That's it for drawing. There's more to drawing. We can draw curves. If we have time at the end of this talk, maybe I'll show you a few more demos that play around with that stuff. So you can draw curves, you can draw rectangles, you can draw arcs, you can draw Bézier curves, quadratic, cubic and all kinds of stuff. But for now, we'll leave it at that and move on to images. So here's an image, and I can scale this image with my slider. And here's how I do that. I have a function that I call drawImage. I'm going to clear out the canvas starting at zero, zero, and the width and the height of the canvas. So that clears out everything, erases the entire canvas. And then I'm going to draw the image. So there's a drawImage method on the context. And you can call it with different numbers of arguments. Here I'm putting the pedal to the metal and using all of the arguments to actually scale the image as I draw it. So this is the image, width and height, but this is the scaled width and scaled height that I'm going to draw the image at. So this one call takes the image, which is this wide and this high and draws it in the canvas, scaled to fit the entire canvas. OK. Besides drawing and scaling images, we can also filter images. We can actually grab all the bits of an image, do something to those bits, and stick them back in the image. So we can do all kinds of cool image filters. So here's a really cool one. You've probably never seen an image filter like that.
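A sketch of compositing plus save and restore, assuming a canvas with id "canvas" that already has some text drawn on it as the destination; the drawBall helper and its positions are illustrative.

```javascript
// Assumes <canvas id="canvas"> with some text already drawn on it (the destination).
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');

function drawBall(x, y, compositeOperation) {
  context.save();                                     // push the context state
  context.globalCompositeOperation = compositeOperation;

  context.fillStyle = 'orange';
  context.strokeStyle = 'darkorange';
  context.beginPath();
  context.arc(x, y, 40, 0, Math.PI * 2, false);
  context.stroke();
  context.fill();

  context.restore();                                  // pop it back: the change was temporary
}

// The ball is the source; whatever was already on the canvas is the destination.
drawBall(100, 100, 'source-over');       // the default
drawBall(200, 100, 'destination-over');
drawBall(300, 100, 'lighter');
```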
Who knows what's going to happen if I click negative again? It's a smart crowd. Yeah, that's right. OK. So here's how I do that. Here's my negative filter. The first thing I'm going to do is get the image data from the context with the aptly named getImageData method. I'm going to tell it I want you to give me the image data starting at this location and this wide and this high, so it's going to get me a rectangle of pixels, essentially. And then I'm going to loop over that data. So inside of this guy is an object called data, which has a length. And inside of this loop, I'm going to access the image data, which is red, green, blue, red, green, blue, red, green, blue, on and on and on. And what I'm going to do here is if, sorry about that, if I'm not every fourth component. So the way this works is you get an array of integers that represent red, green, blue, red, green, blue, red, green, blue. So what I'm going to do is I'm going to hop over every fourth value and set the other values. So if it's not the fourth value, I'm sorry, I'm getting mixed up. If it's not the fourth value, I'm going to set the value to the inverse. So I leave the opacity alone. I leave the A part alone, but I modify the Rs, the Gs, and the Bs to be the inverse of what they were originally. And then down at the bottom, I put that image data back in the image. You can do all kinds of cool stuff with images. Here's something else you can do. You can do rubber band selection. So as I select this guy, I zoom in and then I can reset. And what I'm doing here is I'm using an off-screen canvas. If you use canvas a lot, you will undoubtedly use a lot of off-screen canvases. So I'm sorry, my laser pointer seems to have... Oh, never mind. So what I'm going to do is I'm going to get the image data from my canvas and then I have a variable off-screen canvas. If that's undefined, I'm going to create a canvas with a create element method. This is an invisible canvas because it's not attached to the DOM. It's just floating out there somewhere. And I'm going to set the width and the height of the off-screen canvas to the width and the height of the on-screen canvas to make sure they match. And then I'm going to clear out the off-screen canvas and I'm going to draw the image off-screen. So now I have a copy of the image off-screen and I clear the on-screen and draw from off-screen to on-screen from this size scaled to this size. Does that make sense? So I keep a copy of that image off-screen and then I just pull from that off-screen, display it on-screen and scale it along the way using draw image. Okay, so this is clipping. And clipping, as far as I'm concerned, is the Swiss Army knife of canvas. You can do all kinds of stuff with clipping. And before we had this filter, and notice that this filter applied to the entire image. I'm getting all the bits of the image, changing them and sticking them all back in. But what if I want to do this? So now I have some sunglasses. And what I've done is I've taken this image, I've taken all the bits, just like we did in that negative filter, I've processed all the bits in the image so they're a little bit darker, a little bit higher contrast so it kind of looks like you're wearing sunglasses. And I put the bits back into the image and display it in the browser. But before I display it, I set the canvas clipping region to these two circles. And when I blast the entire image modified back into the canvas, that changes only restricted to the clipping region. Does that make sense? Okay. 
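A sketch of the negative filter, assuming a canvas with id "canvas" that already has an image drawn on it.

```javascript
// Assumes <canvas id="canvas"> with an image already drawn into it.
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');

function negativeFilter() {
  const imageData = context.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;        // flat array: r, g, b, a, r, g, b, a, ...

  for (let i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];      // red
    data[i + 1] = 255 - data[i + 1];  // green
    data[i + 2] = 255 - data[i + 2];  // blue
    // data[i + 3] is the alpha component -- left alone
  }

  context.putImageData(imageData, 0, 0);  // push the modified pixels back
}

negativeFilter();   // applying it twice restores the original image
```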
If I take the call to clip out, then the whole thing would look like this. Does that make sense? But since I set the clipping region to those two circles, all that's changed is those two circles. Okay. Let's take a look at a couple of demos. So this is from my website. I removed all the identification from my website. This is a local copy of my website. So I was going to run demos right off my website. And I was showing this to my wife before I left. So you can't do that. It looks like an advertisement for your book. So I said, okay. So I took all the stuff off. So there's a local copy running on my computer. If you go to corehtml5canvas.com and you click on featured examples, that's the same thing. You can run those examples. In fact, if you have a laptop now, feel free to pull it out and try it if you want. But what I'm going to do, if I can do it, is this. Okay. All right. So let's take a look at this. So this is a paint program. Let me reload this so I get rid of that graphic. By the way, this carousel is a jQuery plugin that I think is really cool. What's it called? Don't remember. Sorry. It'll come to me by the time the talk is over. I'll remember. Carousel. jCarousel. jQuery carousel. Something like that. But anyway, here is a little paint application. And what I'm doing is I'm drawing a Bézier curve. By the way, here are those icons that we looked at earlier that were sheared in kind of a 3D business. You can grab these endpoints and control points and you can drag them around. And when you click, it finalizes that curve. Let's see. What else? We're going to draw a path that we close and fill. We can draw circles. We can draw, let's do a different color. Rectangles. We can even do this. Actually, let me show you the eraser first. So we can erase stuff. When I was working on the eraser, it wasn't working. I was doing this. I thought, that's kind of cool. So I kept it in the paint program. I call it the slinky effect. And you can also type some text in here if you want. So that's just a simple little paint program. Canvas is a bitmap. This thing right here that you're looking at that I'm drawing on is a bitmap. If I go like this to save this image, I can't because it's not an image. It's a bitmap. It's not an IMG element. The toDataURL method will return you a data URL that you can assign to the source attribute of an image object. Does that make sense? So you can turn a canvas into an image. And I can do that by doing take a snapshot. And now this is an image, not a canvas. And I can save this image to my desktop if I want because for sure anybody that draws something this cool is going to want to save it. And then if I click back to paint, I go back to the original paint. So that's a simple paint application. There's a variation of that that creates polygons. So I can create a polygon here. Let's do an eight-sided polygon filled with yellow and partially transparent. Is that yellow? Something. Anyway, now you can drag these around if you want. If you click on them, you can rotate them to a different angle. And then when you click, it sets it to whatever angle you're on. Let's see. This is a pretty cool one. So this is a magnifying glass, and as you drag this magnifying glass around, what I'm doing is I'm constantly, for every mouse move, I'm capturing the rectangle of pixels right here. And I'm drawing it back into the canvas, clipped to this circle, just like we saw before in an earlier demo. So I can change the zoom level, and I can change the size of that circle.
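A sketch in the spirit of the sunglasses/clipping technique above, assuming a canvas with id "canvas" and an img element with id "face"; the circle positions are made-up values, and drawing a darkened copy of the image through the clip is this sketch's variant, not necessarily how the book's demo is implemented.

```javascript
// Assumes <canvas id="canvas"> and <img id="face" src="..."> in the page.
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');
const image = document.getElementById('face');

function drawThroughTwoCircles(leftX, rightX, y, radius) {
  context.save();

  context.beginPath();                      // this path becomes the clipping region
  context.arc(leftX, y, radius, 0, Math.PI * 2, false);
  context.moveTo(rightX + radius, y);       // new subpath so the circles aren't connected
  context.arc(rightX, y, radius, 0, Math.PI * 2, false);
  context.clip();

  // Anything drawn now only shows up inside the two circles,
  // e.g. a darkened copy of the image:
  context.drawImage(image, 0, 0, canvas.width, canvas.height);
  context.fillStyle = 'rgba(0, 0, 0, 0.5)';
  context.fillRect(0, 0, canvas.width, canvas.height);

  context.restore();                        // discard the clipping region
}

drawThroughTwoCircles(120, 220, 140, 45);
```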
That wouldn't be possible without the clipping region. That's why I say the clipping region is kind of the Swiss Army knife. Notice these sliders up here. There's one major browser that still does not support sliders. Anybody know what it is? No? Who? You're smiling. You must know. No? Firefox. Firefox, when you tell Firefox I want input type equals range, you get a text box, text field. Every other browser on the planet gives you something that looks like a slider, but not Firefox. I don't know why they can't seem to come up with a slider, but they can't. So that leads us to another demo. One thing that, you know, when people think of canvas, they think of games or animations, which is a great use for canvas, but you can also create custom controls with canvas. This is really cool stuff, because each one of these sliders is a canvas. And inside that canvas, I'm drawing these sliders. I have them hooked up to the background so that the color changes as you change the slider. Not only that, but the color of the slider itself changes. Do you see that? In fact, the opacity, if you go all the way to here, the slider disappears, and now you got to find it. There it is. So not only can you do things like games and animation and all kinds of cool stuff with canvas, but you can also create custom controls, or I hate to say it, but custom widgets, that you can use not only inside of a canvas, but in any HTML5 application whatsoever. Here's another example of a custom control. This is an image panor. When I first started writing my book, I spent a lot of time on iStockPhoto.com getting images for my book. And I noticed they had this really cool little thing. When an image was really big, they had a little rectangle, and you can drag this little viewport around to see parts of the big image because it couldn't fit on the page. And as soon as I saw that, I thought, I have to do that with canvas. And that's what this is. As you drag this viewport in this small canvas, I adjust the image in that big canvas as you're dragging that around. This is another example of a custom control that you can create with canvas. So in this example, I have the custom control up here, the slider, and I also have this image panor. You can use this image panor with anything. This doesn't have to be a canvas with an image. You could pan anything with this. One other thing before we go back to the exciting slides is animation. I'm not going to touch on animation in this talk because I talked a lot about it yesterday when I did my game talk. But I'll just show you this example real quick. So here we have this little naked girl running across from left to right. The line represents linear time. And this is ease in. So she's going to start out slow and then speed up at the end. We can do ease out where she starts out fast and then slows at the end. Or we can do ease in out which starts out slow, speeds up, and then slows down at the end. None of these things are built into canvas itself, unlike CSS, which lets you specify I want to do ease in or ease out. But it's not really that difficult to implement that kind of stuff yourself from the primitives that you get in canvas. Okay. Any questions or comments? So far, really, I'm serious. Yeah, go ahead. So the question is, is there a way to take these custom controls and package them up and give them to someone else effectively? Yes, there is. You can do kind of an ad hoc component thing yourself, which is really the only option available today. 
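Since easing isn't built into canvas, here is a small sketch of hand-rolled easing along the lines mentioned above; these are standard easing formulas and assumed function names, not necessarily the ones used in the demo.

```javascript
// Common easing functions mapping a 0..1 time fraction to a 0..1 progress fraction.
function easeIn(t)    { return t * t; }                          // slow start
function easeOut(t)   { return 1 - (1 - t) * (1 - t); }          // slow finish
function easeInOut(t) { return t < 0.5 ? 2 * t * t : 1 - 2 * (1 - t) * (1 - t); }

// Map the elapsed fraction of the animation through an easing function to get
// the sprite's horizontal position across the canvas.
function spriteX(elapsedFraction, canvasWidth, easing) {
  return easing(elapsedFraction) * canvasWidth;
}

console.log(spriteX(0.25, 600, easeIn));     // starts out slow
console.log(spriteX(0.25, 600, easeOut));    // starts out fast
console.log(spriteX(0.25, 600, easeInOut));  // slow, fast, then slow again
```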
There is an HTML5 component specification that is underway. Has anybody heard of it? No? It's just barely started. I expect it'll be at least a year, maybe two before we hear much about it. But there is a specification underway for HTML5 to let you create custom controls and package them up and give them to someone, and then it'll be a standardized way to do that. What I've done here is just my own ad hoc way. What I do is I create a div, I create a canvas, put the canvas in the div, and I give you the div. And then you can do whatever you want with that div. But there's going to be an official way to do that soon. Anybody else? Yeah. So, performance of Canvas on mobile browsers is horrible, except for iOS5 on an iPad, which rocks because it's hardware accelerated. And that's the trend that we're seeing is that, you know, performance is horrible. On an iPhone, it's just not terribly good. One frame per second or something on animations is just absolutely unbearable. iOS5 on an iPad is a totally different story because Canvas is hardware accelerated. There you get at least 30 frames per second. So that's the trend that we're seeing. Right now, mobile Canvas is nothing to get too excited about, but I expect that will change as we see more hardware acceleration come along. And of course, we already have it in iOS5 on the iPad, which is a big deal. Okay, let's watch a movie. Okay. Isn't that great? I love that. What I'm doing here, it looks like I just have a video playing, which I do, but actually there's more to it than that. This video is only about this big originally. And what I'm doing is I'm grabbing every video frame and scaling it in real time as the video plays. And here's how I'm doing that. I have a hidden video, but a visible Canvas. You're not seeing a video at all. What you're seeing is Canvas redrawing every frame of the animation in real time. Okay. I have display: none for my video, and I don't have that for the Canvas, so I have a visible Canvas and an invisible video. And here's what I do. I have a little animation here. And what I'm going to do is if the video is ended, then I'm done. And I'm just going to set the button to play. So when you click the button, it says pause. And then when it's over, it goes back to play. If the video is not ended, I'm going to draw the video into my Canvas. Now, before, we saw drawImage where I was actually drawing an image, okay, but drawImage is not really just about images. You can draw an image. You can draw a video or, get this, you can draw another Canvas into a Canvas. And you can even draw the same Canvas into itself, which actually has some use cases, believe it or not. But anyway, that's what I'm doing here. Again, in real time, I'm grabbing every frame at 30 frames per second and showing every frame of the animation in real time as it runs. That's pretty cool. Here's how I do the controls. I just have an on-click handler for the button. I'm not sure exactly why this slide is in the presentation, but there it is. Okay, so what I did in the last demo was for every frame of the animation, I resized it, okay, because remember that not only was I grabbing every frame and displaying it in the Canvas, I was also scaling it along the way because that video is this size initially, not that size, okay? But remember that when I put the frames of this video into the Canvas, I can access the bits of that frame along the way.
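A sketch of painting video frames into a canvas, assuming the page has a hidden video element with id "video", a visible canvas with id "canvas", and a button with id "controlButton"; the element ids are assumptions, and requestAnimationFrame is this sketch's choice of frame driver.

```javascript
// Assumes a hidden <video id="video">, a visible <canvas id="canvas">,
// and a <button id="controlButton"> in the page.
const video = document.getElementById('video');
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');
const button = document.getElementById('controlButton');

function animate() {
  if (!video.ended && !video.paused) {
    // drawImage accepts an image, another canvas, or a video element;
    // here it also scales the small video frame up to fill the canvas.
    context.drawImage(video, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(animate);
  } else {
    button.textContent = 'play';
  }
}

button.onclick = () => {
  if (video.paused || video.ended) {
    video.play();
    button.textContent = 'pause';
    requestAnimationFrame(animate);
  } else {
    video.pause();
    button.textContent = 'play';
  }
};
```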
So, besides scaling, I can also do other stuff, like I can drain the color out of this, or I can flip and add the color, okay? I can even do this. Okay? I can have both videos playing at the same time, and I still have instant performance here. Originally why I did this talk, I had three videos. You can run three videos simultaneously. I've never tried four, I'm sure you can. I don't know how far you can push it, but it's pretty powerful stuff. So what I'm doing in this demo is I'm processing the video in an off-screen Canvas. So again, I'm going to create an off-screen Canvas. Here's my animation, and what I'm going to do is I'm going to draw the current frame of the video to the off-screen Canvas, and then I'm going to process the off-screen Canvas in here, and then I'm going to draw the off-screen Canvas to the on-screen Canvas. Okay? Here is that process function. So what I'm showing you on this next slide is this function right here. Is everybody okay with this? Everybody see what I'm doing? I'm grabbing each video frame, drawing it off-screen, processing the off-screen stuff, and then pushing it back on-screen. And what I'm going to show you now is the processing. And here I'm doing the same thing that we did before with the image filters. I'm getting the image data from the off-screen Canvas, which is my video frame from the video. And I'm processing the image data, and then I put the image data back in, either with the color drained out or with it flipped upside down or both, or whatever you specified with this checkboxes. Okay. So inside of here is where I actually do the processing, and here's how I go from color to black and white. So I just get the red, green, and blue components of each pixel, and then I put them together, divide them by three to average them out, and then set that average to the red, green, and blue components, and then I stick that back in, and that drains all the color out. Okay. One more thing. Let's talk about Canvas on mobile, because this is really cool. What I have here is I have the magnifying glass application that I showed you earlier running on both the iPad and my desktop. It's actually indistinguishable. The only way you can tell that it's on the iPad is because you're holding an iPad in your hands. Otherwise, you don't know. It looks exactly the same. With mobile, you have a much smaller viewport, right? Everybody knows this, but there's something that everybody doesn't know. When a mobile browser draws a web page, it doesn't just draw a little piece of the web page, right? Because if they did, everybody's tendency would be to zoom out so you can see the whole page. So that's what they do for you by default. Almost all mobile browsers do this. What they do is they draw your page into an off-screen representation of the page at a width that corresponds to what they think you would have on a desktop. For iPad, I believe it's 900 pixels. For Android, it's 800. They're guessing at a desktop size. The reason they do that is they want the CSS machinations that create your page to create something proportionally similar to what you would see on the desktop. So they draw your page into an off-screen buffer at a huge size that would exist on a desktop. Then they take that off-screen representation and scale it to fit the display so the proportions remain the same. Does that make sense? You can control that off-screen viewport if you want. 
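A sketch of the per-frame, off-screen processing just described, with a grayscale (color-draining) filter that averages the red, green, and blue components; the element ids and the use of requestAnimationFrame are assumptions of this sketch.

```javascript
// Assumes <video id="video"> and a visible <canvas id="canvas"> in the page.
const video = document.getElementById('video');
const onscreen = document.getElementById('canvas');
const onscreenContext = onscreen.getContext('2d');

const offscreen = document.createElement('canvas');   // invisible: not attached to the DOM
offscreen.width = onscreen.width;
offscreen.height = onscreen.height;
const offscreenContext = offscreen.getContext('2d');

function removeColor(context, width, height) {
  const imageData = context.getImageData(0, 0, width, height);
  const data = imageData.data;
  for (let i = 0; i < data.length; i += 4) {
    const average = (data[i] + data[i + 1] + data[i + 2]) / 3;
    data[i] = data[i + 1] = data[i + 2] = average;     // gray = averaged r, g, b
  }
  context.putImageData(imageData, 0, 0);
}

function nextFrame() {
  if (video.ended || video.paused) return;
  offscreenContext.drawImage(video, 0, 0, offscreen.width, offscreen.height);  // frame off-screen
  removeColor(offscreenContext, offscreen.width, offscreen.height);            // process it
  onscreenContext.drawImage(offscreen, 0, 0);                                  // copy it on-screen
  requestAnimationFrame(nextFrame);
}

video.addEventListener('play', nextFrame);
```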
With a meta tag whose name is Viewport, you can tell the browser, I want that off-screen viewport instead of 900 pixels on iPad to just be 480. And you might do that if you had written an iPhone app that you want to run on an iPad and you want that off-screen bitmap to be 480 so your application scales to fill the entire page on the iPad. If you're taking a desktop application and you're going to run that on mobile, you probably want to do this which says use whatever the width of the device is for that off-screen viewport and then copy back. You can also set things like the initial scale and the maximum scale and user scalable which stops people from scrolling or pinching and zooming on the iPad. Media queries are also very important, especially with Canvas. Here I have the magnifying glass running in portrait mode instead of landscape. What you saw a couple frames ago, this is in landscape mode. Here's what it looks like in portrait. So I have to do some stuff differently when I'm in portrait mode and the way I do that is with media queries. So here I'm detecting that I'm on an iPad and I have some CSS that's specific to the iPad itself. If you're on the iOS 5, you can also create applications or icons for your applications. Here I have the magnifying glass and the paint application. What I've done is I've just ran these from my website. So if I go back here and I go here and here and... Sorry. Now I get the magnifying glass. If you access that URL on an iPad and then in the iPad you say add to home screen, it'll add this as an app on your home screen. So now you have a little application and you click on the app, it's going to bring up Safari. But of course it has all the Safari Chrome. You can get rid of all that Chrome too. Here's how you set an icon for your app on the iPad or iPod or iPhone. Just set a link that's Apple Touch icon pre-composed. It's really picky about these sizes. You have to have the exact size. In this case I'm 72 by 72, but there's different sizes for different devices. You have to get them exactly, otherwise it won't display anything at all. So if this size was 72 by 71, I'd be out of luck. Luckily I know the magic number is 72. Besides an icon you can also create a splash screen. I created a splash screen for the magnify application that looks like this. While the application is loading, the iPad shows this splash screen and then when it's done, it shows the application itself. Here's how you do that. This is ridiculously simple, right? Just one line of HTML. I was going to show you a couple of demos today on my iPad, which I brought. I wanted to show you these two applications on the iPad, but I forgot to bring my connector. So I can't do that unless I hold the iPad up and then you'll be the only person that can see it. So I'm not going to do that. Any questions or comments? Audio? Yes, you can play audio with HTML5, but audio really sucks in HTML5, to be perfectly honest with you. Audio is a mess. You'll get some sounds on some browsers and sometimes sound will just disappear for a while. Sometimes it's just intermittent, sometimes it goes away altogether, sometimes it runs perfectly. It depends on the browser, the time of day, what the weather is outside. I talked to a Google engineer about this and he says audio is just notoriously difficult to do. So every browser vendor has a lot of trouble with audio. So you might look for a third party solution for audio if you want to do a game or something with Canvas. Anybody else? Question or comment? 
I think we set a record in here for questions, didn't we? We had like five today. That's great. Thank you very much guys, I appreciate it.
Before the web, software developers implemented what we now refer to as desktop applications, using powerful graphics APIs that gave them the ability to program pretty much anything they could imagine. Then along came browsers with virtually no standard graphics support at all. Enter boring web applications, and dull work for developers. But now, with HTML5 Canvas, developers have a powerful graphics API that lets them develop mind-blowing applications. Now you can implement desktop-like applications that run in a browser. Tonight, we'll see how. This talk is a demo-fueled, fast-paced introduction to HTML5 Canvas. You'll get an overview of the Canvas API, and see how you can use it to draw, manipulate images, and implement sprite-based animation. You will get a feel for what you can do with this powerful API, and you'll get a basic understanding of how to harness that power.
10.5446/51014 (DOI)
How am I on? I'm on. Great. Thank you for coming to my session. I have a couple questions for you to help me kind of gauge where I should put emphasis in this talk. How many of you are working on large programs and using Agile now? So quite a few of you. How many can I get a range of the number of teams that you're trying to coordinate? Is it like how many people are working with 10 teams? Greater than 10? 20? 5? 3? OK, so you're working with multiple teams, but it's not huge yet. OK. Does it seem like your products are growing so they're going to become bigger programs? So in some cases, things are so you're here to see what's ahead of you so you can avoid some traps, perhaps. OK, great. I want to start by just talking about the notion about what a team really is. We talk a lot about teams and people have different definitions for that. I mean, I say the word team, and then when I ask people to describe things, sometimes I hear about a functional group. Sometimes I hear about 100 people. Sometimes I hear about 5 to 10 people whose work actually bears no interdependency. So I want to just so that we're talking about the same thing, I'm going to share what my definition is. Because when we are looking at agile, typically we have a particular way of thinking about teams. And we're very, very good at delivering, using agile principles and practices when we have just one or two teams. We know how to do that. We're pretty good at that. But when we get to these larger, larger programs that require many teams working together to coordinate things across hardware, across software, across many, many components, then we come into this notion of having to work across context boundaries and sometimes across organizational boundaries to make things work. So we can get the benefit of the team effect, even when we're looking at large, large projects. But we have to think about it a little differently. So to clarify, when I talk about a team, I am talking about a goal-oriented social unit. And they have typical characteristics. They have a compelling work goal. They have mutual responsibility and accountability. They share an approach to work. Not that they have a lockstep process that everybody has to follow, but they share some general approaches. So they share some commitments around how we're going to be working together. They share some commitments about definitions of done. They share some commitments around where we're going to be using certain practices. Those don't have to be identical for every team, but you do have to make sure that things like definition of done work together, so that those are at least congruent across teams. Because if those are different across teams, you're going to run into all sorts of problems. Teams generally have complementary skills so that everybody together has the skills that are needed to produce a product. Their work is interdependent on the day-to-day level so that they all need to work together and commit commitments to each other to get their work done. And they have some history together, because no group of people is a team on one day. No group of people gets to be a team without having spent some time working together and talking together and having conflicts together and figuring out how to work those out. Generally, teams are pretty small. So what is the size of teams that you guys are working on? Are they like three people? How many people are working on three people teams? 
Yeah, in my experience, that's kind of the minimum to have any sense of teaminess. If it gets any smaller than that, there's no real sense of teaminess or backup or that we really have a product that requires a lot of interdependency. How many people are working on teams of five? OK, seven. 10. Bigger than 10. Nobody bigger than 10, that's good. Oh, you are bigger than 10. OK, what happens when you have a group of 10 is that the coordination overhead around communication and around just getting your work to work well tends to begin to overwhelm the added capacity of the number of people. So you have one line going this way and another line going this way. And at 10, it starts to cross. And you start to lose that theoretical capacity of additional people because you have so many interactions to coordinate and so many communication paths to manage. There is research that says the ideal number is five because you're getting the benefits of the additional knowledge and the additional mind power, but you're not overwhelmed by the need to coordinate processes and coordinate communication paths. You know, seven plus or minus two is generally a good guideline for a team. But once you get over 10, you start losing some of that ability to really be quick about your work because you have to start having extra processes involved. And very often, if the work is that big, there's some natural dividing line and the group will naturally break down into subgroups, starting around 10. If you get it much bigger than 10, it almost always happens. So that's something to look at when you're thinking about teams and how you're going to put teams together. I just looked at my timer and it says I have five minutes and 32 seconds left. That can't be correct, Kenneth. I've got a little more time than that. There's a little production problem here. All right, reset. OK? It's possible to use teams as a building block for much larger systems. But the coordination mechanisms need to be somewhat different. Teams offer us a lot of benefits around flexibility, around their ability to learn, around the engagement of people working in the team. Because most people find that quite enjoyable. Right? So it makes their work better. They get a better sense of the whole product rather than just I have my little job here. And people tend to take more responsibility when they are making commitments to their peers. There's also research that shows that people are much less likely to fail to meet a commitment they have made to their peers than they are with a commitment made to their manager. Is that interesting or what? Right, so we often think about people making commitments to managers, but really it's more powerful for people to make commitments to their peers. They're more likely to follow through on them. So it really does have an impact on responsibility. So when we have a team, there's coordination within the team. But when we have many, many teams, we start to get issues around coordinating across those teams. And some of those are the technical integrity of the system, because if you have five teams working on the same code base, you can have some issues around technical integrity, both at the product level and the component level. If you're working across many teams, the things have to come together so that you can create a coherent feature at the same time. So you have to be working to coordinate the way the work is distributed to different teams. 
And you have to be looking at how you take the work of the teams and then bring it back up so it works together in the end when you actually deploy. That's almost always the rub when you're trying on these big, big programs is the coordination and the integration of the work of many teams at the end of the project, which how many of you have ever spent time in integration hell? Yes, I have been there, but somehow I got out. So that's one thing we also want to look at avoiding when we're looking at big programs that require multiple teams. The traditional way to organize many groups of people is hierarchy. How many of you work in a hierarchy of some sort? Yeah, well, I don't think hierarchies are going to completely go away. But the problem with trying to coordinate across multiple teams with hierarchy is that it puts a lot of burden on a single point, often a project manager, or it constrains the communication to go up a chain and then come back down a chain, which, I don't know, to me, that doesn't sound faster flexible. Anybody work in an organization where that's the communication pattern when you have to coordinate? It goes up the chain and then comes back down? Yeah, not fast, not fun, not flexible. So we need to look at some other models for how to coordinate. Unfortunately, bureaucratic hierarchy is the predominant model. And it's the one people have known, it's the one people have lived with, so it's the one people go to first when they think about how do we coordinate. If you've read much of the scrum literature, you've probably heard the term scrum of scrums. How many people have heard that term? Yes, scrum of scrums. Well, it's an interesting term. This is a picture from one of Mike Cohen's talks on scrum of scrums. And if you look at the general shape of this, what does it remind you of? Hierarchy. Yes, it does. It just sort of looks like a hierarchy with a little different funny name. The problem I see with scrum of scrums is that it is putting a lot of demand on this kind of hierarchy of we're going to take one person from each team and put them together. And that would be a scrum and scrum of scrums. And then we'll take one person from that and have them coordinate with another. That's asking them to do a lot of work. That's asking them to look at the technical coordination, the work coordination, the integration coordination, and look at impediments. So that's a lot of work to do. And typically, this model in my experience breaks down after you have maybe four or five teams. Does that match your experience? Anybody have experience? Yeah, so it begins to break down when you have something on me as rattling. What is it? Can you guys hear that? My other earring. Oh, gosh. Well, it's going to come off. Who has the job of reminding me about my earrings? You do. Great. Thank you. So there's another model of organizing, which looks more like a network. When you have a network, you are not so dependent on having information flow up and then flow down. You have information flowing across. You have information flowing laterally. And people are better able to coordinate locally because there's very often redundancy in a network. It sounds inefficient if you're used to thinking about hierarchy, but it's actually much more flexible and it's much more quick in disseminating information and allowing people to work at a pace that makes sense locally, but then rolls up to a coordinated effort. So any questions so far? Yes? Yeah. How are you grouping those kinds of things? 
Because here you have probably different members from different strategies having meetings together or something. So the question is how are you grouping them within the network? I'm grouping them on the functionality or the same kind of jobs they're working on. How do you get these teams together? So each individual team or the coordinating pieces? Can you hold for the answer? Because that's kind of the rest of the talk. OK. So thank you for asking the question. And this reminds me that if you have a question, hold it until it occurs to you. And then can you do that here? And then ask a question. Does that work? We'll find out. We'll find out if it works. OK. So some principles for figuring out just the answer to just the question you asked. How do you figure out who goes on which team and how does the coordination work? So I think about this in terms of a handful of guidelines and a handful of practices. So guidelines for how you figure out which teams work together or how you build the lateral integrating mechanism and some practices, both social and technical, that enable you to coordinate and integrate without hierarchy. Because once we get in hierarchy, we begin to lose the benefit of the team effect. And then we're just back into the old pyramid where information goes up and down. And there's it slows things down. So the principles. Manage dependencies as much as possible in the backlog. One of the things that I see in a lot of organizations is that there is an absence of a certain level of planning. They might have the vision up here about where the product's going. And then they have a good level of planning on the iteration level. So within a release, the iteration, the folks who are working in iterations know what they're supposed to do. But there's a middle level of planning that's missing. And when that middle level of planning is missing, it pushes the coordination burden to the lowest level in the organization and creates a huge amount of churn. As the teams who are the agile teams are trying to figure out, well, who has that part of that story? And do we split this story? And you work on that part of it. And we work on this part of it. And it rolls back up somehow. That level of coordination pushed so low in the organization is a recipe for churn. So what we want to do is manage as much of that in the backlog as we can and account for it in the release. So we are not forcing that coordination to the teams. This may mean that you don't do things in priority from one to a billion. It may mean that you have to look at what makes sense to group together and what are the dependencies for delivering something and how can we order our backlog so that we are not forcing a lot of coordination overhead and we're not forcing a lot of dependency within teams. So to the extent that you can, manage dependencies in the backlog. Aim for long, live, cross-functional teams. As I said earlier, teams do need to have some time together to learn how to be a team, to learn how to work together, to learn how to learn together, to learn how to improve together and really get the benefit of the team effect. Many companies, at least in the US, shift their teams every few weeks to every few months. How long do you stay on teams here? Does it weeks, months, a few months, a few months? Anybody on teams for like a half a year? Yes, how about a year? So your teams are staying together. Great, that's excellent. That makes it much easier to have some continuity and learn how to work together. 
When you're putting together teams, go as far down the technology stack as possible, which means that you think vertically through your architecture and pull together the people in a vertical slice rather than on a component team level. How many of you are on vertical teams where you have people across the architecture? Yeah, and some of you are still on component teams? Yeah, well, sometimes you have to have component teams. I have one group I worked with where, if they had gone the whole way through their vertical slice, the team would have been 40 people. Because, yeah, it's sort of astonishing, isn't it? Because to get anything out the door, they were touching many, many, many different systems and components. Now, I think that's an architecture problem and something they need to be working on, but they had acquired company after company after company after company and not really done anything to rationalize that. But so they couldn't go all the way down their stack. It was just not practical. But usually you can go at least part way down your technology stack, and that enables you to move much more quickly and have things that are done and potentially shippable. Organize within context boundaries rather than component boundaries wherever possible. So a context boundary might be around a feature or it might be around a subsystem, but it's usually not around a component. So one of the clues that you're crossing a context boundary is when people use different language to talk about the system they're working on. So if people are talking about sales type terms, that's a clue that they're in one context, and another group might be talking about inventory. So when the language is different, that's a clue that it's a different context. When people use the same words but have different meanings for those words, that's also a clue that you're crossing a context boundary. So when I worked in financial services about a billion years ago, actually it was just when they were starting to come up with these crazy derivative things, and we didn't know how to price them then, and it was like, this is going to be trouble. It was trouble. The group I worked in looked at the actual stocks and bonds and investment vehicles. And when we referred to price, we meant the price of an individual security. That was our context. But we handed off information to a group that worked on the mutual fund side, and when they talked about price, they meant the price of a share of a mutual fund. So different meanings for language are also a clue that you've crossed a context boundary. So that's some of the ways you look at that. When you do cross context, make the communication explicit. Have people sit down and figure out what are the things that we need to be sure that we are communicating clearly about, and avoid late learning. Late learning is the thing that gets us in trouble on big programs. It always gets us in trouble on big programs when we push learning off to the end. That's where integration hell comes in: when we have pushed our learning as late as possible. And remember the principles of the Agile Manifesto, and I'm just going to highlight a couple of them, which relate to being able to deliver software frequently. Because very often when people get into these large programs, they believe that they can no longer deliver software frequently. You can if you have the right practices in place.
Having people work together, so rather than having these hierarchies and boundaries that get in the way of communication, keep people working together. Build projects around motivated individuals or teams, so keep the teams in place, and really pay attention to the technical excellence and good design. Because that's another thing that often gets lost when you have big programs. That's one of the huge risks of trying to coordinate work together. One of the huge risks of trying to coordinate work across multiple teams as you lose the technical integrity and coherence of the product you're working on. So a handful of technical practices. Scrum-o-scrums is on the next slide, so I'll go back to the one that I'm talking about now. Continuous integration within context. How many people are doing continuous integration now? Yeah, continuous integration is difficult. It's difficult within a single context, and it's really overwhelming when you get to a cross-context situation where you're trying to build across multiple contexts and multiple components. Like the client who had the 40 different component areas that they would have to be, or subsystems that they would have to be integrating. They were trying to do continuous integration across those 40 subsystems, and it was an overwhelming problem. So they were spending a lot of time on that problem when you can really integrate a cross-context at a less frequent base. You may want to get to continuous integration across context eventually, but I would work my way there rather than having that as a goal from the outset. So what you need to do is look at what are the critical convergent points where we really need to make sure these different contexts are working together, that are talking together, and they're passing the information together correctly, but very often not quite as frequently. So you choose an interval that you're going to integrate across context that makes sense where people can deal with the feedback and you are learning in soon enough, not too late, soon enough, but not so that people are stumbling over the continuous integration issue and not actually making progress on writing software. So you choose an interval. It may be weekly. It may be every couple weeks. When it gets to be monthly, I start to worry. It has to be something that fits in with your rhythm of delivery that gives you the information in a timely fashion so you can deal with it, but not get you stuck with the continuous integration across multiple contexts. One of the things that has to happen across context boundaries is that people need to agree about what needs to be tested. What are those critical points that we really need to integrate on? And it's going to be different depending on what people are working on, but you need to get people from each side of the context together to think about what are the unit tests that we need to write to ensure that our systems are talking to each other, that we're not broken. And they need to write those tests together. This is not something that gets handed off to some continuous integration group somewhere in some other place because that removes the accountability. It removes the fast feedback. It puts the responsibility on someone else and slows down learning and feedback. So people from each side of that context, each context that you're integrating across, need to get together, identify which unit tests you're going to do, and write those together. 
I think it's really critical to have architectural standards when you're working in a large program. This does not mean that you have to have some group of architects sitting over in some place telling people how they must do things and then being upset when they find out that people didn't do it that way because it couldn't be done that way. Has that ever happened to anybody? That used to happen to me all the time, that the architects would come up with some wonderful vision of how things should be done. But given the system we were working in, because we were working with some legacy code, we couldn't always do it. And then they'd be very upset with us. They'd be very upset that we didn't do their beautiful design in the way they wanted it. We tried, but we couldn't always do it. So I'm not talking about having some group outside there that sets these wonderful standards. I'm talking about having standards that will help people do good architecture. So the goal isn't to have good architectural documentation somewhere. The goal is to have a good architecture. And that comes from having some standards and patterns that people can follow throughout the company. But they have to reflect the reality of where you are living and they have to reflect the reality of where you want to go. They can't be from some tower somewhere. So coding standards are also critical. How many of you have some sort of coding standards in your? Yeah. Every once in a while I run into someone who says, coding standards, that can't be agile. We must do what we want. And I think there are things that do depend on local context. But if you really want to have the ability to integrate, then having some standard about we are writing to certain patterns or we use certain things in certain ways are really, very, very helpful. And I think the more that people who are using those standards are involved in creating those standards, the more likely they are to be followed. If you have standards that nobody sticks to, you should stick it to the standard and get rid of it. Find something more realistic that's going to work for people. That's really critical to having them. I think technical reviews are really critical when you're integrating across large projects, large programs. Technical reviews have sort of an old fashioned feel to them. Some people think, oh, those aren't modern. But what they do is they increase the general level of understanding about what the entire code base is. They increase the general level of understanding about what the standards are and how those standards are implemented. So they're a great way for people to learn. They're a great way for people to find errors soon rather than late. They don't have to be heavyweight. They don't have to be the inspection program. Anybody here have lived through an inspection program? Yeah, where you have many, many forms and you gather metrics. That's not what I'm talking about. Lightweight technical reviews where you're really looking for learning and looking for people to be realistic in applying standards across multiple programs. So those are those technical practices. Any questions about technical practices? Anyone think of something else that would be a really valuable technical practice or an issue that's not covered by these practices when you think about working across a large program? No? OK. Social practices. Scrum of scrums can work in the small. So you don't necessarily get rid of scrum of scrums, but you might focus them differently. 
Rather than putting the whole coordination burden on them, you may have them just look at impediments, cross system impediments, and raising those up. Integrating teams, which are not necessarily permanent teams that form forever, it's sort of like you don't want a DevOps group that's there forever. You want it to reflect the existing and the current issues. So these are not static teams. These are ad hoc teams built of the people who are currently working across context and need to do that integration. Having the decision boundaries be clear is critical when you're working on these large programs, because there are many decisions that can be made locally and should be made locally. But there are decisions that are going to affect the larger program, and that needs to be clear which categories of decisions those are. This doesn't mean that you have a long list or a 1,000 page matrix saying who has responsibility for each decision, but it's helpful to know what the categories are and know which decisions belong to the people on a team, which decisions are shared between the people on a team and one of the integrating teams, or one of the other integrating mechanisms, which ones are shared with management, and which ones belong specifically to other integrating groups within the program. So making sure those boundaries are clear is one of the things that enables groups to work as a network and to work effectively in teams without having that chain of command where everything has to go up and come back down, which is slow, slow, slow. Component shepherds is a role that I think is critical when you start scaling, because when you have multiple teams working on the same code base, there are certain risks involved. And a component shepherd is someone who will work to ensure the integrity of that particular component. So that doesn't mean they're the only one who works on it. It doesn't mean that they check every check-in and then fix other people's code. Has that ever happened? Anyone ever had that happen that you had some guru who was, I hate that. I hated that when that used to happen, because you don't learn. You don't learn anything. So that's not the job of the component shepherd is to be the policeman. It's to raise the general level of understanding, make sure that people are following good practices, and help people learn. So they spend some of their time checking on the check-ins. They spend some of their time looking at the integration. They spend some of their time working on standards. And they spend some of their time working with teams. So whatever team is working on a component, the component shepherd may spend part of their team, part of their time pairing with people on that team, just to make sure that they're working in such a way that they are not doing something that's going to degrade the component, or make it more difficult for someone else to work on the component. So they carry that cross-functional, that cross-team responsibility for making sure that the components doesn't degrade as multiple teams work on it. If you get big enough, you may want to have a technical council. And the technical council looks at the overall integrity of the product. And you may have a product council that looks at the overall integrity of the product. So you've got technical integrity, you've got product integrity, which are two different issues. Technical integrity is looking at, are our components still working together? Are we avoiding technical drift on these? 
Are we avoiding building up a lot of cruft in these? And the product integrity looks at, are we making sure that the parts of our product are contained together and that it's a consistent experience for people so they're not confused, we're not using different language in a way that's confusing, we're not changing interfaces on people. So that's looking at the integrity of the, across the product. So how might this look? Yeah, scrum of scrums kind of looks like this. You take one person from each team and they coordinate. Some people say this should always be the scrum master. Maybe, but maybe it needs to be someone who is really, has the knowledge about the technical issues that are going on. So this might work if you've got a small group of teams. But in larger programs where you're working across context, you need to be forming the teams around the context and then looking to create an integrating team across those. So I would look for, depending on the size of the team and the issues, one or two people at each one of those context boundaries who are going to form a little subset, a little sub team, that deals with those issues around what are the unit tests we have to be looking at, what frequency should we be integrating, how are we going to handle those integration problems? So it's a little Venn diagram sort of thing here where you look at those intersections. Does this seem like you could do this where you work? Does this seem realistic? Yeah. So this takes care of that, this takes care of that integration across contexts. And you can put together as many of those as you need to to make sure that within these various groups, we're talking together. It's not a permanent team. It may actually shift from one iteration to another, depending what the needs are and who has the most knowledge, the most relevant knowledge to make sure that integration works. So this is not a permanent, the integration teams here are not permanent. They change based on the needs of what you're integrating across those contexts. When you have a bigger team and you're touching shared services, this is when you bring in component shepherds. And component shepherds, as I said, are really making sure that those components stay strong even while there are multiple teams working on them. Questions about the team? Does anyone have questions about component shepherds and how they work? Anybody have component shepherds where you? Yeah. Do you want to say anything about how that works? We have teams, shepherd teams, not single persons. Because we didn't want the developer to feel like they didn't seem to have anything to do with that. They're like, they seem to have anything to do with it. OK. For those of you who couldn't hear, he says that they actually have shepherd teams, which can make a lot of sense. And particularly if you have a lot of people, it's too big a burden to put on one person. And it avoids the tendency to, I'm trying to edit myself here for just a minute, it avoids the problem of someone acting like a policeman. I'm the component police when you have a team that sort of softens that. So most of the time what I find is that if you have a program of any size, you need both component shepherds and integrating teams. And those folks probably have a fair amount of communication together. So you're looking at how do we talk across contexts, how do we make sure that we're not breaking our components by having the shepherds in place. 
So the tech council is typically made up out of the integrating team members, component shepherds, and test experts. So this implies that there is some shifting membership here. This is not a group that has membership that's going to stay in place for a long time. It means on one level that they're not going to function exactly like a team. What it does mean is that there's going to be more knowledge sharing across groups. It means that that sort of information about technical integrity across the product isn't built just into one small group of heads. This is another one of these things that may look like it's increasing redundancy, which it is, which is a good thing, which is one of the characteristics that allows in that work to function, that you have redundancy built in. And by having this not be a permanent membership, you allow that knowledge to become redundant within the system. So it's the people who are on the integrating teams across contexts, and that membership may shift. The component shepherds, if you have people who are responsible for enterprise architecture, they may be on this team. They should be on this team, participating at least, not dictating but participating, to raise the general level of architectural practice, and then the test folks who are looking at testing across your context boundaries. So when I say that, some people think, oh, well, we thought natural testing should happen in the iteration, and it should. You still need to have testing happen within each one of those teams. So they're getting to some definition of done. But when you're in a large program, in my experience, it's very difficult to do testing across the entire product without people who are paying attention to that as part of their overall role. Does that match your experience? Yeah. Yeah. So this is not your old testing group that gets things thrown over the wall to them at the end, but it is a group that specializes in testing and looking at the issues across many, many boundaries throughout the life of a product. And they are continuously running tests and designing tests and creating test strategy to address the issues of this. Are there questions about how this might work? You have questions? Yeah. How do the subteams, are they handpicked? Or how do you construct the subteams of the teams finding the directions themselves? Or how do you manage those? So the question is, how do these subteams formed? Are they handpicked or do they self-select? Typically, that comes up when you're doing planning, because when you're in a release planning session, you begin to get some sense of what the coordination between the components is going to be, what between context is going to be. So you get some sense looking forward a short time about who's going to be involved in that. And then when you're doing more detailed planning for a set of iterations, that's when I would be making sure that we have the right people in place. You aim to have them be together for a series of months if that makes sense, but sometimes it doesn't. So that's something I look at during planning. Does that answer your question? Yeah. And sometimes they self-select. Sometimes it's very obvious who the people need to be. But it's not something that typically comes down from on high. It's something that is done within context, in the context of what the work is that we're looking at at a particular time. Other questions about? Yeah. OK. What kind of role do you have when you're doing planning across teams? 
Because Scrum Master is concentrating and planning with him, his team, and the teams are planning themselves. So that's. Big picture. Yeah. So the question was, well, what kind of planning do you have across multiple teams when you have this sense? Because you have this big program. Because in a smaller effort, the Scrum Master is helping with the programming. So this gets to that question of the missing level of planning that I talked about earlier. You have this big vision of the product. And then you have the mechanism set up for people to do planning at an iteration level with their Scrum Master and their product owner and planning on a team, on a day level with the people in the team. So the missing layer there tends to be product owners. Product owners who are looking to plan across a release, and it might be multiple product owners on a big system or a big product that's often a group of people. And as they cross features or they cross contexts, they need to be involved together in that middle level of planning at the release level or even at the roadmap level. So if you think we have several levels of planning, we have the day-to-day planning. And sometimes I have a picture of this that looks rather like an onion, where you have the little inside, where you have the day-to-day planning, and then you have the iteration planning and a little bigger circle. And then you have a release planning, which is where the coordination starts to really come in. That's the group that's looking at how do we manage dependencies in the backlog. Then you have a level of roadmap planning, which says for the next X number of quarters, this is where we're going. And then you have the vision, which is the outer level of the onion saying, this is our vision for the product or market where we're going with this. So did that help? I wish I could draw you a picture. Yeah. So typically that happens based on the context in the release planning and is refined when you do the iteration planning. And when you have a big program like this, you can't just leave the iteration planning to one team. They need to have cross-team planning so that they can make sure that they're lined up. So other questions about how this might work? Yes. We are having a very active order which is performing a pre-planning session with some of our business athletes before we have this big planning meeting with all the teams together. So when you are starting there, a big planning, a lot of the planning is already done by the owner and some of our business athletes before this great meeting with all the screen members in them. So it's, these planning meetings doesn't have to be that long. You don't have to miss the planning stuff before the big meetings. So the insight that was shared was that at his company, they have pre-planning meetings where the product owner is thinking through where he wants the product to go and thinking through the dependencies working with the business analyst to understand that before you get everybody in the room, which makes a lot of sense. I mean, you can do it with everybody in the room, but it tends to feel a little more chaotic and it tends to feel like it's not such a good use of people's time. The advantage of having everybody in the room is that then everybody understands at a detail level, because the value of a plan is really in people getting to understand what's going on. 
But you do have to have a certain amount of understanding about how these features work together, how these parts of features work together, and how what the dependencies are, both technically and in terms of the people who work on them. Because sometimes you cannot keep the team perfectly intact. Sometimes you need to flex on that. Although ideally, I like to have a long term look at the work and say, what are the teams that we can put together that are going to be able to stay together for as long a time as possible and learn together for as long a time as possible. Because then they enhance their capability to work together over that period of time. Yes? You mentioned that the research show that five person is the size of a team. Do you have any references to that? So the question was, I mentioned that the ideal size of a team was five. Yes, that comes out of the work of J. Richard Hackman, H-A-C-K-M-A-N. He's done a lot of really interesting work on teams, which I'll reference in my talk tomorrow as well. He has a recent book out, which I think is called Intelligent Collaboration. It'll come up if you look at Hackman. But his most recent research is based on teams in the US intelligence community, which more closely mirrors the work that we do than a lot of the research on teams. Because it's looking at teams in knowledge work rather than teams in factory work, which is where a lot of the research on teams has come from. So if you apply these principles, then you can handle the coordination and integration laterally. So you're really working on a lateral level, not building up layers of hierarchy and additional departments that have responsibilities, which always causes a delay in information. It always slows down the feedback. It always takes away responsibility because that's someone else's problem. And when it's someone else's problem, those problems wait. So integrating laterally makes sure that people still have a sense of the whole product. They still have a sense of responsibility for the whole product. And people are avoiding that late learning and the integration hell. When you take some of the burden off scrum of scrums because you are doing integration with these other lateral mechanisms, then the scrum of scrums can actually focus on impediments and that sort of day-to-day communication and coordination, which is what they're really good for. That's where they really shine is in that level of integration. Looking at impediments, rolling those things up, getting management involved in the impediments, rather than having all of the burdens of integrating product, coordinating how the work is done, making sure that dependencies are clearly communicated, making sure that priorities are clear across all of these things, that's just too big a burden on that fragile structure. So take that burden off, use lateral structures to handle that, and then you're going to be able to really focus them on what they're good for, which is impediments. When you integrate laterally, you are creating an information flow that goes through the organization rather than just up and down. Because up and down is slow, and it's often makes things stupid. Not because people are stupid, just because the information goes so slowly that people don't have the right information they need when they need to make decisions on a day-to-day basis or when they need to make bigger decisions. 
So this makes sure the information flows laterally throughout the organization, and it creates information redundancy and skills redundancy, which is what makes organizations flexible. Hierarchy makes organizations slow and inflexible; a network makes them fast and flexible. So this builds on that principle of redundancy in networks. So I want to make you aware of some resources as you're moving into bigger programs and you want to think about how to do this. Oddly enough, some of these come from the technical domain. If you look at Eric Evans' book on domain-driven design (how many of you have that big blue book? It's a classic. It's big, it's fat, it's a classic), the way he thinks about dividing up contexts and domains for the design of software is actually useful when you think about your organization. Not that you end up with rigid departments, but it will help you understand where the context boundaries are. Craig Larman and Bas Vodde have some interesting ideas about scaling. I don't agree with everything they say, but they have a lot of really interesting ideas about how to address this, so it's worth looking at. And then if you look outside of software, there's some interesting work that can guide you as you look at scaling these large programs. Gareth Morgan's Images of Organization is a very interesting book that contrasts the traditional mechanistic model, the dominant model (particularly in the US) of how organizations work: the organization as the obedient machine, where if we just decompose everything to the right level, everything will work perfectly, like a cog in a clockwork. And we know that's not true. But it does talk about some alternative methods for organizing people to accomplish goals. So it helps folks shift out of the mindset that the only model of organization is "we are a clockwork, we are an obedient machine." Designing Team-Based Organizations is 16 or 17 years old by now, but it's about the only book that's been written on designing team-based organizations, so it's a decent starting point. Creating Strategic Change by William Pasmore talks about the legacy of very defined job descriptions and role descriptions and how that actually gets in the way of flexibility and responsibility in organizations; it's an interesting book to look at. And Leadership and the New Science by Margaret Wheatley looks at a new model of organizations. When we talk about scaling Agile, we're really talking about a new way of organizing. We are really talking about moving into a new way of thinking about the way our organizations work, one that emphasizes pride in work, that emphasizes results, that emphasizes delivery of valuable software to customers. And our existing mechanisms for organizing are just not sufficient to do that. So hopefully I've given you some ideas and some starting places on how to organize as your programs grow, ideas that will save you from following the path of bureaucratic hierarchy and will help you hold on to the principles and the qualities of Agile that made it attractive to us in the first place: that it's fun, that it's satisfying, that we get to feel like we're actually doing something useful. So I have time for more questions, if you have more questions. And if not, I thank you for your kind attention. It's been lovely spending the morning with you. Thank you. Thank you.
Agile methods depend on effective cross-functional teams. We’ve heard many Agile success stories…at the team level. But what happens when a product can’t be delivered by one team? What do you do when the “team” that’s needed to work on a particular product is 20 people? Or 20 teams? One response is to create a coordinating role, decompose work, or add layers of hierarchy. Those solutions introduce overhead and often slow down decision making. There are other options to link teams, and ensure communication and integration across many teams. There are no simple answers. But there are design principles for defining workable arrangements when the product is bigger than a handful of agile teams. In this talk, I’ll cover principles and practices and explain how they work together to address coordination, integration, and technical integrity without adding levels of hierarchy or bureaucracy.
10.5446/51016 (DOI)
Okay, good evening everyone, and thanks for staying so late after this long day of great sessions here. So in this last session of today, I'm going to be talking to you about some more advanced stuff in the Windows Runtime, or WinRT. It's not going to be a level 400 talk; I don't think it's time yet for level 400 in WinRT. This is the session for after you've seen all the introduction talks from Microsoft: you've done some playing with it, but you still have quite a lot of questions. So what I did is I gathered quite a few topics that I think are interesting, at least for me they were interesting, and I hope they will be for you as well. Let me quickly introduce myself, because we have a lot of things to do. My name is Gill Cleeren, I come from Belgium, I think it's the third time here at NDC, and I work at a company called Ordina, where I'm doing mostly XAML stuff, WPF and Silverlight projects. I'm a Silverlight MVP and a Regional Director for Microsoft, and I've written a book; I've actually written two books on Silverlight. My blog, my email, my Twitter: you can read them from there as well. I will post all these demos as well as the slides on my blog, probably tomorrow. So this is the new book that I have released on Silverlight 5. It's called the Silverlight 5 Data and Services Cookbook, and it's available in the store here as well. The reason I'm including it here is that there are actually a lot of things in this book that are relevant for Windows 8 development as well. This book focuses mostly on the enterprise stuff: services, data binding, MVVM, WCF RIA Services, and it also covers some Windows Phone. I haven't counted yet, but I think, let's say, 60 to 70 percent should be at least applicable to Windows 8 Metro-style apps as well. So there are a lot of interesting things in there if you're doing XAML development currently or are planning on doing it in the coming months. All right. So what I did is I actually created a presentation that's way too long, unless you want to stay here until 9, everyone? Yeah, okay, cool. Okay, so what I'm going to do is introduce the topics that I have prepared for you, and then we'll do a vote on which topics seem most interesting to you, and we'll pick out the ones that get the most votes. So I have something on periodic tile updates as well as push notifications. Periodic tile updates: you don't hear people talk a lot about those; they are basically a polling system where a service is polled to update the tile. Push notifications: sometimes that's also a bit complicated. Then we have a very big block on data. I'm not going to split that up, because the parts depend on each other. So there's a big block on data and services that contains things like data binding (what is possible, what is not possible), some things on data controls, some things on OData and how you can interact with OData, how you can do background downloads, how you can access regular WCF services and JSON services, and what is available in WinRT to do that. And we're going to finish that part, if we get to it at least, by taking a look at the Live SDK, the services that allow your apps to interact with the Live SDK, for example to include SkyDrive. I have a part on localization that might be interesting, because your apps are going to be deployed in a worldwide app store, so localization of your apps is going to be very important. Asynchronous development.
I have some slides on async for the people have never seen any async Happening or what it's doing or how it's actually working. I have something on files Local and roaming data. How is that actually working? You heard Microsoft talk a lot about roaming data But what are the options to do local saving? What are the options to do remote saving and how can we interact with the local file system? Can we interact with the local file system? I have a part on reusable components something that you built and managed code and you can for example reuse in C++ or in JavaScript and There is also a part on background tasks and Lockscreen apps the back of tricks will do that in any case. It's that's two three minutes of some code samples that might seem interesting Okay, so we'll do this by popular demand Who wants to see something on prelactile updates and push notifications? No one wants to see something. Okay, that's not good If we're gonna end up with a list of zero then we can all go home Okay, so no one wants to see that. Okay. Who wants to see data and services? Yeah, quite a few. Yeah, that's one two three four five six seventeen if I can correctly a sink Three five six six or so Local files data acts as a local data taxes roaming data one two three four five six seven eight ten I'm very good at counting people Building reusable components That's a very small so we can do that for sure great. That's that thing that was 12 if I'm not mistaken. Yeah Oh 12 might have been 13 didn't catch that area and I'm background tasks and lock screen apps That's a bit more that's 15 so that should be so we should be starting with this one Then whoops, then we're gonna do a background Okay, so that's from the form to the back then And reusable components, I think we're gonna be around of out of time by then but we'll see where we get And then we'll do the back of tricks. So the data parts of course the longest past that's bit of my One of the things that I like doing most working with data services hands the book. Okay, so the first part is gonna be Data and services and look how handy I did that you can create areas in your Or how is it called sections in Windows in PowerPoint presentations? Handy if you allow the public to choose what you want to see. Okay, so Let's start with data and services First thing I want to talk about is data binding who loves data binding Everyone loves data by everyone who has done some XAML development was done MVVM loves data binding I'm sure they do now if you've taken a look at the At winRT and I hope I assume most of you have taken a look at some chat sessions of Microsoft Played with winRT already. Maybe I've already built the next Angry Birds in winRT. I don't know You will have noticed that not everything that you know Coming from WPF coming from Silverlight is supported in in Metastyle apps in winRT. So what is supported by the way one big disclaimer This entire presentation is running on consumer preview I didn't have the time to upgrade. I think it's 30 demos in total to Since since how many days it's released now four days. So Maybe there are some changes Could be that some things are not really correct like I'm saying them anymore So my apologies for that in advance. So in consumer preview at least What do we have that is supported to regular data binding? 
What I mean by that is the data binding syntax hasn't changed The the mode one way two way that is supported element binding So binding and one element to another element is supported Converters are supported not 100% in the sense that the parameters change model binding So you bind to an object That you set as a data context that is supported indexes data templates who doesn't love data templates I love them Collection view source something that was added in silverlight if I'm not mistaken silverlight three The notification system so the I know to decay those are the I know to fair property changed and I know to find collection changed that also works Binding to Jason services WCF services will see that later on and we can see data binding exceptions in the output window boo-hoo anyway If you see this list, I might have missed a few things But there is something that's that's quite a few things that actually aren't there. There's no string format There's no format value. So the binding base properties not there because the market extensions not there implicit data templates Not there data binding breakpoints. I love that feature in in silverlight five there That's that's doesn't work in win RT at least in the version and that we're using here today. So to to To make a statement here, we are more or less with data binding engine in win RT at the level of Silverlight to silverlight three ish somewhere in between That means a lot of features that if you've done silverlight if you're doing WPF Are not available in win RT, which is a bit of a Setback basically I'm gonna very quickly show you a couple of Demos so I couldn't of course start my demos up front Where is Data binding so I'm gonna show you some very basic data binding samples just To set the record straight so that everyone knows what we can and cannot do Okay, so I'm gonna run this I Have spent hours on this demos create making them nice as you can see here. I have the original background So what does work an element binding works as you can see here? I have a list box I can click on one of the items in the list box and a text block is automatically being updated It doesn't follow the screen there, right? It's readable. Okay So you can see an element binding works if we take a look at the syntax. That's very quickly do that That's not waste too much time. So an element binding is all about XAML codes. Go away you go away What's this now an empty window? Okay, that's cool Thank you. So We have here an element binding we have a text block here and that's text blocks as text property is Bound to another element which is that list box main list box and it is binding to the selected item That was added in self-light in self-light three if I'm not mistaken. So binding to elements works then object binding Yeah, I'm gonna skip on that one observable collections works So for example here, I have a bound this list box to an observable collection And I'm gonna add some Persons to it as you can see here. I'm not updating that list box It's automatically being updated for me. So if we take a look at that that is the observable collection one That's what you see here. We have an observable collection of person instances and to that Initially, I add only two and when I click on that add person I add some other persons to that list box Automatically, it doesn't have we don't have to do anything manually. 
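For readers following the transcript without the demo screen, the two change-notification patterns being shown here (ObservableCollection for collection changes, and INotifyPropertyChanged for property changes, which comes up next) boil down to roughly the following sketch. The Person class, the control names and the event handler are illustrative stand-ins, not the speaker's exact demo code:

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

// Illustrative model; the demo uses a similar "Person" type.
public class Person : INotifyPropertyChanged
{
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            // Raising PropertyChanged is what makes the bound UI refresh itself.
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

public sealed partial class BindingDemoPage : Page
{
    private readonly ObservableCollection<Person> people = new ObservableCollection<Person>();

    public BindingDemoPage()
    {
        InitializeComponent();
        people.Add(new Person { Name = "Ada" });
        people.Add(new Person { Name = "Grace" });
        // PersonListBox is assumed to exist in the XAML with an ItemTemplate
        // bound to Name; ObservableCollection raises CollectionChanged, so the
        // ListBox updates without any manual refresh.
        PersonListBox.ItemsSource = people;
    }

    private void AddPerson_Click(object sender, RoutedEventArgs e)
    {
        people.Add(new Person { Name = "Linus" }); // appears in the list automatically
    }
}
```

The point in both cases is the same: the UI listens for the change notifications, so neither the list nor the bound text has to be refreshed by hand.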
It works because of the observable collection We all know that right, so we're not gonna spend too much time there Notify property changed also works there we go now We're bound to something that doesn't have any values set yet now I'm gonna change that object because that object implements the I notify property change property change the event is being called Oh, it's being raised this updates this person. Sorry is now shown in the UI and that interface hasn't changed as much as a As long as I looked at it So we have the changeable person class here that implements the I notify property change which has an event called property changed that we raise Whenever the object sorry the property is changing. You have to do that manually that hasn't changed from WPF or self-right either Converters as you can see here. They work. This is sample from the SDK. I'm not mistaking so here we have a letter great and it's actually binding the Value of this slide here to a value as you can see here So if you get a hundred percent you are superstar if you get below the fifth oops if you get below the 50 percent You get an F. Okay So that is converters for you. So a converter is a class that implements the I value converter interface as you can see here the I value converter Which has a convert and convert back method which gets in the value does it converge in this case to? to a string and I'm sorry to To an integer everyone I'm taking yeah, and then checks Returns the string value and that is being shown in the UI that instant that converter is instantiated And it's used here in the binding expression text element name and then we have the converter being used Nothing out of the ordinary I think and then we have we have data bind as we data templates data templates also still keep on working So here we see a list box in this case and the items contain the color inside of the object themselves And we are binding So we are using a data template as we have done that in WPF and silver light for many years now in winarty in the exact same way So we have here on that list box an item template and the item template is set to a data template Which contains a border that's giving the background color So we are binding the background to that color and the tech look is being used as well nothing special really cool things They still work so those are the things more or less that still work If you're coming from another sample technology now one thing that is Was not possible in silver light Is binding to anonymous types? It wasn't possible in silver light to bind to anonymous type For example, you do this query through this link to object query and you're gonna bind to the result It wasn't possible to do that in in in silver light. It doesn't didn't work Now you can simply say well, I'm gonna create a new object on the fly as an anonymous type It's like new blah blah blah and it's actually possible to bind for example your data template to properties inside of that Anonymous type that wasn't possible before let me prove that to you So that's still in the same demo So what I'm gonna do now here is I'm gonna bind to that same list box using in the data template I'm gonna do a query on my items and I'm gonna get all the values that are that that have the Word red in them as you can see here. That's so that's the red the reds and the red devils or something So, how do we do that? 
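For reference, the proof he is about to walk through looks roughly like this; the Team class, the sample data and the ListBox name are hypothetical stand-ins that mirror the demo:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical type standing in for the demo's Team class.
public class Team
{
    public string Name  { get; set; }
    public string City  { get; set; }
    public string Color { get; set; }
}

// Inside the page's constructor or Loaded handler:
var teams = new List<Team>
{
    new Team { Name = "Red Devils", City = "Brussels",  Color = "Red"  },
    new Team { Name = "Reds",       City = "Liverpool", Color = "Red"  },
    new Team { Name = "Blue Birds", City = "Oslo",      Color = "Blue" }
};

// LINQ-to-objects projection into an anonymous type.
var redTeams = from team in teams
               where team.Name.Contains("Red")
               select new
               {
                   TeamName  = team.Name,
                   TeamCity  = team.City,
                   TeamColor = team.Color
               };

// TeamListBox is assumed to be declared in XAML with a DataTemplate binding
// to TeamName, TeamCity and TeamColor; in WinRT this works, whereas
// Silverlight could not bind to anonymous-type properties.
TeamListBox.ItemsSource = redTeams.ToList();
```

The DataTemplate can then bind to TeamName, TeamCity and TeamColor directly, which is exactly the part that did not work in Silverlight.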
Let's take a look at the code so anonymous types that this one So we have here a link query a link to object query the teams is a class that Is basically inheriting from a listing team so a generic list and so what do we do? We add some new team instances. That's what we do when we create a new teams object for this demo Now I'm going to do a link query from team teams where the name contains red select me a new anonymous type And I'm gonna return that and see that we have a team city team name and team color here And if we now take a look at the XAML code that generated the view that we saw just seconds ago We see that we have a list box item template and we are simply binding to team color and team name If you try that in silver light, it won't work. This is something that works in metro didn't work in winner team Sorry, it didn't work in silver All right, so that demo can go away Okay now Controls binding to control that's basically the same thing There are some new controls interesting controls added in Windows 8 in metro list controls basically List you the grid view the flip view the semantic zoom. I'm not gonna spend time on Explaining you what each of these controls do I'm gonna spend some time on a few extra features the first one I can't show you because it doesn't really work at least I didn't found out find out how it really works in consumer Preview I found to be the bug in consumer preview But anyway, it should be it should be happening in the maybe it works in the release preview So what is this cool feature now? Imagine that you have a grid view that is binding to a list of thousands of thousands of images returned from flicker now In essence the grid view doesn't have any issue with that It is internally virtualized meaning that it will only create the items that are physically in the view and a few extra ones So you had that in silver light you had that in WPF with the list box So for example as well it only created the items in memory that was being shown at a certain point now virtualized Is it sorry internally? It is virtualized now If however you are binding to a An XML that is being returned for example from flicker that is containing 10 10 thousands of of Tags image tags it might be a bit of a burden on your service Because it might be a bit too much for your service in fact because maybe you say well The user searches for Oslo and there's going to be 10 thousands of pictures of Oslo So you don't want to you don't want to return that service response or you don't want to make that service response that big So the grid view however has two properties that allow it to when you start or when the user starts flipping through them That notices that you are at the end of what has been loaded And then you can use the load more items async and it has more items It has more items has has to be set to true And then at some point an event is going to be triggered that allows you to do some asynchronous loading in the background of the next 100 images Image tags that's possible. However, I can't show it to you because it doesn't work I didn't actually find how I find out how it really works I found there is a bug in it. So it's going to be probably fixed in the new future Now, however, what I can show you is semantic zoom. I noticed that a lot of people Like the semantic zoom, but really don't know how it works. 
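A brief aside before the semantic zoom discussion: the incremental-loading hook mentioned a moment ago (the "has more items" flag plus a "load more items" call) surfaces in the WinRT XAML data API as the ISupportIncrementalLoading interface, which a GridView or ListView calls automatically when the user scrolls near the end of what has already been loaded. The speaker could not demo it in the Consumer Preview, so take this as a sketch against the later API shape; the paging service used here is purely hypothetical.

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Runtime.InteropServices.WindowsRuntime; // AsyncInfo
using Windows.Foundation;
using Windows.UI.Xaml.Data;

// A collection the GridView can grow on demand as the user scrolls.
public class IncrementalImageCollection : ObservableCollection<string>,
                                          ISupportIncrementalLoading
{
    private int loadedPages;

    // The control checks this before asking for another batch.
    public bool HasMoreItems
    {
        get { return loadedPages < 100; }
    }

    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        return AsyncInfo.Run(async cancellationToken =>
        {
            // Hypothetical paging call; imagine it returns the next batch of
            // image URLs from a service such as Flickr.
            IList<string> urls = await FakeImageService.GetPageAsync(loadedPages++, (int)count);
            foreach (var url in urls)
                Add(url);
            return new LoadMoreItemsResult { Count = (uint)urls.Count };
        });
    }
}
```

Now, back to semantic zoom, which he turns to next.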
It's Some it's a bit complicated To give you a very quick intro to semantic zoom semantic zoom is the thing that you can pinch on To go from a zoomed out level to zoomed in level Basically, it is two grids on top of each other to grid views effect on top of each other That you can if you are on the lower level the zoomed in level That's basically all your data and when you zoom out it's going to be Another view of your data, perhaps a grouped view of your data You can use that you can use that on the screen Using pinch you can also do that with the mouse Using this call wheel where you can use it with the control and the plus and the minus sign on the keyboard now In fact the grid view sorry the semantic zoom control uses two controls that implement the isomantic zoom And at this point that is just the list view and the grid view that are available here So it needs two of these controls. It needs either two lists used two grids of our combination of the two So that's the code that you see there at the bottom Some typical samples. I'm going to quickly show you a few screenshots. There we go So this is how a semantic zoom is ideally used. For example, you have a store there and that store has Has a has a number of categories if I'm not mistaken Yeah, so it has a an overview of the of the apps grouped per category And when you so the lower one on the right is the zoomed out view and my new pinch It will load up the items that uh of all categories for all subcategories That's a typical way of using the semantic zoom. This is not a nice Implementation for example the zoomed out view shows you the number of messages per day Grouped per day that is and the zoomed in view simply shows you all the messages I'm going to skip the other ones. It's basically the same download the slides. You can see them all now The question is of course, how do you implement that semantic zoom the semantic zoom? However has Again in release can sorry not in really can it in consumer preview A few things in in terms of data binding that don't really work that probably will be fixed When when everything is is final Let's run it first. I'm going to show it very quickly. It's an ugly looking demo, but that's that's not why we're here So this is the zoomed in view. It shows me a couple of movies and then I can use So we use the mouse control And and the scroll wheel here or I can also do this programmatically switch views if I click on this fantasy view I'm immediately brought to the fantasy Category in this case. There's not not much space on the on the right there anymore So it zooms in so that fantasy in any case is in view. That's how a semantic zoom works Now, let's take a look at the code for the semantic zoom. That's not what I had to do. There we go So, um the semantic zoom control In this case, I'm using a collection view source. Again, this is one implementation. It's of course possible to do it in another way Let's first take a look at the view model perhaps So this is the class the view model which exposes a list of movie category instances under the movie category property And a movie category as you can see there at the bottom Has a list of movie instances and each movie, of course has a couple of properties And then in the constructor in here. We're instantiating this are hard coded And then we are grouping the movies by category And instantiating that movie categories And then we have this list. Now the data is actually ready to be consumed using a semantic zoom control So what have I done here? 
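In code form, what he has done amounts roughly to the following sketch. The view model property, control names and property paths are assumptions that mirror the demo, and the actual demo does most of this in XAML rather than code-behind:

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Data;

// viewModel.MovieCategories: a list of MovieCategory objects, each exposing a
// CategoryName and a Movies collection (names mirror the demo, not verbatim).
var cvs = new CollectionViewSource
{
    IsSourceGrouped = true,
    Source = viewModel.MovieCategories,
    ItemsPath = new PropertyPath("Movies") // where each group's items live
};

// Zoomed-in view: every movie, grouped per category.
ZoomedInGridView.ItemsSource = cvs.View;

// Zoomed-out view: one entry per group. In the Consumer Preview this did not
// happen automatically, hence the explicit hook-up to CollectionGroups that
// the walkthrough below describes.
ZoomedOutGridView.ItemsSource = cvs.View.CollectionGroups;
```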
I have a collection view source And I'm binding it to movie categories. Of course in the code behind in this case In here we are setting a new instance of the view model to data context. This is not real MVVM. I know that I've been there. I've done that So in any case we're setting the collection view source and we're binding it to the movie categories property on the view model So far so good. Now we specify the items path To be movies. Remember each movie category Each movie category has a property movies So now it knows that the movie category has items and that are located in the movies property. That's one thing Okay, now my use my semantic zoom control That has two views a zoomed out view and a zoomed in view So the zoomed out view is the top level one. The zoomed in view is the one at the bottom So the zoomed out view Is a grid view I specified a grid view here. I could have used on this view if I wanted that I'm using a grid view here And that one Is simply saying well, I'm the zoomed in view is now binding to the movie categories Um It's actually binding to the collection view source and that's how it knows that the data is grouped And the items are going to be the movies now. Then I define the grid view The grid view itself Has an items panel, which is basically how the different Blocks are each groups. Sorry how each of the groups are going to be stacked in this case, which is a virtualizing stack panel horizontally And then we go into the group definition now we specify how a group is going to be shown Here we specify that each group in the group had a template Has to be a button right With the category name on there. We map is it still running? No, it's not running anymore So if we uh If we take a look again, so in here So that's the category that you see here. That's a button. I can click on it doesn't do anything, but it's a button That's what you see here Um and then the item sorry the The the panel for each of the items inside of the group is in this case a variable sized wrap grid That's basically specifying how all the movies inside of each group are going to be visualized Then In the uh, yeah, so in here now This is basically a bit of a bit of a bug. I think I think they're going to change that It doesn't automatically find those movies in here. So I do need to specify In here to the grouped sorry to to the to the zoomed out view What are going to be the collection groups? So how it needs to show all the different groups and that's what i'm doing here So on that grouped uh items use source so the view source for my uh for my data binding I specify views or collection groups and I specify those as the item source for my zoomed out view So basically that means that the data context is not uh going through I assume that it's going to be changed when things are final The zoomed in view however that works out of the box Sorry the the uh this one here So for the uh the zoomed out view that we specify that uh each uh group is going to be a group.category name You just simply specify for each group in my data take the category name and show that in the zoomed out view That is one way of doing it There are other ways of doing it, but this is one way that I found that is reasonable to do with my uh semantic zoom control Okay, let's continue if you have questions we'll do them at the end because we have so much thing to talk about and um Then we'll we'll have some time more time for everyone to to look at things. Okay What about o data people are using o data here? 
Couple of people yeah now o data is I would say it's quite popular, but judging from this room, I would say not Um to o data is basically what you create with the wcf data service. It allows you to create or to expose your model over a service using uh a query based syntax you can ask your model for uh Basically you can perform a query against your model Now again in consumer preview and I imagine this is not updated at this point yet for uh release preview There was an o data client library available Um again if you're coming from from wpf a silverlight Uh, you probably know that there was in dot net in general. There was an o data client library available That makes it much easier to work with wcf data services. So normally you would have to create the uri yourself Get in the xml that is being returned by the by the service and then Go pass that xml into objects the client library does all that for you. It basically allows you to do right there Is to your wcf data service and uh create object automatically It does work, but not everything is implemented yet. Uh, probably The biggest thing that is not implemented at this point is the ability to create Uh, your uh, your service reference in visual studio. That doesn't work yet if you try doing that to Wcf data services data service. It doesn't work at this point. You have to use something called the data svc Util which is comparable to svc util but then for wcf data services Okay, let's quickly take a look at that Um, let us uh Go to uh, not this one Let's take a look at o data All right All right So it's it's hitting the breakpoint we'll take a look at the break on you just a second So with this it is now we are now talking with the netflix data as uh, wcf data service Or the netflix o data endpoint Um, netflix is a movie service and they have a great api to browse all their movies that they Expose can't get it in belgium. Can you get it here? Not flicks as it exists here already No, I don't think so. I don't think it exists in any country in belgium. So anyway, they have a great api They have an old data endpoint and we're using that one in this case to browse the movies In uh in in this weird categories that we see here Okay Not gonna open those movies Okay, now how does this actually work so what you need to do is um You need to use the data svc utl which is included when you install the client library And then you can you can point it to the o data endpoint and then it will create for you your um proxy Let's say what would normally be created by adding a service reference, but it doesn't work in consumer preview So when I do that I get this class that is being generated for me As you can see here. That is basically my context my data service context that is being generated Exactly the same as in other technologies. There's not really a much of a difference happening here Now, um inside of my code, uh, and let's go let's perhaps run it again Then we can see what is happening. So what is happening here is I am going to load uh genres from um from uh Netflix, but I'm going to do a query I'm not going to say look give me all the genres because they have tons of genres What I'm going to do is I'm going to say give me all the uh genres that contain sci-fi So I'm going to talk with that context the code that was generated in the using the data svc utl And if you've never seen o data, so this is what is going to happen. Uh, where is the query here? um So what is going to happen? Let's zoom in on this. 
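While he zooms in on the demo, here is roughly what that client-side code looks like. This is a sketch only: the context and entity names are taken from the DataSvcUtil-generated Netflix proxy, and the asynchronous execution is expressed with the long-standing Begin/EndExecute pair wrapped in a Task, since the exact awaitable surface of the preview client library varied.

```csharp
using System;
using System.Data.Services.Client;
using System.Linq;
using System.Threading.Tasks;

// NetflixCatalog is the DataServiceContext generated by DataSvcUtil from the
// Netflix OData endpoint; Genre is one of its entity types.
var context = new NetflixCatalog(new Uri("http://odata.netflix.com/Catalog/"));

// The LINQ query below is translated by the client library into an OData URI
// along the lines of .../Genres()?$filter=substringof('sci-fi',Name).
var query = (DataServiceQuery<Genre>)
    context.Genres.Where(g => g.Name.Contains("sci-fi"));

// Inside an async method: wrap the classic Begin/EndExecute pair in a Task so
// the call can be awaited.
var genres = await Task.Factory.FromAsync(
    query.BeginExecute(null, null),
    asyncResult => query.EndExecute(asyncResult));

foreach (var genre in genres)
{
    // Each genre is a materialized CLR object, not raw XML;
    // e.g. add genre.Name to a bound collection here.
}
```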
Let's try that there we go The query as you can now see it is now talking it's readable. Yeah, so uh, it is now It has it has built up a query. So it has translated my link query into uh A uri And that uri is going to return me a unique response So that if you don't use the client library you have to do yourself and that is basically uh, not very Fun thing to do. So it's automatically translating me that query into a uri. I'm going to launch that query to Netflix There we go and now it has gotten in the response and now basically I'm going to go through all these genres build up where it's going to get the um Now it has to download for each of the categories an image It's gonna I think it's going to take the first image of the first move that it finds and then it's going to do another call to o data The o data endpoint now notice what we haven't talked about yet Is that of course we are using the await here and here as well If you've done o data work in a previous life You'll probably recognize that this is more or less the same thing You build up a data service query by talking to your context you execute that query you wait for the response Asynchronously using the awaits and we'll talk about the wait later if you have time and then we get in the response And because of the o data client library, it returns me objects. It doesn't return me xml It basically returns xml, but that o data client library is uh materializing my xml into Into objects and I can use those objects in as you can see here It's automatically creating me new genre objects. Okay cool way of working wcf data service Not always that easy, but it's a cool technology if you get into it. It's really Um, consumable for many other technologies all right another thing that uh, if you were in my windows phone talk, uh, we've looked at Uh at background downloads in windows phone um Now in windows 8 the same problem is coming up again that we have in windows 7 7 What if we want to build an application that needs to do downloads in the background? Because of the fact that we are limited to do one application in the foreground If you build an application that needs to download stuff the user has to either Leave that application on the screen or otherwise his download is is not going to continue. Of course, you can't snap it Yeah, okay, that's one solution, but you never know if the user is going to be able to snap So that's not a good solution either. So, um In windows uh met in windows 8 even after we have the windows networking background transfer which has a background Downloader and to that background downloader. We can add download operations Which is basically saying create me a download operation for a ui Sorry a ui add that to the background downloader and that background downloader is going to take over the download It doesn't matter if your application remains on the foreground or not. It doesn't matter It's basically a different executable. Uh, and that is the background transfer host dot a say Exe that is going to run even if the application gets killed It's going to continue the download for you if the application is uh, is is not running in the foreground anymore Even if it gets if it's precious even if it's uh, it being killed the download is going to continue for you The only thing the only way that I found that I could stop the background transfer host is by deleting the file It's actually downloading then it automatically crashes and stops downloading. It supports credentials. 
It supports Uh, things like custom cookies like setting the headers. That's uh, that's sometimes very important I've done this for a project in silver light. It's can can be very interesting and Another interesting thing it can also report progress while it is running in a different executable It's a different service basically on the machine. It is capable of communicating back to the windows Metastyle app what the progress of the download is Okay, let's take a look at that uh background transfer this one Till what time do we have 40 6 6 40, right? And that one guy was just going to stated until nine. Yeah someone over there Okay Okay, let's take a look at downloading stuff. I am going to um I'm gonna copy a link that I have somewhere over here, which is the big bunny uh movie It's quite big. It's about a gigabyte. I think so at this point my application check. There's basically no backgrounds for my app running So, uh, let's uh remove this thing. Let's put in something different here And I'm going to download something called avi There we go. I'm gonna start to download normally should start it if we get network connection. There we go So it says downloading Fingers crossed If it waits till out there we go So it's now downloading and you see that my app is actually getting progress being reported uh to uh to the ui Yeah, of course. I'm not uh scrolling all the way back. So it's actually quite fast network already on 2 percent If we now take a look at the task manager You'll see somewhere over here. There we go The background transfer download upload host my app is now not the foreground app. In fact, it's not suspended But if I kill it, it keeps on downloading this so you see that this is a different executable That is downloading that's performing my download for me while I don't have to do it. It's doing that for me All right, um, let's go back to the app and let's cancel that because otherwise it keeps on downloading Is it cancelled? Yeah, okay In Norway the network is much faster than in Belgium. Anyway Uh, okay, let's take a look at how this thing works So, um Not this one. Let's keep that one. Let's take a look at what we do when we click on the start download button So first I'm going to build the ui where uh that I want that I want to download from from the from the ui From the from the sorry from the text box that I'm going to specify where the file has to be located notice if we have time for files, I think that was one one that you voted high Um, you cannot put this file in the c slash whatever you can't do that You can put it for example in a known folder of the application or also in the local API storage API of the application so the storage file Gets created in this case. I'm putting this in the pictures library. I could have put it in my documents. 
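Pieced together, the background-transfer flow being walked through here, and continued just below, looks roughly like this. The URL parameter, the file name and the way progress is surfaced are invented for the sketch, and the BackgroundDownloader/DownloadOperation calls reflect my reading of the WinRT background transfer API rather than the exact demo code.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Windows.Networking.BackgroundTransfer;
using Windows.Storage;

public class DownloadSample
{
    private CancellationTokenSource cts = new CancellationTokenSource();

    public async Task StartDownloadAsync(string url)
    {
        // Needs the Pictures Library capability; local storage or another known folder works too.
        StorageFile file = await KnownFolders.PicturesLibrary.CreateFileAsync(
            "bigbuckbunny.avi", CreationCollisionOption.GenerateUniqueName);

        var downloader = new BackgroundDownloader();
        DownloadOperation download = downloader.CreateDownload(new Uri(url), file);

        // Progress arrives on a background thread; a Progress<T> created on the UI thread
        // marshals the callback back for us.
        var progress = new Progress<DownloadOperation>(op =>
        {
            var p = op.Progress;
            double percent = p.TotalBytesToReceive > 0
                ? p.BytesReceived * 100.0 / p.TotalBytesToReceive
                : 0;
            // update the UI with percent...
        });

        // The background transfer host process takes over from here, even if the app is suspended.
        await download.StartAsync().AsTask(cts.Token, progress);
    }

    public void CancelDownloads()
    {
        cts.Cancel();                          // cancels everything tied to this token
        cts = new CancellationTokenSource();   // start fresh for the next download
    }
}
```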
I don't know Um, what I'm doing is I'm going to create a file for the download and I'm going to create a unique name That's not that has nothing to do with downloading effect but then then I create an instance of the background downloader and I specify Uh that the downloader should create a download and that is going to result in the creation of a download operation instance Yeah The download operation instance in fact is not Anything it doesn't do anything yet So that download now passing to this handle download async and in that handle download async I need to start that download async and that is actually going to start the background transfer host executable and download the file Asyncously so in here, I say download start async passing in um, the something called a cancellation task A cancellation token source. I'm sorry. Uh, and also passing in an uh, a delegate to Return me progress in this case the progress callback that's defined two lines above in here Um, so what happens when I uh, so what happens to report the progress this download progress is being called In this case this happens on the background thread. So I need to marshal back to my UI And therefore I'm using this marshal log Helping method in this case. Uh, so this is going to be called To update the UI about the progress of the download and then when I click on cancel That's what's happening here when I click on cancel that cancellation token source is the is the way of Download canceling the download everything that is associated with this cancellation token source Can be cancelled asynchronously. So basically killing the download That's what I'm doing here. I'm saying everything that's associated with this cancellation token source cancel it And then let's restart all over create a new cancellation token source and create a new list of active downloads I've skipped on that one. Okay All right, let us continue Let's go back to the slides Talk a bit about services in general what works asmx wcf Rest services. They are supported Using xml using json that works as well RSS is supported and sockets are supported. They don't have a demo of socket set. That's all it's quite complicated In terms of wcf, that's always a very important one. Many people use wcf What is supported again more or less on par with silverlight? The basic htp binding works the custom binding works for example If you want to combine it with binary text encoding, sorry binary encoding The net tcp binding works and the net htp binding works That that Indeed means that the wshtp binding doesn't work. I assume for reasons that are the same as in silverlight So because of the fact that the private key would then be on the client So more or less the same thing as silverlight In terms of encoding so binary encoding is supported text encoding is supported Security again, more or less the same thing as silverlight You can pass in credentials if you've seen my talk. I think I did it here one or two years ago on silverlight security There I talked about passing username password in the header of the message with wcf that is possible in Metro as well using the transport with message credential or transport credential only So it is possible to do authentication with a service. That is an important one that that works So for the rest, yeah, again, I'm not going to read it all for you One thing to notice with wcf Again in consumer preview There is no file. So no configuration code generated for you. 
Everything happens in the Reference.cs, which is a bit of a pity actually. In the Reference.cs, the file that gets generated when you create a proxy, there is a partial method called ConfigureEndpoint which you can implement in your own code (as a partial method, of course) to configure the endpoint, to configure the binding basically. That's how it's done at this point. Everything is also task-based, so just like with the OData endpoint from Netflix, all communication with services is asynchronous and task-based. And one thing you shouldn't forget is that the application needs the internet capability: if you want to talk to a WCF service and you forgot the internet capability, it's not going to be able to reach out to the internet.

Okay, let's take a look at a very simple demo. Let's go to the service access sample, and let's just use a public service, in this case an ASMX endpoint. In the project you can see that no configuration code has been harmed in the making of this solution, but it wasn't generated either. That means that everything we need to do in terms of configuration has to be done via the Reference.cs. You can't change that file itself, because once you update your service reference it's gone, but there is a ConfigureEndpoint in there, a partial method that you can use to do the configuration of your endpoint in your own code. Let's hope that changes.

In terms of talking with the service: this is an ASMX service, so it uses the basicHttpBinding. What I'm going to do is instantiate my service client, as you can see on the first line, and then talk to my service. Everything is generated as task-based methods, so I can await them; I don't have to do any callbacks like I would have to in Silverlight. I can use the task returned by GetCityWeatherByZIPAsync, and the rest of the method is only executed when the call returns. If you take a quick look at this, you see that working with services is more or less the same thing; not that much has changed, apart from some things that are a bit missing. This is a US weather service, so think of a very bad TV show that was on many years ago: indeed, in Beverly Hills it is now 59 degrees Fahrenheit. So this has communicated with the service, and it works. The application itself has the internet capability set to true in its capabilities, because otherwise I would not be able to connect to that service. That was an ASMX service.
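To make that concrete, here is a rough sketch of the two pieces just shown: the ConfigureEndpoint hook that sits next to the generated Reference.cs, and the awaitable proxy call. The client class name, the weather method and the exact partial-method signature are taken from the demo and from how I understand the generated proxies, so treat them as assumptions rather than guaranteed API.

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;
using System.Threading.Tasks;

// Partial class sitting next to the generated Reference.cs (class and method names assumed).
public partial class WeatherSoapClient
{
    // Called by the generated code instead of reading a config file.
    static partial void ConfigureEndpoint(ServiceEndpoint serviceEndpoint,
                                          ClientCredentials clientCredentials)
    {
        // Example tweak: raise the message size limit on the basicHttpBinding.
        var binding = serviceEndpoint.Binding as BasicHttpBinding;
        if (binding != null)
        {
            binding.MaxReceivedMessageSize = 1024 * 1024;
        }
    }
}

// In the page code-behind: the generated methods are task-based, so they can simply be awaited.
public class WeatherPageLogic
{
    public async Task ShowWeatherAsync()
    {
        var client = new WeatherSoapClient();
        var forecast = await client.GetCityWeatherByZIPAsync("90210");
        // ...bind forecast to the UI; remember the Internet (Client) capability in the manifest.
    }
}
```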
Now, what about REST services? In Silverlight you would have used the WebClient; in Metro the WebClient doesn't exist anymore, so you have to use its replacement, which is called the HttpClient. There's not much difference: it's a bit of a different API, but you can do more or less the same things and the results are the same. What you can do with that HttpClient is GetAsync, PutAsync, PostAsync and DeleteAsync, so it is basically possible to work with RESTful services.

The response of a REST service is either JSON or XML. If the service returns JSON, or you build a REST service that returns JSON, you can consume it from WinRT: you can use the JsonObject, on which you can call Parse, GetNamedString and GetNamedNumber, or you can use the indexer, which you should never do, but you can. You can also use the DataContractJsonSerializer, which is included in WinRT. If the response comes back in XML format, you can use the XmlSerializer, the XmlReader, or LINQ to XML, which is probably the preferred way of doing it.

I'm going to quickly show you a very small demo where I talk with Flickr, the image service. They have a great REST API; some things you can only use via XML, and some things you can use with JSON as well. Since we are in Oslo, we're going to search for Oslo. A couple of times back I had some embarrassing pictures showing up, but in Oslo people are very nice, I think. There we go: these pictures are being returned from what is actually an XML response, and the images are downloaded on the fly. Notice, by the way, that this list is virtualized: it didn't instantiate all hundred or so image elements up front (Flickr normally returns 100 results). If I scroll very quickly you can see that it hasn't instantiated the elements yet; that is the virtualization you get by default.

Okay, let's take a look at the code. What I do is create a client: I call a method called createClient in which I instantiate the HttpClient, because the WebClient is gone. The HttpClient has a method called GetAsync, and what I pass in is basically the URI of the response I want to download; in this case it is a formatted string into which I put the parameters that Flickr expects. This is asynchronous, and I get back an HttpResponseMessage. Because this is XML, I read out the content coming back from Flickr by saying response.Content.ReadAsStringAsync, which returns a string that I can parse using the LINQ to XML implementation, an XDocument in this case. Then I loop through the results using a LINQ to XML query and bind that in my UI. That's more or less the same thing, something you already know.

All right. The final thing we're going to take a look at in the data part is SkyDrive. There is a Live SDK available for Windows Metro style apps.
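Before moving on to SkyDrive, here is a rough sketch of the two HttpClient patterns just described: a JSON response parsed with JsonObject, and a Flickr-style XML search read with ReadAsStringAsync and queried with LINQ to XML. The endpoint URLs, the api_key placeholder and the response field names are illustrative assumptions, not code from the demo project.

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;
using Windows.Data.Json;

public class RestSamples
{
    private readonly HttpClient client = new HttpClient();

    // JSON: GetAsync plus JsonObject (endpoint and property names are made up).
    public async Task<string> LoadMovieSummaryAsync()
    {
        HttpResponseMessage response =
            await client.GetAsync(new Uri("http://example.com/api/movies/42"));
        string json = await response.Content.ReadAsStringAsync();

        JsonObject movie = JsonObject.Parse(json);
        string title = movie.GetNamedString("title");
        double rating = movie.GetNamedNumber("rating");
        // The indexer works too, but it is easy to misuse:
        // string director = movie["director"].GetString();
        return string.Format("{0} ({1})", title, rating);
    }

    // XML: a Flickr-style search parsed with LINQ to XML (api_key is a placeholder).
    public async Task<string[]> SearchFlickrAsync(string text)
    {
        string url = string.Format(
            "http://api.flickr.com/services/rest/?method=flickr.photos.search" +
            "&api_key={0}&text={1}", "YOUR_API_KEY", Uri.EscapeDataString(text));

        HttpResponseMessage response = await client.GetAsync(url);
        string xml = await response.Content.ReadAsStringAsync();

        XDocument doc = XDocument.Parse(xml);
        var titles = from photo in doc.Descendants("photo")
                     select (string)photo.Attribute("title");
        return titles.ToArray();   // bind this (or richer objects) to the UI
    }
}
```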
I don't know if it's updated for release preview already But because of the fact that in Windows 8 you can log in with your Microsoft account, your previous live account It is possible to build apps that integrate automatically with the skydryff API that automatically is also using The live SDK, sorry, the live single sign-in So you can build an application That doesn't require the user to log in again or to build an integrated sign-in experience If the user hasn't signed in with the live ID account and that therefore you can build an application that for example downloads the pictures from your skydryff. Let's take a look at that So let's open the photo sky application Yeah, why am I asking that now For some reason today it opens every solution in the full blown visual studio while normally it Uses the express edition. I don't know anyway So let's take a look and hopefully this will work So it's going to go out take my Unlocked in with my live ID and it's going to take a look at my skydryff and download all the pictures that I have On there as I have this is test account. So there's only a couple of pictures Uh, so let's skip on the breakpoints And I'll do them in just a second should that There we go. Everything happens asynchronously. So you don't know what is going to happen first. There we go There the pictures are coming. I hadn't didn't have to log in again because I'm already logged into windows I have a single sign-on experience. I'm logged into windows with my live ID. I should say my microsoft account now Which sounds a bit weird, but anyway Okay, so let's take a look. You saw the pictures popping up there and notice here the uh, msn guy Or how is this thing called? Uh, it's also up there. Basically. This is a sign-in button and that's the thing that triggers everything Let's let's perhaps restart it and you can see everything that starts happening at some point It's going to get the session from uh from that sign-in button if you install the live sdk on the And the group items page Somewhere over here. There is There is a sign-in button and that sign-in button you have to put on your On your ui it allows the user to sign in if he hasn't signed in that could be that user is using a local account And then the session changed event is going to be triggered It's automatically triggered if the application gets in the live sdk gets in the Uh, the the live sorry the the microsoft account from the local device and that will trigger a session changed That session changed will automatically Get in the uh, let's let's go back to the breakpoint and then it's gonna you're gonna see what happened So basically what i'm doing is I have in my uh application xaml I have a property called session which is of type live connect Live connect session and that session is going to be set because of the fact that in the In the session changed event here, I'm setting it get it in I will see that the live connection status is connected. I get in the session. I'm going to save that Okay, what I'm then going to do is I'm going to download all the images from uh from my I am from my uh sky drive in this case. 
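The walkthrough that follows boils down to roughly this. The Live SDK types (SignInButton, LiveConnectClient, LiveOperationResult) and the me/albums paths are how I remember the Windows 8 Live SDK, and the dynamic data and id properties simply mirror what the talk describes, so treat every detail here as an assumption.

```csharp
using Microsoft.Live;            // Live SDK (assumed namespaces)
using Microsoft.Live.Controls;

public partial class GroupedItemsPage
{
    private LiveConnectSession session;

    // Wired to the SignInButton placed in the page's XAML.
    private async void SignInButton_SessionChanged(object sender,
                                                   LiveConnectSessionChangedEventArgs e)
    {
        if (e.Status != LiveConnectSessionStatus.Connected) return;
        session = e.Session;                       // single sign-on with the Microsoft account

        var client = new LiveConnectClient(session);
        LiveOperationResult albumsResult = await client.GetAsync("me/albums");

        dynamic albums = albumsResult.Result;      // Result is exposed as a dynamic JSON object
        foreach (dynamic album in albums.data)
        {
            // One extra round trip per album to list the pictures inside it.
            LiveOperationResult filesResult = await client.GetAsync(album.id + "/files");
            // ...turn filesResult.Result.data into view-model items for the UI
        }
    }
}
```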
So I'm going to use something called a LiveConnectClient, passing in the session, and then there is some basic query syntax that you have to use: you say to the client, get me all my albums. That results in a LiveOperationResult, and then things start getting a bit more complicated. You have to ask that LiveOperationResult for its Result, which returns you a dynamic, and on that dynamic you can start looping, in this case on the data property, and that gets me all the albums. For each of those albums I then have to go back out to Live and ask for the pictures that are in it. It's still not really easy to work with, but it works, and the most important thing to take away is of course that it is possible to build a single sign-on experience with the Microsoft account of the logged-in user.

All right, I think we're way behind, but anyway, let's take a look at background apps. Background tasks are a bit of a hot topic in Windows 8, I think, since by default we can only run one application at the same time. You hear Microsoft talk a lot about background apps and lock screen apps. What is the difference, is there a difference, and how do we get there? By default only one application can run, you know that by now, and only that one application receives all the resources of the system, because it is the main application that's running. So it looks like no app can do anything in the background. That's not really true, however, because there is a background task API in Windows 8, in WinRT, that allows us to run code in the background, exactly like Windows Phone. It is not the same code that runs in the foreground; it is not your app that keeps on doing stuff in the background. It's a separate part of your app that registers itself with the system and says, well, I want to do something in the background. It's more or less the same thing as Windows Phone again, with some differences: for example, here you can add more than one background task (or background agent, let's call it), while Windows Phone can only add one.

Now, how does this actually work? Basically, what you need to create is a separate project that contains the background task, and that background task is going to be registered using a class called the BackgroundTaskBuilder. The BackgroundTaskBuilder can register a class that implements the IBackgroundTask interface, and that IBackgroundTask interface has a method called Run, and that becomes the code that will be called when the background task executes. That's how it works: from your main app you use the BackgroundTaskBuilder to register a class that implements the IBackgroundTask interface, and then that code can run in the background. When it can run, I'll come back to in just a second. This code will be managed by an external process which is called the background task host; it can also happen in-process, but in-process is a specific case. Normally it is executed by a specific executable, the BackgroundTaskHost.exe. Now, when is this useful?
Well in many cases, I think uh things like voice over IP applications iam message messaging applications Mail applications they need to do some checking. Is there already a new mail on the server? Yes, no, they need to check that every time to keep you updated Now when does this code run? Well this code can run periodically or in response to system events now that periodically is basically also a very specific case Now when does the code run? In windows 8 in metro apps the background task runs all the time It doesn't matter if the application is running if it is suspended or if it is if it is terminated The background task runs completely independent from the main application It doesn't bother taking a look at if the main application is ready is finished is running or not it runs You register it it runs it keeps on running Now when does it run and that's an important one? now basically an application So a background task can only run when a trigger is triggering the execution of it Now what are the triggers that exist? Well you see a list here You see a very important one at the very top which is the time trigger But sadly it has a little star next to it Anyways, when there's a star there's always some little letters and that says some little small text that says it requires lock permission Now what is lock permission? Coming back to that in just a second. Basically that means that it cannot not every app can build Background tasks that run every now and then that run based on a time trigger time trigger says after 30 minutes or so That's not possible for every app to do that There's some other ones as you can see as a mass receive network state change things that might or might not seem useful And then the execution can also be linked to a condition So the trigger happens, but you can also optionally specify a condition that has to be true for the trigger to execute and therefore your code execute For example, that can be internet available in and out available user present user not present But is he logged on or is he away? Okay, might seem a bit complicated. It's complicated for me as well um So I already said this, okay How does this actually work so your main app? says well, I have a background task to register so when it first runs It says well, I will register this task using the background task builder with this and this and this trigger At some point so that it does that with an external service with the system at some point that trigger will fire And then the system will say well now we execute your code. For example, the internet is available trigger Well, then it's going to execute the the your code when there is internet available again or network state change, I should say Okay Remember in the slide with all the triggers we had a star next to the time trigger To use a time trigger your application has to be a lock screen application So it's not enough that your application your application can simply say well, I'll run to run some code in the background every app can do that But to become a lock screen app is Requires that the user says well, yeah, I want to allow this application on my lock screen Therefore saying to the system while this application is important Well, I'm going to pin it to my lock screen and then that is that is a trigger for the system Not another trigger like this, but the trigger to say well this application is important. It can do more on the system It has lock screen permission. Therefore, it can do more on the system. 
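As a small aside before going on, a time-based trigger plus a condition would be declared roughly like this. The task name and entry point are invented, and the 15-minute value is the minimum interval as far as I recall; remember that a TimeTrigger only actually fires once the app has been allowed onto the lock screen.

```csharp
using Windows.ApplicationModel.Background;

public static class TimeTriggerRegistration
{
    public static BackgroundTaskRegistration Register()
    {
        var builder = new BackgroundTaskBuilder
        {
            Name = "MailCheckTask",                 // invented name
            TaskEntryPoint = "Tasks.MailCheckTask"  // invented, namespace-qualified entry point
        };

        // Time triggers require lock screen permission; 15 minutes is the smallest allowed interval.
        builder.SetTrigger(new TimeTrigger(15, false));

        // Optional condition: only run when the internet is actually available at trigger time.
        builder.AddCondition(new SystemCondition(SystemConditionType.InternetAvailable));

        return builder.Register();
    }
}
```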
It can do background tasks based on time So that run every five minutes every 10 minutes checking for a new map, for example So that is the difference There's a difference between a background task. Everyone can create as many background tasks as they want But they cannot all be linked to time triggers other triggers as well. But the time thing is I think the most important one So when they are on the lock screen and now I'm going to do something very dangerous I'm going to go to the lock screen So here you see Down there at the bottom. Those are the apps that are available on the lock screen Applications, there's a maximum of seven applications that are on the lock screen that can do stuff based on time I'll login again. There we go When user places the application over there the application can Show a batch update like it like a push notification Sorry like a tile update. It can show a tile. Let me rephrase it. You have on a tile update You have the regular tile update and you have batch updates. It can show a batch update on the lock screen And then the user can also select one application that he says, well, that's a really important one That one can also show a tile update on the lock screen The user says well, it's so important for me that it's a very important one It's so important for me that it should show me real-time information What you need to do for that is you need to specify, of course, the image that it has to show that's quite normal But in any case The user remains in control There we go It is in the configuration panel. There is a lock screen window or a lock screen pane Where the user can specify which apps he wants to allow on his lock screen Therefore saying this app can do more therefore saying this application can access Or can execute based on time And there's one down there at the bottom and instead of showing it on the slide. Let me go to the settings and the lock screen So here you see that these are apps that are Able of showing updates on the lock screen and then from these I can also show an application that has Um that is going to be allowed to show a tile update on the lock screen Therefore saying well, this can even do more but in terms of of being able to run in time Only the apps that are pinned to the lock screen can execute a background task Uh on based on on a time basis Let me quickly show that to you and then we'll have to round up because we're already over time. I'm sorry Talked a bit too much So let's first take a look at background tasks. How do we create a regular background task? So if you take a look at the solution manager, you see that we have two Two projects one that has a class sample background task That implements the I background task and it has a method run and we're going to implement that and that code is going to be the code that is executed so, um This this background task is already registered. So I'm going to register now a background task Unregister it and now I need to disable my internet. Don't need it anymore today What did I disable here? I think I disabled my bluetooth I know I have wired connection So now the network state changed there we go. So this background task is triggered based on the network state Um, it's not available. I think it is and then I click it in again and there you go You see it starts running again So there we go. It is in this case running So this application is in communicating with the foreground. 
It's it can do that if you want that so it can run whether or not the application is in the foreground or not So this application could never have used a time trigger So now we have so this is the background code that the code that just starts running The run method is is executing and in this case Uh, normally, uh, that's an important thing as well to notice here. Normally this code would just run top to bottom But I don't want this to run and then stop doing its thing. So in this instead what I do is I ask for a deferral The deferral basically says well instead of going through wait until I give you that referral back deferral back And then I instantiate the periodic timer and it starts running and it has a periodic timer callback on the timer elapsed And that updates the timer progress and at some point when the timer is complete Then we say well now it's okay to shut me down deferral is complete That's the background code the ui code has to register Let me scroll up Uh, this one I'm registering a background task Passing in the name of the class has to run passing in the name of the task to find it afterwards Passing in the internet not available trigger and don't not passing in a condition Let's go to this one. This method is actually doing the registration of the background task by saying background task builder I'm going to instantiate you pass in the name Of the task passing what has to execute set a trigger that was the internet available trigger Not passing in a condition in this case and then I'm creating a background task registration And that is in that case going to register the the builder. So it's sorry. It's going to Say builder dot register and that's going to return me a background task registration Which I can use to do stuff with my ui. I can sort the updating of the ui I'm going to show you the rest. I'm going to quickly show you the lock screen apps And then we're going to round up So lock screen apps For some reason it changed default to visual studio fool. Anyway, so, um, let's run this one as well So this app One thing one second. I need to update the name of the app because it's filtering it on name I probably already denied it lock screen access. So now from the code I am going to request lock screen access There is no Action taken yet and the user it's not on lock screen. So if this would now try to do a time trick, which would fail it It would never be executed So now for my app, I'm going to say well, dear user, can I please go on the lock screen? 
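Stepping back for a moment, the background-task code just walked through amounts to roughly the following. The SampleBackgroundTask name comes from the demo; the deferral, ThreadPoolTimer and SystemTrigger details are my reconstruction of the WinRT API, so treat this as a sketch rather than the actual sample.

```csharp
using System;
using Windows.ApplicationModel.Background;
using Windows.System.Threading;

// Lives in a separate Windows Runtime Component project and is declared in the app manifest.
public sealed class SampleBackgroundTask : IBackgroundTask
{
    public void Run(IBackgroundTaskInstance taskInstance)
    {
        // Without a deferral the task counts as finished the moment Run returns.
        BackgroundTaskDeferral deferral = taskInstance.GetDeferral();

        uint progress = 0;
        ThreadPoolTimer.CreatePeriodicTimer(timer =>
        {
            progress += 10;
            taskInstance.Progress = progress;      // surfaced to the foreground app
            if (progress >= 100)
            {
                timer.Cancel();
                deferral.Complete();               // now the host may tear the task down
            }
        }, TimeSpan.FromSeconds(1));
    }
}

// In the foreground app: register the task and listen for progress and completion.
public static class TaskRegistration
{
    public static BackgroundTaskRegistration RegisterSampleTask()
    {
        var builder = new BackgroundTaskBuilder
        {
            Name = "SampleBackgroundTask",
            TaskEntryPoint = "Tasks.SampleBackgroundTask"   // assumed namespace
        };
        builder.SetTrigger(new SystemTrigger(SystemTriggerType.NetworkStateChange, false));

        BackgroundTaskRegistration registration = builder.Register();
        registration.Progress += (task, args) => { /* args.Progress drives the UI */ };
        registration.Completed += (task, args) => { /* refresh the UI */ };
        return registration;
    }
}
```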
I am your mail app, I should go on the lock screen; and the user says, okay, allow this. Let's now go back to the lock screen, and you see it was already there: lock screen application sample seven. I've also specified this app as the app that I want to get the most detailed updates from. Now let's go back to the app, and I'm going to send a tile notification and a badge notification: the badge notification is going to show at the bottom of the lock screen, and the tile notification at the very top. Let's log off again, and you see that there is now a tile notification next to the clock and a badge notification shown at the very bottom. This app is now registered with a task that will run some code every now and then, because it can use a time trigger, and it will be able to receive updates and push notifications and show them on the lock screen.

So, I'm very sorry, but I think you're going to throw us out of the room if I keep on going, unless you want to do some other things. Should we stop? I can stay until eight, I have nothing to do. Someone has a train to catch, and that's a valid excuse, so let's round up with the bag of tricks, which is, I think, five code snippets that sounded interesting to me. You can download every slide and every demo from my blog, so don't worry, you get them all for free, just by being here. Your presence has made me so happy.

For example, from code it is possible to change the lock screen image. I'm going to use a file picker, select an image, and set that as my lock screen image. If I now log off, you see that it's Christmas already. It's also possible to check from code, and that might be interesting in some applications, which input methods are available. This laptop, for example, only has a keyboard and a mouse, and it says touch is not present; I wrote some code around it, but the API is capable of checking that itself. It's also possible to detect the devices that are connected. That does give you some slightly weird results in this case, but you can check the connected devices, the touchpad, the mouse, the camera and so on. And finally, it's possible to detect the logged-on user, which is very appropriate in this case, because the logged-on user says it's time to go home.

So I'm not going to go over the code. Download everything from my blog and take a look at it at your own pace. I hope you enjoyed it; I'm sorry we couldn't do everything, but it was planned not to do everything. Thanks for being here, and I hope you enjoy the rest of the conference. Thanks!
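For reference, the lock screen pieces and a couple of the bag-of-tricks calls mentioned above look roughly like this. The badge XML handling, LockScreen.SetImageFileAsync and the capability checks are based on my own reading of the WinRT APIs rather than the downloadable demos, so verify against those.

```csharp
using System;
using System.Threading.Tasks;
using Windows.ApplicationModel.Background;
using Windows.Data.Xml.Dom;
using Windows.Devices.Input;
using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.System.UserProfile;
using Windows.UI.Notifications;

public static class LockScreenTricks
{
    // Ask the user to allow the app on the lock screen (needed before time triggers will fire).
    public static async Task<BackgroundAccessStatus> RequestLockScreenAccessAsync()
    {
        return await BackgroundExecutionManager.RequestAccessAsync();
    }

    // Show a numeric badge, which also appears next to the app's lock screen icon.
    public static void ShowBadge(int value)
    {
        XmlDocument badgeXml = BadgeUpdateManager.GetTemplateContent(BadgeTemplateType.BadgeNumber);
        var badgeElement = (XmlElement)badgeXml.SelectSingleNode("/badge");
        badgeElement.SetAttribute("value", value.ToString());
        BadgeUpdateManager.CreateBadgeUpdaterForApplication().Update(new BadgeNotification(badgeXml));
    }

    // Bag of tricks: set the lock screen image from a user-picked file.
    public static async Task ChangeLockScreenImageAsync()
    {
        var picker = new FileOpenPicker { SuggestedStartLocation = PickerLocationId.PicturesLibrary };
        picker.FileTypeFilter.Add(".jpg");
        picker.FileTypeFilter.Add(".png");
        StorageFile file = await picker.PickSingleFileAsync();
        if (file != null)
            await LockScreen.SetImageFileAsync(file);
    }

    // Bag of tricks: detect input capabilities and the logged-on user's display name.
    public static async Task<string> DescribeMachineAsync()
    {
        bool touch = new TouchCapabilities().TouchPresent > 0;
        string user = await UserInformation.GetDisplayNameAsync();
        return string.Format("Touch present: {0}, logged-on user: {1}", touch, user);
    }
}
```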
Windows Runtime (aka WinRT) is the new kid on the block for developers. It forms the foundation of the development of Windows 8 Metro style applications. But what exactly is WinRT and what does it enable developers to do? In this session, we’ll take a look at how WinRT is opening up Windows 8 to developers. We will cover several, more advanced topics using WinRT, including MVVM, file access, background tasks, push notifications, the async pattern and more.
10.5446/51017 (DOI)
Okay, I guess we can start. Hello everybody, my name is Gojko and I'm here to talk to you about busting the behavior-driven development myths today. This is a bit of a surprise talk. It was scheduled at the last minute, so it might be a bit improvised, but we'll work with that. So I guess the thing that I really want to talk about today are lots of misunderstandings that people often have about behavior-driven development, and maybe five or six years ago, most people I talked to at conferences wouldn't have even heard about BDD or ATDD or spec by example. Most people seem to have heard about that now, so let's check people in the room who's heard of BDD before. Okay, so honestly, how many of you have a good implementation of that? One, two, three. So that's quite typical, and that's what I mean by saying there's quite a lot of misunderstanding going on, and one of the key problems with that is that there's no official record or statement what BDD is. Dan North, who's kind of speaking in another room at this point, is kind of famous for just inventing ideas and then forgetting about them, and I think he has about eight or nine books in progress, or at least he says he has eight or nine books in progress, and he's never going to finish them. So kind of the only official record of what BDD is is Wikipedia, and when you look at Wikipedia, Wikipedia is not a really reliable source of information, and we've known that for years, for example, even Harold Hoggar has said a thousand years ago that 35% of Wikipedia pages are wrong. So, and the problem kind of if our knowledge is based on Wikipedia is we become kind of to have kind of the American education system. There's something called home schooling in the States, and there were two girls a couple of months ago trying to explain this to me at the conference. They were from Texas, which is kind of known for the high standards of education, and they were both home schooled. So they were trying to tell me what this kind of home schooling thing is, and then one of them went to and we started talking about what they do, what I do, and I said, well, you know, I'm traveling to Norway next week. I was attending something else here, and one of them kind of came back with drinks and said, oh, Norway, where's that? And the other one turns out, the Norway is the capital of Scandinavia. So the problem with using internet as a reliable source of information really is that we get into myths like this. And I said the only definition of BDD around there that you can find is on Wikipedia, done north from Agile Testing Exchange in 2010, who said that basically BDD is the second generation outside in pool-based multiple-stakehold kind of all the possible two-word hyphenated acronyms he could find. And the problem with this is this is probably the only sentence in English that I can pass through a Klingon translator without losing any meaning. So that's kind of the issue with the whole BDD thing. And then we have this other problem of the second source of reliable information on the internet being Twitter. And I don't know how many of you follow the BDD hashtag on Twitter, but those of you that do that probably know that there's not one kind of BDD. There's actually two types of BDD. And the more popular type of BDD is the one that I find very confusing. You find people writing about some disorders in social anxiety. There's people who talk about morbidities and things that, you know, that can relate to a lot of software projects I was in. 
But essentially, what these guys are talking about is something called the body dysmorphic disorder. And then you get this kind of whole confusion. You know, if we can trust Wikipedia, Wikipedia says that the body dysmorphic disorder is some kind of a mental illness where people are overly obsessed with their physical look and feel, and that's manifested as a preoccupation with physical defects. So, you know, the less popular type of BDD is what we know about BDD. And these people, I really feel for them. You know, these are people who suffer very badly from their mental image of themselves. They are very conscious about what they eat, how they look and things that, and what do we do? We interject, we jump into this conversation and we give them dieting advice kind of. So, if they're tired of cucumber, maybe they can try spinach or something like that. So, this is kind of, you know, there's a whole world of confusion going out there. And, you know, that's the problem we need to start solving really, because this is creating lots of idiotic statements there. And there's, one thing that's really interesting is that if you go back and if you look at this Dan's definition of BDD, there's just too much information there. And one thing that I really, one website I really like is two things about anything. Kind of, there's a story about an economics professor who was trying to explain what he does in a toilet, and the person who he was talking to was very, very confused. And he said, okay, give me two things about that. And they created this thing where you should be able to explain anything by saying two things. It's a very good, very good website. They kind of, two things about economy ended up being buy cheap and sell expensively or something like that. And there's, you know, my favorite two things about something is two things about world domination, which is basically divide and conquer and never invade Russia. And my second favorite, two things about anything is two things about web programming that's basically control C, control V. So I think, you know, in order to prevent all this confusion, we need to be able to define BDD as two things about BDD, not five million things about BDD. So, and this is where things start to get really interesting, and we kind of, there's a lot of confusion and myths going on there. So one of the two things rules that I really like is Bill Gates, two things about automation or two, two essential rules about automation, because kind of a lot of the BDD discussion there is on tools, cucumber and spinach, or fitness and spec flow and concordion. If you look at the whole history of things, kind of fit came out and then fitness came out and people said, oh, fit is shit. Fitness is the best thing in the world. And then two years later, fitness was shit and cucumber is the best thing in the world or concordion. And then we continuously argue about tools. And Bill Gates said that basically there are two important things about automation. One is that if you automate an effective process, you will increase the effectiveness. But the second thing is probably more important for us. And that's kind of if you automate an ineffective process, you increase ineffectiveness. So what I constantly see lots of teams do is install a tool, have a horrible, horrible process, install a tool and automate the process. And then the process hurts even more. It hurts more frequently and it's even more difficult to sustain. 
And lots of you, I did this kind of the first time when we installed fitness in 2005, we did this thing, because business analysts didn't want to talk to us, test us, we didn't want to talk to test us. So what we did is we installed fitness and we rewrote all our unit tests in fitness. And then we said, okay, we have acceptance test-driven development now. And of course, nothing happened. It was even worse because we had a bunch of unit tests in a tool that didn't support refactoring and nobody wanted to read that. So kind of we need to start moving away from this idea that we can install a tool and it can solve all our problems. And I think one of the real aha moments for me was in 2008 at the Big Agile conference in Toronto when Henrik Nieberg did a keynote on the top 10 mistakes with Scrum. And he said the number one mistake people do is they think they can buy Scrum as a tool. And the funny thing is that outside of that room, there were at least 10 companies selling that. So we are obsessed with tools as an industry and that's something psychological we have to deal with. So there's a couple of myths that have arised as a result of this misinformation on the internet about BDD, ATDD and things like that. We can look at it as if we try to rephrase the other more popular BDD thing. They said BDD is a mental illness with people concerned about body image and perceived defects. If you just kind of rephrase that a bit, you get BDD is a type of mental illness where people are overly concerned with user interface details and perceived defects of that. So I worked with a team a few months ago where they said well we started doing BDD and as a result of that our business users don't want to talk to us anymore. It's quite a strange thing to say because the whole purpose even if you read the misinformed stuff is about better communication. So if you do this and your business people don't want to talk to you, that's kind of telling you that you're doing it wrong. And when we talk to them about what they actually did and why the business people don't want to talk to them, one of the business guys said well we don't have time for this. I spent three weeks telling these people all the given when dense for the left-hand menu items. So they spent three weeks with their sales people looking at all the possible ways of ordering left-hand menu items and doing given when dense and automating that which is completely insane. And we have to kind of prevent this. We have to come up with better ways of explaining it to prevent these things. So kind of lots of times people just install cucumber or install fitness and expect it to magically work. And very often that leads to a team suicide. One of the teams I talked to for my book worked for a big bank in France and they started using BDD whatever that meant for them. About six months later they had 2,000 tests automated with cucumber and a 10-minute change in the code was two weeks of test maintenance. At this point their iteration is no longer effectively two weeks their iteration is effectively 10 minutes. They have paralyzed themselves by doing something that seems like a good idea. So we need to be able to explain this much better and not jump into these horrible misuses of tools. So the first myth that's really there is installation. So I have invented this word for you so you can go back to your company tomorrow and say we have installation. And installation as a noun is a belief that process problems can be solved by a tool. 
This never ever works because process problems are still process problems and going back to Bill Gates's ideas about two things about automation. If we have a process problem and we install a tool to formalize the process it's only going to hurt even more. There's a bunch of Twitter quotes and blog posts and Wikipedia pages about this. One of my favorite ones is from Unboxed Consulting. This says that even though cucumber has lots of predefined phrases to define these things if you want to describe anything more complex than I should see a yellow box it gets really, really tricky. So what this guy is basically saying that I've installed cucumber I have no idea what my user stories are. I have no idea what my business users want and software development is really hard. But let's blame cucumber because that's the tool I installed. So cucumber actually came with a set of web steps that was really easy to describe. I want a yellow box and people didn't move away from that for years and then asked like basically deleted that and there were a lot of complaints about where's my yellow box now. So another thing is that I really like is the fatal flaws of fitness. So again there's nothing too related about this. People constantly complain about process problems but looking at tools. And what this guy says is basically with fitness it turns out to be incredibly hard to get business involvement for the wiki-based integration tests. Even as a sentence this doesn't make sense. So the other thing he says that kind of programming in fitness is a steep curve and then there's exceptions, strong instructors disappearing to oblivion which I have no idea what he's talking about. Kind of essentially again this person is saying well I, business people don't want to talk to me anymore. I've installed fitness they still don't want to talk to me. Something's wrong. Let's blame fitness. And kind of the real problem here is that we approach everything as kind of here's a tool, let's use a tool and this tool is a testing tool so let's put all our tests into that. I recently worked with a big bank in London where we've talked about all these ideas. They loved it and their testing director said yes good we have our own home-built tool for testing. We will use it for that. And the home-built tool for testing is something as horrible, horrible, horrible, horrible, horrible, written in a way that even their people don't want to use it and then two weeks later they complained how their business people don't want to use it as well. Of course. So kind of Brian Marek came up with this idea of agile testing quadrants to explain that there are really different types of tests and different types of information we need on an agile project. I think this is true for any project not just for agile projects but it's a good idea to look at it like this. He says basically there are tests that support the team from a technical perspective while we're developing software. That's the part in the bottom right corner or your bottom left. And there are tests that support us from a business perspective while we're developing. There are other tests that are supporting us while we are evaluating the product later and the real kind of strength of cucumber fitness and all the other tools is really in this upper left corner of yours. It's kind of supporting the team from a business perspective while they're developing a product to understand whether we're developing the right product or not. 
The key benefit of fitness and cucumber and all the other tools is that when a test fails you can go and show it to a business person and they can tell you whether it's right or wrong. So if you need that use those tools. If you don't need that don't use those tools. Use something else. One example that's really interesting is I worked with a financial investment fund in London and we were rewriting this horrible legacy system and all the requirements we had is it has to work the same as the old system does. And we had to reverse engineer this horrible, horrible PLC code to try and figure out what the business rules are and then kind of re-implement that in Java. We had no idea whether we're just making it up or whether this is really what we need to do. So that's the point where you need feedback from a business person. If all your business people can read end unit, code it in end unit. If your business people cannot read end unit you probably should put it in a form that business people can read. So what we did is we reversed engineered these PLC equals stuff. We tried to illustrate it using examples and put it into fitness and then created a nice little document and showed it to our business people and said well this is what we think the old system does. Is this thing correct? Nine out of ten times they said yes. There was one case where the guy said you guys are just developers. You have no idea how our company works. He said okay that's interesting. Tell us about how the company works. For the next 15 minutes he told us a fairy tale about foreign exchange options and things like that and then he said something I'll never forget. He said if you implemented the stuff the way you wanted not the way the old system works we could get sued. At that point I opened my laptop I showed him a fitness test clicked on test. It went green and I said this is automated against your existing system and he started panicking. He started phoning people shouting at them and we no longer had this problem it has to work as the old system does. So that is the real strength of these tools. It's perfectly useless to do stuff like I want a yellow box. It's perfectly useless to do stuff like click click click click click CSS CSS CSS DOM DOM DOM because we can't really get any good feedback from business people on that. Those are useful tests but those tests don't need to belong in cucumber of fitness or spec flow or any of these other tools because you're not getting any benefit out of that. So the second big myth is business testing. So you know how this works now. What does this mean? Business testing what about business testing? Is anybody have an idea? Business people write the test absolutely. So the second myth is that business users should write acceptance tests. And this is what I see all the time. One of the most common complaints against cucumber of fitness or anything like that is that business people are non-programmer never write the tests. Now even as a sentence this doesn't make sense. So kind of and people say okay you know we couldn't make video work because our business people don't want to write tests and we talk to business people well why didn't you want to why didn't you want to write tests? They say well we pay testers for that. 
Testers should go and write tests and then when you ask testers why they don't write the test they say well the system is not testable developers should write them and developers say well you know we wrote all our tests and they passed their green and everything is fine. So kind of the the issue here is really that we don't really think about what we want with these things. If we give business people to write our tests we're only getting their opinion about how the system should work. And this is no better than business people writing requirements in Word or business people putting requirements in a wiki or index cards or anything else because we're only getting one side of the story. The key thing about BDD for me is exploring what the system really needs to do through examples. One of the key problems that BDD solves is that people do not have enough time to think about things. With short iterations business people don't have enough time to think things through. Analysts don't have enough time to analyze things. So kind of we constantly get half baked requirements. There's an argument that even kind of when people had six months to analyze things we got half baked requirements but we get half baked requirements they blow up in the middle of an iteration and then people say oh how stupid could you be why did you do that. So one example that I had a few years ago is we worked on this poker integration. Online poker is an interesting business for two main reasons. Reason number one is it is illegal in the United States but 99.9% of the players are from the US. That opens up some very interesting domain problems. The second interesting thing is that online poker is really a community game. So starting a new poker site is like starting Google Plus. Nobody wants to use it because there's nobody there. So European companies where online poker is perfectly legal just integrate with an existing network. So we're working on this integration and we had wireframe diagrams. We had kind of requirements and one of the most important things was because poker chips were a dollar cents. Most of the people are Americans so one chip is one cent. We should round to two decimals when we convert money from pounds or euros or crooner into cents. And we had about 80 pages of documentation about some XML over HTTP that wasn't really XML and wasn't really HTTP that these guys invented. So we wrote lots and lots and lots of tests around that. We had five or six hundred tests for that. We had exploratory testing, BDD tests and everything. We wrote that and we asked the business users to give us key scenarios for the kind of business cases. Everything passed. Everybody was happy. Two weeks later somebody stole 20,000 pounds through that screen and people started blaming. Business people said that developers were idiots. Developers were blaming testers for being morons. Testers were blaming everybody but nobody listened to them. And kind of the big problem was that we had only one side of the story and nobody really thought about any edge cases. And the problem is that in this particular situation the exchange rate was something like 0.54 pounds per dollar. So somebody took one cent converted into 0.0054 pounds which was one penny. Then took that one penny, sent it back into two cents, took each of these two cents individually converted into two pennies, took each of these two pennies individually converted into cents and wrote a robot that over a week accumulated 20,000 pounds. 
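The arithmetic of that exploit is worth spelling out. A minimal reconstruction, assuming a 0.54 pounds-per-dollar rate and a naive round-to-two-decimals on every conversion; the real system obviously looked nothing like this.

```csharp
using System;

class RoundTripExploit
{
    static void Main()
    {
        const decimal rate = 0.54m;                    // pounds per dollar

        decimal cents = 0.01m;                         // start with one cent
        decimal pounds = Math.Round(cents * rate, 2);  // 0.0054 rounds up to 0.01 GBP (one penny)
        decimal back = Math.Round(pounds / rate, 2);   // 0.0185... rounds up to 0.02 USD (two cents)

        Console.WriteLine("{0} USD -> {1} GBP -> {2} USD", cents, pounds, back);
        // Each full round trip roughly doubles the amount; run it in a robot all week
        // and one cent turns into the 20,000 pounds from the story.
    }
}
```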
And when the blame game started kind of everybody was talking about how stupid could you be. It's forex rounding. And one of the developers was really smart. He sent a Wikipedia page about rounding. He said look rounding. Round to two decimals. 0.0054 is 0.001. What do you want? And kind of this concept of rounding where people had different ideas in the ad was really what created the problem. And we had only one side of the story. So BDD is supposed to solve these problems but if we don't collaborate, if we don't get testers and developers to actually contribute to specs and get business people a chance to think about it's a bit better, we're not really getting the benefits of that. So even if your business users want to write all the test cases on your their own, you shouldn't allow them to do that because then we're not really bad, you're not going to benefit from the conversation. So the second big myth is acceptogration. Third big myth. So you know how this works. What is this? Okay. So acceptogration is another word I've invented for you so you can go back to your company and say we suffer from acceptogration. It is the belief that tests are either unit tests or acceptance integration, BDD, Kraken thing that does everything. Lots of companies are working with kind of when you look at what they do with cucumber or fitness, you just kind of always end trend. It's all the web services and databases anybody's ever invented that are tested with this one test that talks about yellow boxes and clicks. And the problem with that is kind of you can see this guy saying that you know after all unit testing is about isolating a class, testing a set of classes called by the other name acceptance test. So I think as developers, we have a pretty good idea of what unit tests are. We have Uncle Bob who's taught us about what unit tests are in this room. So lots of good books, lots of good stuff. And we've conquered the idea of a unit test, but we kind of afraid of everything else. Everything else is that kind of outside of our world box. It's kind of at the same time a integration acceptor security performance and everything else. And this is a big problem because we just bundle stuff in when we have stuff that runs for seven weeks and when it breaks, it's horribly broken and nobody can fix it. And we need to start moving away from this idea. So again, going back to Agile testing quadrants, there's lots of different types of tests and lots of different type of feedback we need. And when we really start thinking about, well, what are we de-risking with this? If we are using cucumber fitness or spec flow test to de-risk the business perspective, then the question becomes where is the risk? And if the risk is really in seven web services talking to each other, from a business perspective, then by all means tested there. If the risk is in a single class, why not test it there? Even better when kind of BDD is done properly or ATDD is done properly, developers are kind of responsible for automation. And then we make ourselves change the context. Nat Price and Steve Freeman wrote about the hexagonal architecture reports an adaptors pattern in their book, Growing Object Oriented Software. The first mention of this, the Tino is from Alistair Coburn about 10 years ago. It's probably older, but I don't know the original idea. Eric Evans writes about something similar in domain driven design when he writes about anti-corruption layers and context boundaries. 
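A minimal sketch of the idea those authors describe, which is elaborated just below. Every name here is invented; the point is only that the business rule lives in a plain class that business-facing examples can exercise directly, while the database adapter gets its own handful of integration tests.

```csharp
using System;

// Port: the narrow interface the domain needs from the outside world.
public interface IAccountRepository
{
    decimal GetBalanceInCents(string accountId);
    void SetBalanceInCents(string accountId, decimal cents);
}

// Pure domain logic: no database, no web service, fast to exercise with business examples.
public class ChipConverter
{
    public decimal CentsToPounds(decimal cents, decimal rate)
    {
        return Math.Round(cents * rate, 2);
    }
}

// Adapter: the infrastructure detail, covered by a few focused integration tests.
public class SqlAccountRepository : IAccountRepository
{
    public decimal GetBalanceInCents(string accountId)
    {
        // SELECT balance FROM accounts WHERE id = @accountId ...
        throw new NotImplementedException();
    }

    public void SetBalanceInCents(string accountId, decimal cents)
    {
        // UPDATE accounts SET balance = @cents WHERE id = @accountId ...
        throw new NotImplementedException();
    }
}
```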
So there's lots of different ideas explaining the same thing, but basically the idea is model your world, model the software so that you capture the business essence of it inside a nice clean model that's decoupled from infrastructure, that's decoupled from architecture, that's decoupled from technical stuff, so that you decouple these different types of risks. And then outside of that, connect that to infrastructure in again isolated context, isolated modules. So we can then start thinking about, okay, if I need to do an integration test, that's really about how my system talks to a database. Well, I need three tests, one that selects this data, one that updates it, one that deletes it. I don't need seven million business tests to verify that. I can then use 500 other tests disconnected from the database to run quickly on an isolated set of classes. So kind of, I think this is one of the most important things for implementing any software properly, not just related to BDD. There's a fantastic rule called the Sturgeon's Law. I don't know how many of you heard of that. Sturgeon's Law is basically that 90% of anything is going to be shit. Sturgeon was a sci-fi writer and he was being interviewed by a journalist and the journalist asked him, if you know, if you say your sci-fi is so good, why is 90% of sci-fi out there shit? And Sturgeon said, well, that's kind of positioning the positioning it the wrong way. 90% of sci-fi is shit because 90% of all writing is shit. So sci-fi is just a reflection of that. And kind of, when we start thinking about this, okay, you know, 90% of stuff out there is going to be crap, but we can start remodeling our world differently. We can start pulling stuff from legacy systems. We can start pulling stuff from all this horrible crap around us, putting it into a nice small context that's clean, that's the 10% that's good. And focus risk there. If we can concentrate our tests there, then on the risking that, we can improve it significantly and leave the 90% of stuff around us that doesn't have that much risk. So kind of myth number four that I want to talk about is relation. And relation is a myth that basically any single role should do BDD on its own. And there's a lot of confusion about this, especially people who still have a testing approach to this. And I like the name BDD more than acceptance test driven development. They even like the name specification, by example, much more than BDD. Because I think that kind of is much more direct to the point. But especially people approaching this from a testing perspective often say, okay, acceptance testing, testers going to do that. Or kind of, you know, here's this for our business analysts going to do that to developers that can't really talk to business people or testers do that on their own like we did in 2005. So I had a client where they were complaining that they've tried the ATDD and it didn't really work for them. They're spending too much time maintaining these things and they're not getting any value. And they wanted to know why that is. So when we started looking at what they actually do is one of their directors heard about ATDD and he said, okay, we do ATDD. And how will we do ATDD? Well, there's this junior developer over there. He's going to do everything. And he said, well, you know, I'm not a tester. I can't write tests. So what they did is they had a contract tester who was the least paid person in the company and knew nothing about the domain because he was external. He wrote all their tests. 
So kind of, BDD is really about exploring the specifications, exploring the domain. And these people had the least qualified, the least paid person who was actually external and knew nothing about the domain, writing their specifications. And then they complained that they're getting no value out of that, which is completely insane. So again, going back to that problem, there are a lot of complaints about that. One of my recent favorites is a Hacker News thread that was quite popular. BDD and all good coding practices seem to get to bashing on kind of Hacker News because that's all about startup heroes and not having to code and I don't know what, inventing money out of thin air. But this particular thread was really interesting. He says, cucumber is very expensive. It's an open source tool, but okay. It requires you to maintain a set of rules that map English to Ruby. Okay. Causes people to continuously fiddle with stuff so that just English reads right. And then kind of you end up spending lots of testing time, fiddling with cucumber stuff. It feels productive, but you have much more better tests working straight into web rat. So apart from kind of his English being even worse than mine, which I thought was physically impossible, this guy is complaining that cucumber is very expensive because he has a lot of fiddling time around tests and that kind of he would have better tests writing straight into web rat. Yes, probably you would have much better tests doing web rat. But again, why are you doing this? And most of the complaints about cucumber now seem to be from one person or two people teams where they say, okay, you know, why do I need cucumber for this? There's just one of me working on this project and I'm wasting a lot of time. So cucumber, fitness, then and all the other tools and be doing general as a process about solving communication problems. If there is a team of one person and there are communication problems on that team, there is no tool in the world that's going to solve that. So kind of the way I like to explain these things is if we have business people doing stories, developers doing programming and testers doing testing, then developers will find functional gaps and inconsistencies in the middle of an iteration. And fair enough, that's not six months later. It's five days later, but it still blows up the iteration. And then what I see lots of teams do is developers develop and then testers go off and test. And if the testers are responsible for testing, then testers kind of find misunderstood requirements and stuff like that later. And this is a very hurtful process. This is a process based on discovering stuff and then fixing it, discovering stuff, fixing it. And I see lots of teams that have a scramble board or a Kanbar board that has developed and then done and then done done. And then there's one team that I worked with that had done done done done. It was completely insane. So there's several levels of doneness depending on who you talk to. And this is basically waterfall in disguise. It's giving us the same effects as that. We just find problems earlier. But if we can get a whole group of people to collaborate very early on at the start of the iteration, then we can discover functional gaps in consistencies, misunderstood requirements very, very early on. And then we don't have this problem of missing the target several times. We remove a lot of variation, variability. We remove a lot of the delays. 
Several teams I've worked with that have implemented Specification by Example properly have reduced their time to market by five times, just by removing all this variation. And we create one concept of done, not seven different ways of being done. So the next big problem, the next myth that I want to talk about, is longression. And longression is really a belief that the long-term value of BDD comes from regression testing. There's a lot of stuff there that people talk about, especially when you connect it to tests. So for example, this guy complains about FitNesse on Stack Overflow. He says, you may want to use FitNesse when you want to do acceptance testing instead of unit testing. Again, you know, we have this unit testing and acceptance testing thing, and one is bad, one is good. And then you want to use it as a communication tool with a stakeholder. So somebody who, I guess, holds some stakes to kill vampires, or somebody like that. And then you want to do large-scale tests rather than granular tests. And you want non-technical people to write stuff, and blah, blah, blah, test, test, test. Everything he writes about is testing. So that's not when you want to use FitNesse. You want to use FitNesse or Cucumber or SpecFlow when you want to have better communication. And regression testing, fine, there is a problem there. You know, regression bugs come out and we solve them. Capers Jones had this very interesting study based on several thousand projects. He published a book called The Economics of Software Quality. And in the book, he says that the effectiveness of regression testing, out of this huge population of projects, is about 23%. What he means by that is that, on average in the industry, about 23% of problems are caught by regression tests. Brian Marick had a paper published about 10 years ago on a much smaller population of projects. He says informally he discovered that about 60% of the problems are caught when he writes the test the first time, about 20% of the problems are caught by repeated running of that test, and about 10% of the problems are caught by other means. So these two figures are relatively similar. I don't have any scientific research to show you that regression tests are more or less valuable, but let's assume that these people are right, and we can catch 20% of our problems using regression tests. There are much, much cheaper ways of catching 20% of problems than organizing spec workshops, automating Cucumber tests, decoupling your testing environment from all the other stuff so you can run your tests reliably, and maintaining tests. There are much, much cheaper ways of doing that. So the benefit is not there. One of the more interesting case studies I had for my book was when I talked to the guys at uSwitch, and uSwitch is one of the busiest websites in the UK. When we talked about their process, what they told me is that they will write a test in Cucumber, automate it, and then when it passes the first time they will disable it, and they will no longer run it. That's for most of the tests; they keep a very small portion of tests running. And I thought this is insane, because how could you possibly have a project where you don't have an extensive set of tests that you run all the time? How could this possibly work? Why is this not blowing up in production all the time?
And we looked at kind of their quality metrics, and they didn't have any serious problems in production for more than six months at that point. They had one bug that wasn't that important. And this felt insane. And I talked to them about, well, you know, what benefit they're getting from that. And I said, well, we get much cheaper maintenance because this is a horrible legacy system. Everything is built, so it has to be tested and to end. So these tests are brittle. We use them as a target for development. And when it's green, we know it's developed okay. We prevent regression using different means. It's okay. That's interesting. So what do you do to prevent regression? And they said, well, we have very good business metrics. So what do you mean by that? Well, for every user's story, the business users need to say how it's going to affect our KPIs. We have five KPIs. And for every user's story, they tell us, well, user registrations will go up by 5% after this. Or this thing is going to happen. And then what we do is we release something very quickly to 10% of the farm, and we measure the KPIs. If the KPIs move in the right direction, we continue developing. If the KPIs move in the wrong direction, we take it out. So they don't really measure technical bugs. They measure business bugs. And they measure business bugs by running it on 10% of their farm and looking for indicators. So not everybody can do this. But it's an interesting aspect of it that if you have cheaper ways of preventing regression, you do not need to run these things long term. The benefit is really somewhere else. And a couple of other stories I heard about big benefits of this is, for example, from Iowa Student Loan, where these guys are a loan management company that had to change their business model very, very quickly. In 2008, during the credit crunch, their business model no longer worked. All their competitors were going bust. They were in a risk of going bust as well. But by having good tests that were written in something that business people could understand, their business users were able to look at this, understand how the system works, and play with different scenarios of changing the business model, and then design a business model change that allowed them to basically restructure their business in about a month and a half and continue working. So the benefit for them was not in regression testing. The benefit for them is helping business people understand what this thing really does. Another really interesting story about this is from Rick Magridge, who had a brilliant talk at Google Tech Talks in 2006. I watched this talk in 2006, and I didn't really understand the impact of that then, because Rick talked about doubling the value of automated tests. We were still thinking about tests at that point. And he talks about the New Zealand airline, where they've cleaned up their tests so much that when somebody phones up and says, well, I want to refund the ticket, somebody in the call center looks at the test to see if the ticket is refundable. The tests are their business rules. The tests are their documentation about the business. Lots of companies suffer from having no documentation. And that's because it's very, very difficult to maintain good documentation. I'm not saying good documentation is always necessary for every possible company, but lots of companies I work with suffer from not having an understanding of what their system does. 
Five years from now, six years from now, especially with larger teams, you do need to have a reliable source of information what it does. Any documentation, Word, Wikis or anything like that is very, very difficult to maintain, because we have no idea whether we've changed everything that we need to change. It's easy to open a Word document or a wiki page, change a five to a six. Have you actually opened all the wiki pages? You should have opened. One of the benefits of Cucumber Fitness spec flow is that it's very easy to discover all the pages you need to change. If we look at the failing test, not as a failing test, but as a specification that no longer reflects what the system does, then we can say, okay, these five pages of our specifications are wrong. We can detect that quickly because the CI system told us that. And we can easily maintain good documentation. So kind of all these things have, you know, we've known this is a community, but we still make these mistakes. And Jim Shaw wrote about them a long time ago. He wrote about five ways to misuse fit, again, focusing on a tool. And he said, reason number one is use fit for test automation, not for collaboration. Reason number two is write everything on your own. Don't even talk to your customers or business users. Just try the test because you can do it yourself. Reason number three is use fit to write integration tests. I think fit in particular is damaged, caused a lot of damage by the name because the name is framework for integrated testing. And lots of people understand this as framework for integration testing. I have no idea what word meant by this. I can only assume that integrated came out as a retro acronym or that integrated meant, I like to say, I like to think that integrated meant an integrated team. Business people develop as a tester. So can explain it like that. But kind of way to misuse fit number four is pin your propeller. Just kind of write lots of tests about the UI because UI is easy to think about or write lots of tests where tests are easy to write. Not about the really difficult stuff that people don't understand. And reason number four is going specify everything. Over engineer it and do every single thing. The way I like to think about these things is actually as documentation because then we have a very interesting question to ask. Is this important enough to document? And a lot of these things just get resolved. For example, is all the possible combinations of left-hand menu items important enough to document? Probably not. So let's not waste time talking about that. Let's not waste time automating these things. If we talk about is how we exchange dollars into pounds when we do financial transactions important enough to document? Probably yes. So we might want to document that. We might want to test it. We might want to explore this thing a bit more. And I think that's why I kind of also we can start thinking about resolving lots of these typical automation problems because lots of typical automation problems come from the fact that people look at the UI and describe the cucumber of fitness test as click, click, click, click, click, dom, dom, dom, dom, dom CSS class. There is no single business domain in the world that includes CSS classes. I've never heard of that. And then the question is this important enough to document and how would you document it? If you talk about business rules, that really does not talk about CSS. 
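To make that alignment point concrete, here is a made-up pair of Cucumber-style scenarios for that dollars-to-pounds rule; the numbers, step wording and selectors are invented for illustration, not taken from any real project. The first scenario is written in the click-and-CSS style just criticized; the second describes the same rule in the language of the business, so a change to the rounding rule stays a small change to the spec.

    # UI-coupled: breaks whenever the page layout changes
    Scenario: Converting money
      When I fill in ".amount-field" with "100"
      And I click "#convert-button"
      Then "div.result > span" should contain "65.70"

    # Aligned with the business language: survives UI changes
    Scenario: Converting dollars to pounds
      Given the exchange rate is 0.657 pounds per dollar
      When a customer converts 100 dollars
      Then the customer receives 65.70 pounds, rounded to two decimals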
So we start thinking about documenting it in a business language that has an additional benefit. Eric Evans in domain driven design wrote that the most useful software models are the models where the technical concepts, names, relationships are aligned with those in the business. And the reason for that is clear because if these things are aligned, then one small change in the business is a small change in the technical domain. We get to symmetric change. We don't get to a situation where somebody says, I just want another column in this report and then say, you say, okay, it's six months of development. The change is symmetric if these things are aligned. And the problem where people shoot themselves in the foot with fitness or cucumber is one small change in the business, seven weeks of test maintenance. That's not symmetric. We can solve the same problem in aspects and tests by applying this alignment principle. If the concepts, relationships, names in our specifications, in our tests are aligned with the way business people think about this, then one small change in the business, one small change in the tests. The way to get there is think about documenting, not testing. Because then we kind of think about it the right way. So kind of going back to the two things about anything. Here are my two things about BDD. Thing number one is explore examples through collaboration. Forget about everything else. Try to get to the bottom of the right business rules. Find the right examples through collaboration and help people think about things to get the right requirements at the right time. And second thing is create living documentation. Don't create test automation. Don't focus too much on how you're going to test. Is it a component test, integration test or whatever? Think about documenting things. Those two things will get you on the right track. So kind of that's about it. There's a bit more on this in my book, SpecBug example. And you can find a lot more on my website. I write a lot of very passionate thoughts about this. Some are a bit offensive, some are not. So that's about it. I hope I've kind of provoked at least a bit of thinking. I don't know whether I have time for questions or not to assume I have. So I have nine minutes. Do you have any questions? Okay then, alcohol. Thank you. So next session I'm doing in I think room number seven, it's about reinventing software quality. If you're interested in changing your view of what software quality really is, come and listen to that. I think I'll provoke a lot of kind of thinking there. So thank you.
The ideas behind BDD, Specification by Example and Agile acceptance testing are deceptively simple, but have proved far from easy to implement. Yet most of the complaints online come from misunderstood ideas which lead to misguided attempts. Gojko Adzic will bust the myths around these techniques and show how successful teams all over the world use them to deliver the right stuff.
10.5446/51020 (DOI)
I'm going to talk about pretty much the same thing, but with a very specific difference. So yesterday I talked about the rise of server-side JavaScript. Today I'm going to talk specifically about one of the core ideas that I conveyed yesterday, which is that server-side JavaScript is great for the real-time web. And today we're going to be talking about the real-time web exactly. We're going to be talking about HTML5 WebSockets, how you're going to implement them, and how you can leverage two projects, WebSocket.io and Socket.io. And for those of you who have more experience or are more interested in Socket.io, I'm going to be talking a little bit about the state of the project and the upcoming 1.0 release, which is going to happen really soon. Some of the new stuff has not been shared before, and you're going to have the first exclusive peek at it. So for those of you who don't know me, I'm Guillermo Rauch. On Twitter you can find me as @rauchg, or on my blog, devthought.com. I'm the CTO and co-founder of a company called LearnBoost. We're based in San Francisco and we're basically trying to change the world of education through technology. I started my career in open source many years ago working on the MooTools front-end JavaScript framework, but lately I've been doing a lot of server side as well. And I ended up creating something called Socket.io, which I'm going to be showing today, and we're going to be writing a little project with it. And a few other less interesting but still useful things, like MongoDB drivers and tools. And a few other things that I'm going to use today during the demo, like how do you make sure you can transfer WebSocket traffic or do long-polling requests and make sure that you can still reload your server and do sticky load balancing. I'm also the author of a book called Smashing Node. This is the second day in a row that I'm plugging it, but it's really useful if you are just getting started with Node.js. So if you are, buy it. And well, today we're going to talk about HTML5 WebSockets. For those of you who haven't heard of them, or you're not sure how to use them or how they're useful, I'm going to do a little introduction to what they are and what's the state of WebSockets today. Pretty much like any other HTML5 technology, it's in flux. There are a lot of changes that happen to it. There's new stuff being added to HTML5 that is correlated to WebSockets, like the new WebRTC APIs for doing camera and audio transfer. And then, of course, I'm going to be talking about Socket.io and Socket.io 1.0. So that's a lot for an hour. Wow. So let's get started with WebSockets. WebSocket is basically a protocol. And the best way to summarize it is that it's TCP for the Web. It's encoded and formalized as RFC 6455. And the counterpart to the protocol is that when we implement it in the browser, we're basically just using an API. So WebSocket is also an API, of course, which basically allows you to send and get messages in real time really fast. And lately, like I said, it's not just messages. The WebSocket protocol is also about sending streams of binary data. So that's also where Socket.io is going to help us quite a bit, especially as this API, like I said, becomes more popular and ready to use. So like everything else that's new and really interesting and really hot like WebSockets, it comes with some problems today in its implementations.
So problem number one, numero uno, we have different protocols implemented in different browsers today. So during the creation of WebSocket, which started at Google, a number of maybe like longer than a year now, it basically started as a draft. And it evolved incrementally with different revisions to this draft. At one point, it was at draft 75, and they were like, okay, so this is good enough to implement. It doesn't matter if it crashes a few websites or if it's insecure, we're going to implement it. So what happened is browsers did implement it right away. For example, Safari, which is a browser that's known to have slower release cycles and they would have been implementing that draft 75 or 76, and they stuck with it. So today, the only browser that is really up to date with WebSocket is Chrome, and surprisingly to a good guy, IE10. So IE10 has a developer preview, which has up to date WebSocket support with a few minor glitches that Socket.io addresses as well. And as far as the other devices or web browsers are concerned, we have iOS 4.2 and AppWords has WebSocket, but it has the older, less secure version. So these drafts that I was talking about, they have some security concerns, and they're implemented today in these three browsers. So as far as Firefox goes, they are pretty much up to date, but they call it Moz WebSocket. So if you guys are used to seeing CSS prefixes everywhere, now we have a few APIs with prefixes as well. So we have a panorama where there is different protocols out there. The good thing is those protocols can be seamlessly implemented, which means you can have a server that responds to requests and to basically user agents that have all these different WebSocket versions. So there's a project that I created called WebSocket.io. So the job of WebSocket.io is to give you this really reliable WebSocket implementation. And WebSocket.io sits on top of another project by Norway's own INR Auto, which is called WS, and that's terrific in the fastest WebSocket server and client implementation. It's blazing fast. So before I move on, I want to show you guys how to implement WebSocket.io really quick in Node.js. Wow. So I'm going to go to...I'm going to find it. First I'm going to rotate. I'm going to create a project called WebSocket. So the first thing we do is install WebSocket.io. I think that was the name that I published it to NPM with. And I express, which is the HTTP server that I always use to just serve some HTML. We don't really need it, but... So I'm going to require my modules here. My neck is going to hurt afterwards. So I'm going to create an app. I'm going to do this every day. I do static middleware. And I'm going to create the directory. And then I'm going to serve some basic HTML. No, actually I'm going to have static server. Never mind. So I'm going to attach WebSocket.io. So one thing that's really important is WebSocket can coexist alongside your HTTP server. You don't need to create a new host. You don't need to register a new domain. You don't have to do anything. You can just attach it and let it handle the upgrade requests. So that's exactly what I'm going to do. I'm going to call...attachapp. So and then I'm going to make my server listen. And I'm going to detect the connections. So whenever I get a connection, I'm going to say that someone connected. So I'm going to do... Now I'm going to say, like I said earlier, we have the client-side API for WebSocket. And that's exactly what I'm going to implement. 
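Before moving on to the client, here is a minimal sketch of the server that was just assembled. It assumes websocket.io exposes the attach/connection/send API the way the talk describes it; the port, the folder name and the ping/pong convention are just placeholders for the latency demo.

    // server.js -- latency demo server (a sketch, not the exact code written on stage)
    var express = require('express');
    var http = require('http');
    var wsio = require('websocket.io');

    var app = express();
    app.use(express.static(__dirname + '/public'));   // serves the demo page

    var server = http.createServer(app);
    var ws = wsio.attach(server);                      // let websocket.io handle the HTTP upgrade requests

    ws.on('connection', function (socket) {
      console.log('someone connected');
      socket.send('ping');                             // kick off the round trip
      socket.on('message', function (data) {
        socket.send('ping');                           // the client answered "pong", keep the loop going
      });
    });

    server.listen(3000);

Depending on the Express version in use, the app itself may already be an HTTP server that websocket.io can attach to directly, which is what happens in the talk; the explicit http.createServer here just makes the sketch self-contained.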
So since I'm going to be using Chrome, I think, or maybe Safari, I don't need to worry about detecting whether it's there or not normally unless you're targeting a specific browser. You would want to use Moz WebSocket or WebSocket. So... So the WebSocket API is really simple, like I said. It has three event handlers on open, on close, and when we get a message. So what we're going to do basically is send some messages back and forth to measure the latency of the connection. So we're going to track whenever we got the last message. And we're going to print that in some element. For those of you who are getting bored, more exciting stuff is coming up. So measuring latency is not that exciting, but we're going to show basically how easy it is to open a channel of communication and send data back and forth with Node and WebSocket. So here... So whenever we get that connection, we're going to send the user a message. So we're going to say ping whenever the user connects and whenever they get that message, we're going to send Pong back. So by creating that back and forth between messages, we can measure how long it's taking to create to complete that round trip. So I think this looks... Oh, sorry. This is... I made that up. I think I was thinking, jQuery. So I think we're good for a test. We'll see how it goes. All right, so... Oh, I thought I was showing the browser. So right now you're going to see that it's not actually updating, and the reason is that I forgot something. So we're going to split this. So what I forgot is to capture the message. I'm going to do it like this instead. We need to open with a message. So here we're going to say... So like you see, it's something that's extremely fast because it's made local connection. But we can see here that we've basically established something where we can send data back and forth very easily and as much as we want. And this very fundamental thing is what it's going to empower our next demo, and we're going to be basically leveraging the power of this super fast real-time communication to actually make something useful out of it, and namely a game. So just to recap really quickly, we can attach WebSocket.io. It's going to capture the connection. I'm going to send... Again, I give us a closure with that, and we're going to be able to exchange messages. It's very similar to how the API looks like on the client, and in fact, we have hooks in there so that you could actually do an isomorphic, almost, even though some people don't like that name API, where the server handlers and the client handlers look exactly the same. So that's basically all it takes nowadays to implement WebSocket with Node. And the other cool thing, of course, is that... I'm opening Google Chrome. The same is true for... We have... And if we were to use a phone, it would be working seamlessly. So like I said, we already have fragmentation in the WebSocket world. Sometimes it's easy to assume, oh, I just need a real-time socket. I don't need higher-level abstractions. Therefore, I literally just throw a WebSocket server. I don't know, you have to be careful which WebSocket server you're using. So I'm going to close this. And actually, I'm going to close this too. And I'm going to go back to Keynote with a very exciting next slide, which is... Wow. So that was really easy. 
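For reference, the browser side of that latency demo, reconstructed from the onopen/onmessage/onclose steps above. The element id and the host are placeholders, and in the Firefox of that era the constructor was prefixed as MozWebSocket.

    var ws = new WebSocket('ws://localhost:3000');
    var last;

    ws.onopen = function () {
      last = Date.now();                                // start timing once the socket is open
    };

    ws.onmessage = function (event) {
      var now = Date.now();
      document.getElementById('latency').innerHTML = (now - last) + ' ms';
      last = now;
      ws.send('pong');                                  // answer the server's ping so it pings again
    };

    ws.onclose = function () {
      document.getElementById('latency').innerHTML = 'disconnected';
    };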
And we're going to show that our intent with Socket.io and all these other APIs is that we're going to try to not divert away from the WebSocket API too much, because unlike most of the APIs that us Web developers are used to, namely the DOM, WebSocket is actually really good. It's really simple, really straightforward. And most people like it. But meanwhile, in the real world, like I said, we do have problems with fragmentation with WebSocket, but we have bigger problems than, oh, people have really modern browsers, but I don't know which one they have, is that we have a lot of old stuff out there. So for that, most of you are familiar with my solution, which is Socket.io. So for those of you that are not so familiar with Socket.io or how to use it, I prepared a... I was thinking this morning, so what can I do that shows the power of Socket.io and the power of WebSocket, and it's really interactive and cool. So what I came up with, I stole an implementation of Asteroids that's really cool, that has a specific twist, and I made it multi-user. So basically, I took everything that would assume that there is only one player in the screen, and I changed that so that it would support multiple players. And in addition, I did some other stuff so that it would be easy to receive events through Socket.io and convey them to the game without assuming that there is a keyboard in there. So I'm going to go over the initial implementation of this. Basically our first goal is going to be to show to little players on the screen. So I'm going to... So we're back with eye term. Do you guys see the text well or should I... Is it okay? All right. So we're going to be doing the same thing, but I'm going to quickly show you the code for the... Oh, wow, that's huge. Oh. So I had to add this for... I had forgotten that iOS doesn't have function product I bind, so this is my addition as well. This project is really neat. This has been created by AI called Erky. And well, here we have the constructor for the Asteroids. What we consider the Asteroids in this case is just the arena with the player. So whenever we initialize an Asteroids object, we're getting the whole thing. And we're actually not getting Asteroids. We're getting just a spaceship. So... Oh. I want to call it spaceship. We're going to call it Asteroids. So like I said, the first thing we want to do is actually implement Socket.io. So we're going to connect. I'm going to show you later how Socket.io comes into the picture, where it pops up from. But basically we just want to do io.connect. Actually, I don't want to rewrite this. And then I want to do... I normally don't do this, but today I want to do this. I want to listen on the connect event. So normally you don't need to listen on the connect event because whatever you do, Socket.io buffers and sends later. So like if you start sending data right away, unlike the WebSocket example that I showed earlier, if you start writing to Socket, it's buffered and when you connect, it's sent out. But in this case, I do want to... And the reason is I want to ask the player who's... What is his called? So... And only... Actually, I'm going to do that outside, sorry. Because of reconnections. So I'm going to ask the user, like, what is your name? Then whenever the socket opens, I'm going to send back that name. Oh, sorry. I'm going to rotate again. So that's what I'm going to do. So unlike WebSocket, here I'm almost like naming my messages. So in Socket.io, I'm not doing socket.send and something. 
I'm doing socket.emit. So I'm sending a remote event. And whatever I tag along with that remote event is JSON encoded. So in this case, I'm going to send the name of the player. It's going to be a string in this case. So... Sorry. Once again, I probably want to install Socket.io and Express. I should have an alias for that. NPM install, Sexpress. Oh, no, that's not good. Anyways, gets installed, goes into the node modules directory. Same deal. But now I'm going to require Socket.io. Once again, the cool thing is we're making this database really similar. So when you attach a WebSocket server, in this case, Socket.io is going to listen on it. Time check. Good. We're going to create the Socket.io listener. And we're going to listen on connections. So first difference with WebSocket.io, we have this socket stuff here. Why? We have it because with Socket.io, you can connect multiple sockets and they get multiplexed. So you could say, I want to connect to a slash something or I want to connect to a slash something different. And those you could capture as if they were actual different sockets. Although for version 1.0, you're going to be able to do this. So it's going to actually proxy it through the alias of sockets, but this is going to be one difference for those who are familiar with the API. It's going to be more straightforward. And in addition, another difference that both I Express and Socket.io are getting, another change is we realized most modules should only just expose one piece of functionality. So in order to create an Express application, you go like this. In order to create a Socket.io listener, you go like that. So that's going to be all the initialization you need to do. So that's not really relevant or important, but it's neat for those who type IO.sockets a lot in presentations. So we're going to hear it once again say, okay, user connected for a quick test, send a check. That's a column. And I want to serve some static files. And I'm going to listen on. So one thing I'm going to show you that's really cool is when you create a node application and you find yourself reloading it all the time, I'm going to show you a little trick. So I'm going to listen only if module.parent is not defined. And I'm going to export here the app. So for those of you who are really into CoffeeScript or hate the var keyword, you're going to notice that I never prefix app with a var keyword. And the reason is that usually there's only one app in my entire app, obviously. So when you have big projects with tons of files, it's usually useful to have that app floating around. You could argue it's a hack, but it's neat. So I think we're good. I'm going to create static again. And here, the only difference between just using WebSocket really and using Socureo is like, I'm going to need to include the library, but we make it really easy to do. So I'm going to call this Socureo asteroids. And good point. I'm going to move asteroids in there. I know lots of you have iPhones. I have an iPhone too. So I already created a little way of controlling it through your phone once we start showing the demo. And that once again is going to get served, hopefully, by static. So here goes the trick that I mentioned earlier. So instead of noting that server, which is going to listen on port 3000, which by the way, I redirect my port 3000 to port 80, so I don't need higher privileges. And that would mean that every time I make a change, I have to reload the thing. 
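Collected into one place, the setup described so far looks roughly like the sketch below. It uses the Socket.io 0.9-era API shown in the talk (listen and io.sockets); the file layout, port and event name are just the demo's conventions.

    // server.js
    var express = require('express');
    var http = require('http');

    app = express();                                    // deliberately global, as in the talk
    app.use(express.static(__dirname + '/asteroids'));

    var server = http.createServer(app);
    var io = require('socket.io').listen(server);       // in 1.0 this becomes require('socket.io')(server)

    io.sockets.on('connection', function (socket) {
      console.log('user connected');
      socket.on('name', function (name) {
        console.log(name + ' joined');
      });
    });

    // the reload trick: only listen when run directly, so a watcher like up can require() this module
    if (!module.parent) server.listen(3000);
    module.exports = app;

    // client side (the page also includes /socket.io/socket.io.js, which the server serves itself)
    var socket = io.connect();
    var name = prompt('What is your name?');            // fine for a demo, but remember prompt blocks the page

    socket.on('connect', function () {
      socket.emit('name', name);                        // a named remote event; extra arguments are JSON-encoded
    });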
So what I do is I create that utility, if you remember from slide number four, called up. So up, you pass watch. It watches the directory. And then you pass server. So it spawns workers that get replaced on the fly whenever there's a change. And this is where long polling comes in really handy, even though WebSocket is cool. With long polling, since we're splitting requests, most load balancers already know how to buffer and split requests and keep a live connection. Whereas if you want to reload your code for WebSocket, it would be slightly more complicated. You would have to drop the user. With up and solutions like this, you don't really need to drop any requests at all. We buffer the request until the new worker is up. So... Cool. So, user connected. One thing that you might have noticed is, see there? Prompt is really cool for presentations, because, you know, like, I don't know what I would do without prompt. I would probably, like, inject a form and, like, add an event listener. So with prompt, what's happening is it's blocking the thread. So never use it along with socket.io, because it's going to block everything. So your sockets are going to time out, actually. But for the sake of this example, it's really handy. So my name is Guillermo. And user connected. You see that the connection actually went through afterwards, even though it shouldn't be the case. So now that we know that user can connect, we're going to go ahead and... You know what? Actually... I'm going to split this. It's going to be a mess, otherwise. Let's call this game. Oh. So my Vim right now is not syncing buffers with... That's embarrassing. It's not syncing buffers with my clipboard, so I have to do this. Game.js. So back to the game. First thing we want to do when someone connects, we want to get some feedback from the server first. So I'm going to do this little function called on me, which I'm going to have the server send to us. So Vim splits are coming really handy here, even though they're a little tight. Can you guys still see? Well? Yep. All right. More breathing room. All right. You see here that I'm emitting name. I'm going to capture name here. So what I want to do when someone actually starts the game, which means that they're sending me their name, is keep track of a few things. So I'm going to keep track of how many players we've got, and I'm going to keep a state object of basically what I'm going to consider my room. So I'm going to keep a state object of everyone that's in the game, floating around, where they are, et cetera. So when we get a name, I want to first consider that there is a new player. And I want to do some, like I said, object that keeps the player positioned. Another thing that this game has in particular is like we have a vector that determines that the asteroid thingy is going somewhere. We also have a vector that has a direction. So we have the position and the direction. So before actually I move forward with that, I'm going to show you what the single player stuff looks like. So game.js. So here I'm going to... So that uppercase variable is not mine. Just letting you know. This is not going to be public. Once again, we're committing so many ugly JavaScript style stuff that forget that we're doing stuff like this. So I added a few things to the asteroid constructor, like starting position and starting direction, and then handlers for where the things move or we fire something. So I'm going to label my guy with name. So I think that should be enough. 
Let's go to another tab. Actually can't see. 429. All right. So something went wrong. Oh, no. Oh, I think I'm not including game. Yeah. Too bad. So you know what's cool, I'm going to add a title. I'm going to call it asterisks.io. Do you think someone knows that? Oh, you know what it is? So it's telling me I have no elements. That's funny. I had never come across that. Wow. So this is the player like I described. Now the thing here is when you can fire elements, so when we do the demo, we're going to go to NDCoslo.com and we're going to fire everything. So I predicted that people would get really crazy. So what I did is I'm throttling the space bar event to 10 seconds. So you're not going to be able to remove everything. You could hold space bar and remove everything. So we're not going to allow that. Also interesting is that we're going to need names. So let me think. So we probably want to identify those guys. Again, this code style is not mine. I'm very, very particular about it. So when we draw the player, this is the canvas context. So for those of you who are not familiar with canvas, I found this out yesterday. But apparently when you call one of these handlers, actually I don't know how this guy is doing it, but when he calls this handler, this is bound to the context, which is really handy. So we want to do a field text with the label of the guy. And see here we're translating the path to that position. So we're going to simply... Oh, all right, so... All right. And I think we can make it look better. Nice. So I just mistyped my name. So moving forward, what we want to do is we want to assign a label to each of these players. And we're going to do so by initializing players. So first of all, we want to initialize our own player. From the server perspective... Did I change the server? All right. From the server perspective, we want to give this guy a initial position that's random. So we're going to say that Y is 100, and we're going to expect that the server gives us a position. And also an ID. So let's tell this guy... So first of all, we want to save down that ID around. We could technically use the session ID that Sakaria gives you, but you don't want to do that. It's private. So we give this guy an ID, and we send it back with the me event. Once again, me, see it there? Me and me. So we give him the ID, and we're going to give him the... Actually, we're going to give him the player object. So in this player, we want to keep track of the name. We're going to keep track, like a set of direction. So I got distracted. So this is the direction stuff that I was talking about. So there's two vectors that we care about mostly. There's actually four, but we want to care about position, and we want to care about direction. We could care about velocity. So for example, if you're moving around and then someone refreshes the page, you would get his existing velocity. That would be cool, but not for this. Not for this case. So direction is also determined by a X, Y. So for X, we want to create something random. So again, the idea I came up with for this is you take the current timestamp and you take the last three digits, and it's going to be a number under a thousand, so that's cool. And for Y, we want to do 100. So we have direction we don't really care about now, but we're going to leave it a placeholder. And then we're going to send back the player object, and I think that's all we need to initialize. Oh, and we also want to capture changes for these guys. 
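Up to this point, the name/me handshake amounts to roughly the following sketch. The player shape, the id scheme and the Asteroids constructor options are assumptions about the modified demo code, not its exact API.

    // server side
    var players = {};                                   // id -> player state, shared across all connections
    var count = 0;

    io.sockets.on('connection', function (socket) {
      var id;

      socket.on('name', function (name) {
        id = ++count;                                   // a simple counter; the private session id is not reused
        players[id] = {
          name: name,
          pos: { x: Date.now() % 1000, y: 100 },        // "last three digits of the timestamp" for x
          dir: { x: 0, y: 0 }                           // direction left as a placeholder for now
        };
        socket.emit('me', id, players[id]);             // tell this client who it is and where it starts
      });
    });

    // client side: create our own ship when the server answers
    socket.on('me', function (id, player) {
      me = new Asteroids({ label: player.name, pos: player.pos, dir: player.dir });
      me.id = id;
    });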
So whenever they change position or they fire, I think I called it fire. So whenever someone fires or changes their position, we want to relay that. So this is looking pretty good. That's not looking good. I need to... The perspective is killing me. Let me move this a little bit. Oh, it's actually good, right? Yeah, I think so. So let's see. All right, that's me. So the difference here is I'm getting feedback from the server. So a lot of people hate this, but Sokodeo by default is like overly verbose, but it's cool because you're actually getting stuff here. As you saw, I think that worked. Yeah, that worked. It didn't work? No, it didn't work. So technically we should be getting a position here, but I think it's defaulting to the same every time. Oh, yeah. Thanks. So I guess undefined is defaulting to something. Yeah. Oh, yeah. Yeah, cool. Boom. So now that we have this player going, we also want to get other players. So how do we get other players? What we want to do when someone connects is broadcast that player to the other people. So we're going to define a, here we're going to have perspective issues again. Wow. All right, so we want to capture a player event. So here we're going to have again an ID and like a player object. And here we want to keep track of the players. So whenever a new guy comes, we want to actually create a new asteroid object. And this is where it gets more interesting. So in this case, we do want to keep track of the initial position and direction because like I said, you can start off in a different direction. So I call that their X. And so yeah, like I said, whenever someone connects, we're going to basically have him appear. And for that to happen on the other side of the equation, where? Oh, thanks. Nice catch. So here, like I said, we got a new player. We want to broadcast it to everyone else that we have a new player. So we add this flag, which basically takes something and sends it to everyone else. So we're going to send that player guy and we want to attach the ID. So the difference here is that everyone else is going to get that event, not that one player. So if everything is good, we should now have the keeping track of players. One thing that's important is that when someone disconnects, if they had an ID, we want to destroy that guy. And we want to emit that guy disconnected. Let's call it player remove. So on the other side of the equation, if we have that we have player remove, we're going to basically call a destroy method on it so it gets rid of the canvas stuff. And then we want to delete that stuff. Sorry. So notable is when someone connects, we want to give them the existing players. So not just broadcast it to everyone else, but you want to get the stuff that's already there, which is in the player's object here. So what we do is we walk it. I think. I don't need to check. And we meet, for that one guy, we meet player events. This is similar to like if they were connecting as new guys, but they were already there. And that's where it's important that... I'm going to also keep the ID here. That's why it's important that we structure this around events. Because there could be existing state or there could be new, but for the sake of the game, it doesn't matter. Oh, wow. Something was going on there. All right. So that's me. I'm going to have window problems here. So, I'm not getting the other guy, but... Oh, wow. Oh, is it? Oh, oh, oh, yeah. Oh, I'm moving two guys here. Oh, you know why? Because... So that's the other thing I realized. 
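Pulling together the broadcast and cleanup logic just described, a sketch might look like this; the event names come from the talk, and everything else is a guessed shape for the demo.

    // server: still inside the connection handler, after the player has been created
    socket.on('name', function (name) {
      // ...create players[id] as in the previous sketch, then:

      for (var existing in players) {                   // catch the newcomer up on everyone already playing
        if (existing != id) socket.emit('player', existing, players[existing]);
      }
      socket.broadcast.emit('player', id, players[id]); // tell everyone else (broadcast skips this socket)
    });

    socket.on('disconnect', function () {
      if (id) {
        delete players[id];
        socket.broadcast.emit('player remove', id);     // let the other clients clean up their canvas
      }
    });

    // client
    var others = {};

    socket.on('player', function (id, player) {
      others[id] = new Asteroids({ label: player.name, pos: player.pos, dir: player.dir });
    });

    socket.on('player remove', function (id) {
      if (others[id]) {
        others[id].destroy();                           // destroy() is the method added to the demo to clear a ship
        delete others[id];
      }
    });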
If this game is already a single player, you could potentially move everyone. We don't want that, so, keys fault. Oh, it's... This guy's here. Oh. Well, that's not so bad. I'm not sure what's going on. There's like 100 players, but I'll take it for now. So, okay, so the next thing we want to do is capture that people are firing and doing stuff, right? So, we'll go here on change. I don't remember the arguments that I get. Oh, I think I just saw them, actually. Let's give this a nice restart. I'm having so many state issues. So let's see. All right, so on change is giving me the... So this is interesting. What I'm going to do is just relay the... Basically the events that the guy is firing. And this is a problem because you usually want to have some latency compensation mechanism so you don't want to send every movement that everyone is doing. But you're going to notice that as we do this, there's probably some lag or like someone else is lagging behind, but the idea I came up with is that when you fire, you do relay the exact position. So when everyone is floating around, it doesn't matter if everyone sees exactly where you expect them to be, but when they fire, it's important that they're exactly where they want to be because if they want to get rid of a specific part of the page, we should let them. So when we get that on change, I'm going to swap them here because I can see stuff. When we get on change here, I'm going to say player move. And I'm going to send my ID, which is ID, and I'm going to send the type. So when we get a player move event, we just want to relay it. So we just want everyone else to get it. And we want to archive the position too so that if you refresh the page, you get it. But we don't want to relay that to everyone else because it would be like super high traffic. So we also want to say player. Oh, here we're going to have some problem. So we want to update the direction and position. And when here we are going to broadcast it. So players, ID. We want to update the position and we want to update the direction. So I think we're good here. So let's see if it works. Yeah, probably. How long does we have one here too? Here. And where do I have the other one? Well, this thing should be telling me, right? Where is it? Oh, thank you. All right. So. Oh, actually. Oh. Oh, I think I'm sending myself. So. So let's see if you're getting, well, we're getting the event. So now we have to capture it here. So whenever the player moves, we want to find him. I will play Titan time. So. So you can see there that the player is getting the removements relayed. Oh, we're having a name issue, I think. Yeah. So it's getting called as the same guy. So basically when someone fires, we want to do pretty much the same thing. The difference is that. So due to time constraints, I'm going to speed this up by giving you guys a URL where I deployed this earlier. So here should be force fire equals true. So what I did after I completed all this code is I used there is an utility called, I can remember now. There's an utility called book that will generate a book Marlet. So the idea is that we want to compress soccer and the Asteroids game into one file that you can copy and paste. So you would basically do. And then you would cut the game and then you would cut soccer into it. So once we have the book Marlet, you simply run book boo. And what this gives you is something that you can copy and paste and add to any website. So notice that here in my game, I'm connecting to my own host. 
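To recap the relay that was wired up just before the bookmarklet step, it amounts to roughly this. The onChange hook and the applyEvent call stand in for whatever the modified asteroids code actually exposes; the point is that moves go through the server, which archives the latest state so a page refresh still sees players where they were.

    // client: relay our own moves to the server
    me.onChange = function (type, pos, dir) {
      socket.emit('player move', me.id, type, pos, dir);
    };

    // client: apply moves relayed for other players
    socket.on('player move', function (id, type, pos, dir) {
      if (others[id]) others[id].applyEvent(type, pos, dir);
    });

    // server: archive the state and relay to everyone else
    socket.on('player move', function (playerId, type, pos, dir) {
      var p = players[playerId];
      if (!p) return;
      p.pos = pos;                                      // archived so a refresh picks up the latest position
      p.dir = dir;
      socket.broadcast.emit('player move', playerId, type, pos, dir);
    });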
The idea is that if you're embedding this on any website, you would want to connect to wherever we deployed it. So what I did is, where is it? Book Marlet. Oh, I deployed it. So yesterday people were asking me how do you deploy this stuff? And it's really simple. So you have an utility node called Jitsu. It's provided by node Jitsu. Basically what you do is call Jitsu. And you follow this process, sorry, Jitsu deploy. And once you follow this process, you can basically with a few commands deploy to a subdomain that anyone can access. So I did that for our example. So this is a wood. So here we have a similar implementation. Oh, wow. Wow. Okay, so get out. No, I'm kidding. So what we're going to do is static. No, what do I call that? Copy. So I'm going to copy and paste that book Marlet that I generated. Thanks to that wonderful utility. And let's see how I... I'm adding a bookmark. All right, so if... Oh, you're going to notice, well, here's all garbled. But in my connect, connect, where is it? There guys done. I'm still here. I'm kidding. So when I'm calling connect, I'm actually calling Jitsu. See? So that's what's going to allow us to bring it to NDC. Was it NDCosla.com? Yeah. So... So... Oh, that didn't work. Let me correct the book Marlet. I think Chrome is going to be... Yeah. Oh, wow. Oh, okay. So here I am. I'm going to fire this. So we have a few zombie guys. So if you guys have an iPhone, you can also go to iphone.html. Oh, the scrolling is actually not synchronized, so that will make people tomb. So... All right, so this really silly demo actually shows you all the entirety of the Sokareyo API. Even though for the new website, and I'm going to quickly go over the new stuff that we have, Chesuklose. By the way, thanks Eric. Eric is from Sweden. He created the Asteroids game, so he did the bulk of the work. I simply tied Sokareyo into it. Thanks Kenneth Belkovsky, he's Norwegian, and a placeholder. So quickly, what's next for Sokareyo? And we're probably releasing 1.0 the next week. So we're going to be releasing the new core of functionality called engine.io, which is the last thing I'm working on is the cluster integration, so that it's going to use all the processes in your machine. Right now, you have Node.js, and the name is really appropriate because you only have one node, but we're going to make it so that you drop Sokareyo, it's going to use all your cores, you're not even going to notice. It's going to be so fast. I should have a WoW slide here. So we're going to have CDN.sokareyo provided by Node.jitsu, so you're no longer going to have to host it yourself or do slash Sokareyo slash Sokareyo.js. You're just going to do CDN.sokareyo slash Sokareyo.js, or the version or something. And finally, we're having a new website and the new logo, which you saw as the first slide, which I'm really excited about. I don't like the current website at all. So new logo, that's it. I picked this color so that you remember it well in this one and this one. So any questions? Do you think a multiplayer internet is coming along? Say again. Peer to peer, internet is going to... Oh, peer to peer. I hope so. So as I was writing this example, I was thinking it would be really interesting if Sokareyo could also expose some sort of peer to peer API, even though it's not. So in the future, if we do get a peer to peer API, we can make that more seamless. 
But sometimes with certain security considerations, it would be really useful to be aware of other Sockets out there and be able to just send them messages transparently and not have the server be a broker. So, yeah. Any other questions? No. No questions. So thank you. And sorry for the extended time.
The web is no longer about retrieving documents interlinked to each other, as it was envisioned decades ago. With the explosion of the Web 2.0, AJAX led the way towards efficient communication and just retrieving or sending what was necessary. HTML5 introduces a new range of APIs that make the creation of full-fledged interactive applications easier than ever. With that comes WebSocket, which exposes a very basic channel of bi-directional communication with little overhead. WebSocket enables the transmission of data in real time faster and easier than ever before. Projects like Socket.IO, subject of this talk, have been working hard on bringing this API to everyone, in the same way jQuery did for the DOM and AJAX. Not only do we make WebSocket ready for real-world usage, but it comes with fallbacks for every browser. This talk will explore the creation and deployment of a realtime application the audience can interact with from their laptops and mobile devices.
10.5446/51022 (DOI)
I thought I'd let somebody else do the singing to open my talk instead of me, because you don't want to hear that. Welcome. Thank you for coming. This talk is about the cloud: more than just hosting and buzzwords. First, a brief introduction. So I'm John Sheehan. I am known for a couple of things. RestSharp is a project I started to make accessing HTTP and REST APIs really simple in .NET. Recently it was included in GitHub for Windows, so I'm very proud of that. So check that out if you use .NET. I also recently launched a website called API Jobs. It's a job board for all of the best jobs that are out there in the API industry at companies like Twilio and SendGrid and Pusher and more that we'll talk about later. So you can go there if you're looking for a job. I am starting a new job on Monday at a company called If This Then That. So If This Then That lets you connect all of the internet services that you use in interesting ways. You can automatically save your Instagram pictures to Dropbox, or save Twitter favorites to Instapaper, or you can do things with Google Reader or Evernote. There are tons of combinations. You can combine them however you want in these recipes and automate your life, essentially. So check that out at ifttt.com. These are some other companies I worked for. I worked for Twilio for a couple of years building the developer evangelism program there and working on developer experience. Most recently I've worked with AppHarbor, Stripe and Xamarin, and I'm an advisor to AppHarbor and Xamarin. Alright, so we want to talk about the cloud today. So first we should get everyone on the same page and really define what cloud means. So the definition of cloud is: a meaningless buzzword that any company uses to replace the word internet. Like, you've all seen this. To the cloud. Or maybe not. That's Microsoft's version of the cloud, apparently not even involving the internet in this case, since the software runs on the desktop. The cloud has pretty much come to mean nothing. And for the sake of this talk I want to narrow the scope from meaning everything and nothing all at the same time into one more focused area. And to me the cloud means compute, network, storage and service infrastructure offered and managed by a third-party provider. Now even this isn't complete enough. There's the whole scaling thing and where your data is located and all that stuff, but the slides are only so big, so we'll go with that for the definition. Compute, storage and network: this is a really well-defined area. This is something most people are pretty familiar with. It's pretty well defined and scoped. Companies like Amazon Web Services, AppHarbor and Heroku have done a really good job of solidifying these three services. So I don't really want to talk about these today. I'm kind of bored with talking about hosting when it comes to the cloud. So what I want to talk about instead is the service infrastructure part of this equation and what that means. So let's start with a couple of examples of what kind of cloud services are out there, what I'm talking about when I say service infrastructure. So for example, email. Want to send email? Yeah, you can set up an SMTP server, you can manage your mail server, you can do all that. Or you could use one of these services: SendGrid, Postmark and Mailgun. These let you send and receive emails.
So we'll talk about how the receiving works later with web hooks but essentially what this does is this is managed off somewhere else in the world and you don't have to think about it and all you have to do is make and receive HTTP web requests which is web developers is something that we're intimately familiar with and can do without even batting an IAAT. So no more infrastructure to maintain to send email. There's the telecom area. So Twilio is a really big player there, send and receive phone calls and text messages. Nexmo is an SMS provider with good worldwide coverage. Make it really easy to send and receive text messaging and make receive phone calls without having to set up your own infrastructure. If you've ever tried to set up telecom infrastructure before with like asterisk or free switch or any of those things, you know what a pain in the, you know what it can be to try and get those things working, let alone getting them to scale and using them on demand just as impossible when you're building your own infrastructure. You know, sort of like the latest crop of these cloud services are sort of the real-time providers. So, you know, WebSockets and Socket.io are awesome but building your own infrastructure on that, again with all the benefits of the cloud with the scaling and all that stuff, can be sort of a little bit, you know, too much for some applications. So, you know, these companies have set out to sort of define real-time as an API. So, Pusher, I'm going to be showing you a lot of today and how that works. PubNub is very similar to Pusher. Spyro is sort of another recent entry into it and SuperVeter is based off of PubSubHubub for RSS feeds. But that's just another version of real-time over HTTP. We also have, you know, like as mobile gets bigger and bigger, we have a set of companies that are doing mobile back-end as a service completely via API. So, instead of when you want to go build your mobile application, setting up, excuse me, setting up, you know, users and data and all that stuff and having to run your own Web infrastructure to complement that app, these services give that to you out of the box. So, Parson, StackMob are complete mobile back-end as a service company. Shoot is around photo processing and photo handling and photo pickers and all that with some cloud integration as well. One that I left off of here because the name is too long is Urban Airship and they handle all of the, like, the push notifications to different devices and different platforms all via API again. You can also run jobs in the cloud. So, IronIo will let you send a Ruby job off to it. It's sort of like delayed job with an API and then it'll just run it at a specified time where you can send it a queue and you can do queuing and job processing and all that. It's a little like, if you're familiar with, like, Heroku worker rules or Azure worker rules where you could build something like this, all this queuing system, it's a little higher level than that and gives you more out of the box functionality really designed for running jobs. So, the moment app is essentially a way to schedule HTTP requests to be made anytime. So you can via the API say, hey, I need you to run this URL at Monday morning, every Monday morning, every Monday morning, it'll go off and request that URL for you so you can run some tasks. And then we have, you know, the services that we use every day. So Dropbox, GitHub and Google, everything that Google does. Most of these things have APIs that you can access. 
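As a sense of scale for the "it's just HTTP requests" point: with one of these providers, sending an email from Node boils down to something like the request below. The host, path and field names here are entirely made up rather than any real provider's API; the shape of the call is the point.

    var https = require('https');
    var querystring = require('querystring');

    var body = querystring.stringify({
      to: 'user@example.com',
      from: 'app@example.com',
      subject: 'Welcome!',
      text: 'Thanks for signing up.'
    });

    var req = https.request({
      host: 'api.example-mail-provider.com',            // hypothetical provider
      path: '/v1/messages',                             // hypothetical endpoint
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body)
      }
    }, function (res) {
      console.log('provider answered with status', res.statusCode);
    });

    req.on('error', console.error);
    req.end(body);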
In some cases, you can actually use them in your apps to replace, you know, things that you may have done yourself. So, for instance, if you're running an app on Heroku, you can't write to the file system in a persistent way. So if you want to give your users file storage, you could go through S3 or you could also go through Dropbox and use their Dropbox accounts for storage. And for one, that's a nice benefit to your customers because it gives them data portability on the way out. If they leave your service, they still have all the files that they've given you already sync to their machine automatically. And then a couple of more popular ones. These social APIs are like, you know, some of the big three social APIs. It seems like every social service out there has an API. And they've sort of, like Twitter really sort of popularized the idea of interaction with a service via an API. Now, they later take it back, sadly, but it seems like you can't launch a service now without an API or without people demanding that you have one. So the big question is, why do we want to use Cloud Service APIs over building ourself? And that's really like the most prescient question, really, is build versus buy. This is what everything comes down to. Why would I want to buy these services and pay as I go and, you know, start out on a free plan and grow with them or whatever, as opposed to building my own infrastructure that I'm going to be really intimate with and maybe curate and grow over time? We've got a couple of reasons why you might want to do it. One is they equip you with a new capability. So in my case, back in about 2008, I had this idea for a website that I wanted to build. I run a sports league, like a football or softball league. And every Sunday when there was inclement weather, everyone would just call the hotline over and over again, all afternoon, just waiting for updates. And I thought to myself, you know, like, you shouldn't have to pull the hotline. The hotline should call you when the status changes. So I sat down, I was like, I'm going to build this. And I was like, all right, asterisk, I'm going to set it up, I'm going to learn how to admin a Linux box, and I'm going to be up and running in no time. About 20 minutes later, I scrapped that idea because I was a lonely Windows web developer and all of that was like a whole new skill that I did not have. A couple of months later, Twilio launched and I like read the post on TechCrunch and I was like running around my living room because I was so excited because it spoke my language. It gave me a capability that I did not have the five minutes before I found out about Twilio. So suddenly I could interact with Telecom using just HTTP, which again is like second nature to me. So a lot of those, like some of the, you know, back in as a service for mobile apps, give you this too. Like maybe you're just a mobile developer. You're really good at, you know, doing Windows 8 desktop apps, but you're not so good on the website and building web stuff that scales or anything like that. You know, those can be a place where they sort of augment your existing skills to give you a complete set of skills to build your applications. Another thing they give you is instant scale. So almost every cloud service out there, I've never seen one that doesn't claim this as a benefit, gives you instant scale. 
So if you release an app and it suddenly gets more popular and expected, if you're running your own infrastructure, you're scrambling and you are like in emergency mode trying to throw more servers at the problem. If you can even get them, you know, up and running and configured and if your application was even written to run on more than one server, you can be really in the weeds when that happens. So, you know, these cloud service providers have already taken care of that problem. They've built in scaling into the platform so that they can grow with you and they can handle spikes and dips and basically, you know, be more malleable to your application. A big one that some people that seem to look over, look past sometimes is data. So I don't know how many of you have built like, you know, an email sending system. How many of you know exactly how many emails go out of your web app every day? One? Out of the 3,000 people in this room, one person had no idea how many emails go in and out of system every day. If you were to use SendGrid, the rest of you, you would know exactly how many emails were going in and out of your system every day, how many bounced, when your peak times were, what days of the week were busy for email generation. You'd have complete history over that. These are the things that we always say we're going to build. We all have these half-assed dashboards with blank graphs that we're going to fill them up. We're going to build all this analytics and all this data. And we're going to start collecting it from our email and our text messages and our phone calls and all this stuff. But nobody ever builds that stuff. And these services give it to you out of the box. Complete logging of everything that goes in and out of the system. A lot of analytics, a lot of them tie into other analytics systems like good data and give you even more in-depth breakdowns of what exactly you're using when it comes to the resources. So I think those are three of the biggest reasons that we want to use Cloud Services. But I think the biggest one for me, and this is just summing all up, is that Cloud Service APIs allow you to focus on building your application and not building infrastructure. And if you're a startup or even like a small team within a company and you want to move fast and get a prototype out there and get in front of real users, the last thing you want to be worried about is infrastructure problems or even infrastructure design. So with Cloud Service APIs, you can essentially take all these pieces from all these different providers and connect them and then sort of mix in your special sauce that makes your application unique and special and valuable and you're up and running in far less time with less infrastructure worries. All right. So there's a couple of technologies in play here that are very common across APIs. I'm going to take a sip first, although this isn't my water. I don't know whose it is. My throat's drying up, so. It takes Norwegian. All right. So there's a couple different ways that you communicate with APIs. And there's really two modes. There's sort of like this half-duplex mode where you send a request, you wait for a response, or a request is made to you and you give a response. And it's sort of synchronous, not always, but it's not two sides talking to each other at the same time. So the first thing that you do with these half-duplex communication methods is you be pushing data in or you be pulling data out. 
So if you're sending emails with SendGrid, you're sending them the information for them to generate the email to get sent. After you send a bunch of emails or after you've made a bunch of phone calls or text messages, you might want to pull your logs out. So the technology that's most common for this is the HTP API. Frequently called REST API, I'm going to sort of avoid going into the intricacies of whether or not something is REST, but for marketing's sake, let's just equate HTTP and REST APIs together. So this is like the real workhorse of the cloud services industry. So if you go back to Amazon Web Services, which was like the first really popular cloud infrastructure company, what really made them establish their position was that they provided APIs for all of their services. So you could provision servers just via API requests, or you could upload S3 completely via API, you didn't have to FTP everything in. It made the entire thing automatable and basically gave you full control over the resources in a way that hadn't really existed prior to that. Another thing that needs to happen sometimes is that you need to be notified. So imagine if you have SendGrid set up to receive your incoming email, somehow they need to tell you that, hey, there's a new email here for you and you need to do something with it, or for Twilio when there's a new incoming phone call. There's a new call coming in, how do you want to handle it? And generally the way that these get handled is via a technology concept called Webhook. So Webhook is not a standard, it's more of a style, but essentially what Webhook is, is a way to assign a URL to a specific behavior so that your application can be notified when that behavior happens. And I'm going to have lots of examples about this in a second, but I'll wait for that. So and then the last sort of thing, these full duplex situations where you want to interact in real time with something. This is becoming more and more common. And the technology that is used for this is WebSockets. And I put a set around there because there are other methods for doing this, there's long polling and 10 different other bastardizations of HTTP to make it so that you could have long live requests. But WebSockets is really where it's going and where the future of real time conversations between your application and APIs is going. And the reason why it's really taking off is because it's so easy to implement in the browser and that's the most common place that you want to actually send data back and forth in real time. And so WebSockets is really where things are heading. All right, so an example of REST API. So this is, REST APIs are essentially the glue between your code and the service provider. So if you have a function you want to send SMS, then you make a post request and you say, here's the three parameters that I want to send with it. The service replies back and say, all right, we sent the message, here's the ID you can look it up later. So REST HTTP API is just a simple request. They're usually based on JSON or XML response types and messages. And they're just a way to communicate very simply. And because they're so simple, they work for so many different scenarios. So they're really just the glue between your code and the service provider. Now WebHooks are sort of like in a reverse API. 
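To make that request/response picture concrete, here is a minimal Ruby sketch (Ruby being the language the demos use) of the kind of send-an-SMS call just described. The endpoint URL, authentication scheme, parameter names, and response fields are hypothetical stand-ins for whatever a real provider documents, not an actual API.

  require 'httparty'
  require 'json'

  # Hypothetical SMS endpoint -- substitute your provider's documented URL.
  SMS_ENDPOINT = 'https://api.example-sms-provider.test/v1/messages'

  # POST the three parameters described above as a form-encoded body.
  response = HTTParty.post(SMS_ENDPOINT,
    basic_auth: { username: 'ACCOUNT_ID', password: 'AUTH_TOKEN' },   # assumed auth scheme
    body: { from: '+15551230000', to: '+15551239999', body: 'Hello from the cloud' })

  # Assume the provider replies with JSON that includes an id you can use
  # to look the message up later, as the talk describes.
  message = JSON.parse(response.body)
  puts "queued message #{message['id']}"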
So you may have seen this like if you use GitHub and you use a post commit hook or if you, Heroku has post commit hooks, App Harbor has post commit hooks, basically a way for your application to be notified when something happens. So in GitHub's case, when I commit new code, it'll go off and make a post request to the URL that I have on there. And it'll tell me, hey, new code has been committed. And then I could have my build system or some other system download that code and do something with it. I could post my campfire chat room saying that new code has been pushed and basically stay in touch with that. And what's nice is that it's just a simple post request to any URL. And because of that, they're really easy to implement. If you can handle a post request, if you know how to handle a form submission on your website, then you know how to handle a WebHook. So these are becoming more and more popular and thankfully, because I love them, they're just so simple and useful. So here's an example of a WebHook. So this would be a service provider making a post request to your website. This is roughly what it looks like when an incoming call comes into one of your Twilio phone numbers. They make a post request to the URL that you specify. They give you some information about the call. And then you reply back with XML that controls the phone call. So if you want to speak, text, play audio, connect people into a conference room, anything that you can do with the phone, you basically send back simple XML commands from your application and that Twilio reads that and decides what to do. And the conversation goes back and forth until the phone call ends. So WebSocket, there's no real good way to put this on a slide. So imagine going back to full duplex. I don't know if you're familiar with how speaker phones work. Back in the day, they were half duplex. So if you were talking, it would cut out the audio from the other side so you couldn't actually both talk at the same time. And now they're all full duplex because they use noise canceling techniques to prevent feedback and such. So now you can talk on phones and still hear the person on the other side at the same time. And that's really sort of the kind of communication that WebSockets and the related technologies allow for APIs. All right. So I'm going to do a couple of examples here. I'm going to grab some more water again first. We're going to first start out by doing a simple incoming email thing. And actually, if you want to get out your phone or laptop, you can participate in this demo, in these demos. And at the end, you have to participate if you have a laptop because I need people in the audience. All right. So I'm going to go over here to my editor. And what I've got is a simple Sinatra website. Sorry. The obsessive compulsive part of me needs two spaces. All right. So this is Sinatra. It's a lightweight Ruby web framework. And I just want to say I've been working with Sinatra for about three weeks. So I've previously seen a lot of.NET. So if anything goes wrong, I'm going to be counting on Rob back here to help me out because he knows it way better than I do. But essentially what we've got here is we basically have a, it's a route handler that says when a get request is made to the root URL, then do this action. So what I'm going to do is actually I'm going to start off by sending an email with Send Grid. 
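For the incoming-call webhook just described, the handler on your side is nothing more than a route that accepts a POST and replies with XML. A rough Sinatra sketch follows; the parameter name and XML elements are meant to suggest Twilio-style markup rather than reproduce its exact spec.

  require 'sinatra'

  # The provider POSTs here when a call comes in to your number.
  post '/voice' do
    caller = params[:From]        # assumed parameter name for the caller's number
    content_type 'text/xml'
    # Reply with simple XML commands that control the phone call.
    "<Response><Say>Thanks for calling, #{caller}</Say></Response>"
  end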
So the first thing we need to do is we need to basically create a hash of all of the options or all of the data that we want to send along to Send Grid. So we're going to send this in the HPV, the body. These are just going to be, these get translated into post parameters when the request is made. So I'm going to start out by saying, all right, I need a parameter with my API key. And I have that in a, in a global. And then I also need to send along my API user. I have that in a global as well. And then I need to say, all right, who do I want to send this email to? So I'm just going to send it to myself. And then I'm going to say it's from, we'll just use test.json.com because I have that configured with Send Grid. And then we'll go down here and say subject. And you see rocks, got to pander to the audience whenever possible. All right. And then the text of the message will be like, this is the best conference I've ever been to. And that's actually not a lie. So I'm not just saying that. All right. So I'm going to use an HP library called HP Party. And what that does is it's just a thin wrapper that lets you make HP requests. And I'm just going to say make a post request and then put in the Send Grid. Send Grid.com API.mail.send.json. So this is just the API endpoint that accepts your request to send a new message and returns back to json. And then I'm going to pass in the options. And I'm going to just output the body. So if you're unfamiliar with Sinatra, the last line, anything that you can put after the equal sign, you can just put there and it'll return it to the browser. All right. So now I'm going to save this. And I have this right here. So I'm going to do Ruby web.py. Oh, see, this is the problem. I just switched from Python to Ruby, too. So web.py is not a Ruby file. It's Python file. I know, shocking, right? All right. So now I've got this running on my local server. So now when I load up this request, this is going to actually go off and send the email and see that's the data that Send Grid gave back to me. Now, Send Grid doesn't actually give you very useful information back after you send a message. Some other services will give you like a representation of that message with more information. And I felt the phone buzz in my pocket. And if I go over here to my Gmail account, you can see there's the email I just sent. So all of that, zero configuration, no SMTP server to worry about, no separate infrastructure, maintain simple post requests, you can be up and running sending emails in no time. Send Grid also allows you to parse incoming emails. So what I want to do next is show you sort of what that looks like. And at the same time, I want to introduce you to a tool that was written by a former coworker of my atulio. And what request bin is, is it lets you inspect incoming HTTP requests. So if I go here and I say, create a request bin, I get this URL up here. And now every request I make that URL, so if I were to come back to the command line and say curl foo. Foo and bar are hard to type. And then put in that URL and hit enter. This is going to go off and make a request to that. And if I come back here, and if I request or refresh, you can see foo and bar is the request that I just made. And I can also inspect the headers. And basically it's a really, it's an easy way to debug webhook. So I'm going to take that same URL and I'm going to go over to my SendGrid account. And I've got a domain here. And I'm going to go ahead and put that post, that URL in here. 
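Reconstructed from the narration, the send-email route built a moment ago looks roughly like the sketch below. It uses the old SendGrid v2 mail.send endpoint exactly as described in the demo; newer SendGrid APIs use a different URL and authentication, and the sender address here is just an assumed placeholder.

  require 'sinatra'
  require 'httparty'

  get '/' do
    options = {
      api_user: ENV['SENDGRID_USER'],   # the API user mentioned above
      api_key:  ENV['SENDGRID_KEY'],    # the API key mentioned above
      to:       'me@example.com',
      from:     'test@example.com',     # assumed sender configured with SendGrid
      subject:  'NDC rocks',
      text:     'This is the best conference I have ever been to.'
    }
    # One POST request, no SMTP server to run or maintain.
    response = HTTParty.post('https://sendgrid.com/api/mail.send.json', body: options)
    response.body   # Sinatra returns the last expression to the browser
  end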
I'm going to say update host and URL. All right. So this is where you can, this is where you can help out. So if you go to your phone and you type in hi at email party.com, which by the way, I registered that domain just for this talk. So you have to use it because it cost me $10. So if you send, actually, you put anything at email party.com, then I'm going to go over here and I'm going to show you. So hi at email party.com, blah, blah, blah, blah, blah, blah. It's actually a language I made up. So you can also participate in this if you want. So you can see, here's a place where you get a lot more useful information. So what this has done is it's given, it shows, you know, whether or not it passed a sender, what's that called, SPF, whatever. I can't remember what it stands for right now. If it has attachments, what character set, who it's from, you know, all the information related to that email. So we'll see if anybody decided to play along. Thank you for participating, Mr. Dolby. His message was he sent his email to sexytime at email party.com, which means I just got a business model and raised $10 million in funding. Thank you for that idea. All right. So you can see how easy it is now to start accepting incoming email. And you can parse out the subjects and the message and basically, I don't know if you're familiar with Tripit or what's another service like it, but basically with Tripit you email in your travel receipts and it parses it out, it gives you a nicely formatted itinerary. So if there was something like Basecamp, when you want to create a new item, you can send it an email with the to-do in the project and then it goes off and creates that for you. And now you can build this in your own application. So if you're looking for a way to, you know, allow users to create new issues or have your support tickets routed through this, it automatically goes off into another system. Whatever you can do with HTTP web request, you can start doing with webhooks. All right. So let's combine these a little bit. And I'm going to use Pusher to sort of glue these two ideas together. So the first thing I'm going to do is I'm going to add an endpoint on my site for receiving the incoming email. So I'm going to say I need a post route at slash email. And then I'm going to, I'm just going to, when this message comes in, I'm going to use Pusher. So I haven't really talked about what Pusher is yet, but it's essentially Sakurai-O or SignalR as a service. So you can send messages between browsers and devices and websites and all of that. Any application that can make post requests can send messages and then any, like, sort of browser that's listening can act on those requests. So if I show you, I can show you here the, actually I skipped a demo. I'm going to go back and do the other one first. All right, so I'm going to show you how Pusher works before I get into tying Pusher to email. So the first thing I need to do is copy in some configuration thing. And I'll just save the type. I'm just going to copy in the route too and I'll explain what it's doing. So this is basically just saying, all right, here's my Pusher application. And then I have this endpoint. So anytime a post request is made to slash click, it's essentially going to trigger this event called click and send that out to everyone that's subscribed. And what the subscribers look like is this. So if I go into my view and look at the counter, what I have here is the JavaScript syntax for setting up Pusher. 
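The server side of that counter -- the /click routes that render the page and fan the event out to every subscriber -- might look roughly like this with the pusher gem. The channel and event names follow the narration; treat the exact gem calls and configuration as assumptions about its interface at the time.

  require 'sinatra'
  require 'pusher'

  # Assumed configuration; the values come from your Pusher application settings.
  Pusher.app_id = ENV['PUSHER_APP_ID']
  Pusher.key    = ENV['PUSHER_KEY']
  Pusher.secret = ENV['PUSHER_SECRET']

  get '/click' do
    erb :counter                       # renders the view containing the Pusher JavaScript
  end

  post '/click' do
    # Broadcast a 'click' event on the demo channel to everyone subscribed.
    Pusher['demo'].trigger('click', {})
    'ok'
  end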
So what I do is I first, I pass it in an app key to tell it, you know, which app to listen to. And then I say, all right, listen on this demo channel. And then I have a global clicks variable. And then I'm going to bind on that channel. I'm going to listen for the click event. And anytime one of those comes across, I'm going to increment it and then, you know, write it out to the screen. And I also made it so you can just click on it and increment it. So if you, once I get this up, if you pull out your laptop or your phone and you go to this and you click on it, we'll all see the number climb together. So but first I need to make sure I'm not returning a view yet. One second. Where's my get? Sorry, I have a bad, my notes are incomplete. That's right. We'll make it up. Get click, do and and this just needs to return the herb index counter. So that's just going to load that view. All right. So I'm going to go back over here. I'm going to say get commit and click demo. Get push. All right. So this is going to take a sec to push to heroku. Well that is going because it takes a minute. Does anybody have any questions? No questions. All right. All right. So it's received my application now and you can, if you have your phone or your laptop ready, this is going to be at ndc-demo.herokuapp.com. I'm going to go there now and then slash click. So this is probably error until the upload is done. So we'll just wait a second here. Hopefully we'll only have one or two other Heroku deploys in the rest of the talk because I used to do this talk with.NET and app harbor and it would be like ready instantly. So Heroku just takes a little bit longer. And good thing I'm not doing it with Azure, I'd get exactly one deploying and then we'd all have to go. So until later today when they fix that. All right. App is up. Internal server error. Which is great. Let's see here. You all are keep hitting it, which is great because then it's just tailing endlessly. All right. What did I do wrong? Get, click, do, er, next. Hang on. This is expecting an application variable or an instance variable that I don't have here. So we're just going to say push her app. Come back. All right. I was trying to minimize Heroku deploys. This does not minimize Heroku deploys. Oops. All right. I did say if my demo was failed that I would do an irish jig. Is anybody going to hold me to that? All right. So here we go. At the end I have a very, very fragile demo. So I might have to like do the entirety of Rain Dance because that one has a lot of moving pieces to it. But we'll get to that in a second. And any questions? No? All right. So I can tell you a story while this happens. So I'm actually like a.NET developer by history in Python recently. Good. A question. Thank you for saving me from the story. Not for push her. Because they just use simple HTTP requests they work with any web host or framework essentially. What was that? Oh, repeat the question. Thank you. He asked if it works any differently between Azure, App Harbor, Heroku, or Amazon. And the answer is no. All right. So this is up now. And it's still not working. All right. You can tell. I practiced this one 30 times. Too many. All right. Let's go back to it. I'll give it one more chance to try to fix it. I need raw. I don't know if you saw Rob's video yesterday. I need the thinking step back from a moment video. See, the view was not tracked. All right. One more time. Does anybody else have like really like bad self esteem commit messages? 
It's like, why did you get into this business? I wanted you to be a preacher, you should maybe you should consider it. No, just me. Okay. I keep telling people I'm a horrible programmer. Nobody will believe me. They keep hiring me anyway. Rob believes me. Thank you, Rob. That's why we're friends. All right. If it doesn't work this time, we'll move on to the email demo, which I have the complete sample for here. All right. Discover that process type. You can do it. I think it's called continuous delivery because you're continuously waiting. I'm beta testing jokes now because I have my cap stall. I only had enough jokes for one or two deploys. Hey. All right. So, is anybody else have this loaded yet? So if you start tapping or clicking on the number, you can see like this is distributed. This is all the events that are hitting your machine. So I'm glad that many people are clicking because later we have to get to 100. So I know it'll only take 10 seconds now. All right. So let's take this sort of like push your messaging thing. I'm going to close mine. You guys can continue to tap on that to no end if you want. I think I have more as much user engagement as like Farmville now. So I can be rich. Actually, did you ever see that app that was just clicking and incrementing the number? Okay. Never mind. I have too much free time. All right. So let's get back to email. So now what I want to do is I want to take it. So I'm going to accept incoming emails and then I'm going to write out all the messages on the screen from all the emails that you send. Now I've done a lot of live demos in the past where I put people's real text on the screen and I realize that's dangerous. But we're all going to be good boys and girls today. We're not going to say anything that's not safe for work or offensive to anybody. Right. Okay. Great. So what do I need to do? So I have a post handler here for email and now what I need to do is when the email comes in I just need to say, all right, pusher, make a request on the demo channel, trigger the email event. Yep. And then I'm going to pass in some data. I'm going to save message. Prams. Actually, I just realized. No, thankfully, I think Ruby will save me here. I was like, we're going to get the first ever like email based SQL injection attack probably, but there's no SQL and like I said, Sinatra will save me from that. All right. So I think that looks right. And then I need a route to actually display the results. So I'm going to say get email. I'm really bad at mixing between single quotes and double quotes. So I'm sorry. I even said earlier I was pedantic, but that's not the right word. Where'd it go? Push a BI key equals. And then we're going to return the index. All right. So essentially what happens is post the request comes in. I'm going to trigger the email event, which is going to pass along the text in the emails that you send. So thankfully, while I upload this one, I can go over here and distract you by configuring my domain to point to this application. So I'm just going to copy it from here really quick. All this loads. So now I'm going back in and I'm going to say any email sent to email party should be posted to this URL. All right. Update hosting URL. And then we'll go back and see how that's doing. Did the other people send me email? Oh, nice. All right. Wait, I got seven of those. Oh, because you all went to that URL, didn't you? Yeah. I'm going to be getting these for months. Like, just endless NDC rocks. 
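Put together, the email-to-browser wiring described here amounts to one webhook route and one page route -- roughly the following sketch, where the :text parameter is an assumption about the field SendGrid's inbound parse posts for the plain-text body.

  require 'sinatra'
  require 'pusher'
  # (Pusher credentials configured the same way as in the click example.)

  # SendGrid's inbound parse POSTs the incoming email here.
  post '/email' do
    # Push the body of the email out to every connected browser.
    Pusher['demo'].trigger('email', { message: params[:text] })
    'ok'
  end

  get '/email' do
    erb :index    # the page whose JavaScript listens for the 'email' event
  end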
I will never forget how awesome this conference was. All right. So that is now up. And all right. So I'm going to say hi at email party. Oh, I got to pull up the viewer first. Otherwise, it's not a very good demo. Not an application error. Yes. I live to work another day. And then from my email, hide email party.com and say, remember you all promised. To be nice. Send. That goes out, makes a post request, and I get a JavaScript error. Just like every good web app. Undefined. I must have passed the wrong parameter name. Somebody else sent in. There we go. Thank you. Let me refresh this. Really I have written code before. I got a message. Oh, see the extra s right there? Get you every time. Wait, where did my browser go? Did anybody else see this when we came in today? Have you all seen this video? Oops, wrong one. That one. Has everybody seen this Rube Gold Room machine? Well, I won't subject you to the whole thing, but there's something entertaining while we're waiting. So later on, I'm actually going to do a Rube Gold Room machine with six different APIs connecting them all together. If you're in the overflow room, you should come back in about 10 minutes. This is also how I feel every time I deploy code. It's like somewhere off this is happening, and then at the other end is a working website. Here we go. Somebody did a script tag in their email. Thank you very much. I knew that would happen, except I didn't account for it. So if I go back here to my email, I can be clever as well, and I can say hi at email party.com, script, window.close, send, and, well, I didn't do a script tag around it. Oh, yeah, I did. Whatever. Well, you got the point. Let's just do a real once you can see it working. I don't know if anybody else is out there still trying to send email to my party, but hello. There we go. All right. So pusher, webhook, REST API could be sending messages, and we've got everything combined. So we've got about 20 minutes left. So I want to talk a little bit more. That's the extent of my coding because clearly it went so well that I should do lots more of it, but instead I should get back to my outline. So I want to talk a little bit about sort of what the API industry is sort of like where it's going. But first, a couple of tools that are really useful. So hurl.it lets you make web requests from the browser. This was started by Chris Wonstroth and Leah Culver from Now at GitHub, and Leah has got a couple of different startups. It's been acquired while I was at Twilio without asking for anybody's permission. Let's you basically do curl with a nice user interface on it to make web requests. So another one was request bin. You saw there that lets you log incoming requests to a URL. This was made by a Twilio coworker. It's sort of a version 2 of postbin.org, which was sort of similar. And then another one that's really useful when you're debugging and working with web APIs is this service called local tunnel. And with local tunnel, it takes any port on your local machine and gives it a public URL. So if you're debugging web hooks, instead of having to deploy to Heroku every time, you can spin up local tunnel and it gives you a URL. And then you can use that as your web hook URL, and then it hits your local machine. So this is really nice in Visual Studio if you have the debugger, because then you can have remote services hitting your application, and then you can actually debug it with real live data. 
There's another version that's showoff.io, and another one I found out today called tunneler with no vowels in it, of course. So there's a couple of tools that would make your lives a lot easier if you start working with a lot of APIs. So now I want to talk about the future, and then I got the big demo at the end. The reason I really wanted to go work at if this and that is because it's the culmination of sort of like the realization of all the great things that APIs bring us. So if this and that lets you connect all of these different services together because they all have APIs. And eventually if this and that is going to have an API that lets anybody create a channel, so now any APIs are going to be able to talk to each other in the world. And that will be completely controllable by developers out there. So if you have a service and you want to make it so that the pictures on your service automatically go to Dropbox, or if you have a social network and you want to let people post their Facebook things there but you don't want to build that, then if this and that will allow that. But this is sort of limited to the fake world of our computer screens, right? This doesn't really interact with the real world. So there's a couple initiatives out there. They're sort of taking APIs and breaking them out of the computer screen and really bringing them out into the real world. So the Pebble Watch is a Kickstarter project that just pre-sold $10 million in watches. What they want to do is they want to offer an API that allows anybody to post notifications to the watch. So it goes through your smartphone and then it goes to the watch via Bluetooth. However, this allows any cloud API now to be on your watch at any second. So imagine you take that incoming email one and if one of your most important customers emails into your support thing, you might want to know that. Instead of having to take your phone out and dig it out, you could get that notification right on your watch. Or in my case, anytime there's a new Pottermore book or something. All right. Hello. All right. So in another case, this sort of like creating inputs going the other way is this Twine project. This is another Kickstarter project. Essentially what this is is a sensor in a little box that's connected to Wi-Fi. So it'll sense temperature changes or moisture or vibration or noise and essentially make web requests with that data anytime that happens. So now we can get input from the real world into our web applications as well. So we could have it say anytime it shakes a certain amount, tweet that somebody has broken in my front door or anything like that. So this is sort of like bridging the gap between this online world of APIs and the real world thing. So I envision everything being an API. And there's no reason why when you scan your card at the hotel, why it shouldn't go off to their web application and be able to like the logic whether or not it unlocked should be up to the hotel. Why do you have to buy another system for that? We already have good ways to communicate. So like, or every time a door opens or every time a light switches on so you could track usage, like I would love all of these things to be API powered and be able to get them in and have all of them work together. And so that's why I'm going to work at ift. And ift is really, really serious about working with sort of this offline stuff and is actually more like really wants to get in that. 
And they've already announced that they'll be partnering with both Twine and Pebble to integrate with them. All right. Overflow room. You should watch now. So if you have a laptop, I would really appreciate it if you helped. You need to go to this URL. And if you have your speakers muted, turn them up. And if you get a security warning, just hit allow. I promise I'm not doing anything bad to your machine. And then again, turn up the volume. So what I've done is I've written a group Goldburn machine that uses a bunch of different APIs. Now, like I said, this is really fragile. So this makes me super nervous because I've wanted to do this for a year in front of people and it's just me at home doing it over and over again. It's not very exciting. All right. So what I'm going to do is I'm going to kick this off by making a commit to GitHub. And then it's going to go through a bunch of services, basically passing data back and forth. And then it's going to stop at one point while it waits for a reply for me via SMS. And some of the web hooks are not immediate real time. They take a couple seconds. So I'll do the Irish jig again or whatever will keep you entertained. And when we get to the end, there's a special surprise. And all right. So the first thing I need to do is I need to basically change a file and then go back over here. I'm going to commit my changes. All right. I'm going to now commit to GitHub and I'm going to switch over to the browser. And it's going to start quickly, but I'll explain what's going on. So git push origin master. All right. So it's waiting for the commit. As the commit comes in, it's going to do the post request. So you can see there's the message that I put in the commit request and send it off an email to sendGrid. Now sendGrid got the message and sent a text message to Twilio. Twilio received the, sorry, sent the text message to my Google voice number, which is now arrived. And now I need to reply to keep the chain going. So I'm going to actually reply with a special string that I put in here so that you can't mess with this. And then some random numbers to get around Twitter's duplication check. So when I send this back, it's going to send the message to Twilio. Twilio is going to receive it and tweet what I just sent in the text message. Twitter now, I have set up a listener on the user stream and it's gone off and created a Fogbugs ticket with the text of the tweet that I just sent out. Now Fogbugs has created the ticket. And actually if I go down here, you can see, well, it didn't show up down there yet, but it did create the ticket, which I will show later. Fogbugs is the slow web hook. Sometimes this takes up to a minute. So this is where I dance or sing or tell you or ask Joel Spolsky on video why this web hook takes so long to fire. It does give you sort of like, it does make one point that sometimes web hooks don't need to fire immediately to be useful. Like you don't necessarily need to know immediately that a ticket was created. So, you know, anytime within 30 to 60 seconds or even 5, 10 minutes can still be useful. And that wasn't enough BSing to make it through the delay, but this will fire. I promise. I've waited many times through this and it works every time. But this is where you're starting to doubt my ability and everything you thought was true about life and software and Spolsky and the Joel test and everything. But I promise you, it's going to fire. No, really, it will. Since it got passed Twitter, Twitter is the one that fails the most. 
Like the streaming API doesn't always pick that up. So, once it got passed that, I'm pretty confident that once this web hook fire is that we will finish. And then your computers are open and on this page as well. I never even gave you the URL. Oh, my goodness. So, if you, you probably have time, you can still pull out your laptop and go to rub-status.herokuapp.com. Did you do it? Okay, good. Did you? Okay, we got a couple. Okay, great. See, I told you it would fire. All right. So, everyone that has their laptop open, now you need to start clicking on click here because we need to get to 100 combined clicks before it's going to go to the next step. So, that went really fast. All right, we made it. It worked. Oh, but it's not quite done. So, I'm actually going to mute mine and you should turn yours up. So, they're now connected via Twilio Client Voice Over IP connection. So, in a moment here when the Norwegian National Anthem finishes, which is a beautiful song, by the way. I've listened to it 40 times in the last three days. I've almost made up my own words to it and everything like that. All right. So, they've got their audio up. I'm going to go to a different page here. And I'm going to wait until that audio finishes. This would have been great with 100 laptops, like just so orchestra and like, tear in my eye. It would have been great. So, they're almost done. People on video probably can't hear, but there's at least three laptops playing it. And as soon as they're finished, which it's way shorter when you're not standing on stage. All right. So now that it's finished, the people that are connected, I've dumped them into a conference room. So, if I talk into my computer, you can hear it coming out of your computer. Can you hear that back there? Yeah. So, that worked. So, anyway, there's my Rube Goldberg machine for APIs. If you go back to that, if you had that page open during that, you can see all of the requests that were made throughout the process. And I'm really excited about the future of APIs. And I hope that makes you excited about them too. And that's all I have. Are there any questions? I can't believe that worked. I'm so excited. I came all the way to Norway to get that demo to work from San Francisco when it did. What was that? Yes, you can mute your computer now. Any other questions? All right. Well, thank you very much. Have a good day. Thank you very much. Thank you so much.
Cloud Computing has reached full-on buzzword status. You're probably familiar with Azure and AppHarbor, but there's a wide world of Cloud Computing tools available to you outside of just hosting your application. SendGrid, Pusher, Twilio, Dropbox and a bevy of other cloud infrastructure products can significantly decrease development time and the need for costly infrastructure. In this talk John will discuss practical ways to integrate cloud computing into your applications with plenty of live coding.
10.5446/51024 (DOI)
there. We're describing integers, right? We're describing the ranges of integers in various programming languages. And it's odd about this because quite often we don't think about particular limits that we have on the integers and on the data types that we have within our languages. We know that they're there. We kind of hope that we have values that are always going to fall within those ranges. We know that overflow and underflow are possible and all those things. But it's interesting to consider just what these ranges mean in terms of larger level program organization. Okay. So integers, we definitely have questions of overflow and underflow and what kind of things are valid. Some of these things can go away. Quite often in many languages today, you have the notion of promotable and integer integral types so that you have integers that will always be unbounded, right? So you can keep on going and keep on going and they'll never overflow. They start building like bigger internal representation to go and handle, you know, very, very large sums and stuff along those lines. But the big question we always have to deal with is really just what is appropriate for our domain. So let's take a look at an example here. Here's a class called account and I'm sure you've seen this kind of thing as an example in many little presentations. It's almost like a fundamental object-oriented program, right? What do we know about this class? What's permissible with this class? Well, if we look at it, it seems like any integer can come in, right? We can basically go ahead and pass in any number as our initial balance. We can deposit values into it and they get added onto the balance. We can subtract them and that's a withdrawal and we can also go and get the balance, right? So all very simple things to do. What are the appropriate ranges for these things? Okay, now it's funny, there's no real constraint when we're looking at this now in the code. It seems like a reasonable constraint might be to go and say, look, we're only going to allow positive balances in our bank account, okay? And then that might be a particular type of bank account that we care about. No overdrafts might be the best way of going and looking at this. How do we change this code in order to go and make this sort of thing possible? Can we do it with the type system? It's kind of odd to think about. There is no, it depends on what language you're working in. Some languages don't have unsigned integer types. If they do, it makes things a lot easier. If they don't, then you're basically stuck going and putting constraints inside of this code in order to understand, in order to go and basically accommodate the conditions where things can go wrong, right? So for us, we probably want to go and have a check in the very beginning with our constructor. We'd also want to go and have a check with our deposit, verifying that what we have coming in happens to be positive and or non-negative. And then our withdrawal, we'd want to go and basically check to verify that our balance is never put into a negative state by something happens with the withdrawal. Now, of course, in doing these things, we're going to end up going and putting in all sorts of things, exception checking and stuff like that, putting our code into a state where a user can put, can do things to it, okay, they're going to put it into a state where it's not really valid any longer. 
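A minimal sketch of the guarded account being described, written in Ruby with illustrative class and error names; the checks are exactly the kind of run-time constraints that have to be written by hand when the type system can't say "non-negative integer" for us.

  class Account
    def initialize(balance)
      raise ArgumentError, 'initial balance must be non-negative' if balance < 0
      @balance = balance
    end

    def deposit(amount)
      raise ArgumentError, 'deposits must be positive' if amount <= 0
      @balance += amount
    end

    def withdraw(amount)
      raise ArgumentError, 'withdrawal would overdraw the account' if amount > @balance
      @balance -= amount
    end

    def balance
      @balance
    end
  end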
And this really has to do with the limitations of static type checking in languages and the limits of types in general. Now, this view of types that we have can also be extended out a bit. Since I like to go and put these kind of annotations when I'm thinking about things, you know, we can look at the particular values that are coming in to these particular functions and annotate them and say, look, you know, we've got things between zero and nintmax. Those are things that we can go and place in. GetDalance can return a value between zero and nintmax. That can serve as an understanding for us about what really is legal, what's permissible with this particular type of class. Now, it's funny with this. When we have this data representation view of an object, what's it like when we start using inheritance? Some ever used design by contract. You ever hear of this at all? Okay, it's an idea that was first articulated by Bertrand Meyer with Eiffel and then basically it was adopted by just about every contracts library you'll ever see. C-sharp has contracts libraries and such and such. And there are basically ways of going in sort of like asserting particular conditions, tying them to the code in such a way we can say, look, I have a precondition for a particular function. And if I know, if I pass in values that meet that precondition, I'm guaranteed that the post conditions will hold for that particular class. And as a result, it makes it rather easy for us to reason about things. Now, what happens when you start to use inheritance? Well, that's when things start to get a little bit strange. If we look at this, would it be permissible for us to go and have a, let's take a look at this class right here. It would be permissible for us to go and subclass this class and then do the checks inside of it to go and verify that the values coming in are only positive. Would that work out okay? A subclass of this class. Who wants to answer? Oh, God. Well, of course, you could do it, right? I mean, you could definitely go ahead and write a subclass of this class and then override deposit so that you check the incoming values, overwrite withdraw so you verify that you don't withdraw into negative. But the thing about that, though, is it would start to mess up with your polymorphic substitutability, right? You've heard of LISC off substitution principle? Yeah? Okay. Definite LSP violation when you do that sort of thing. And what's funny about this is that essentially you can look at the same thing from a data perspective also. From a data perspective, we can essentially say that subclasses can weaken preconditions and strengthen postconditions, okay? In other words, we are kind of allowed to basically accept more, okay, in a subclass, but we can guarantee, we have to guarantee more. We can't guarantee any less than what we have previously. And so there's a way of looking at this is like a data constraint that happens on the input values that you happen to have for a class and the output values that come back from it. And so it's funny because if you have this mindset, you're thinking about the constraints of things coming in, the constraints of things going out, the constraints are left with the class, it helps your reasoning about how things fit together within a program. Now again, that's all data. It's all constraints. Let's try looking at another example here. Suppose I've got my account class and it uses a tax table, okay? And here I have a method called apply and I pass a tax table into the account. 
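The apply method being described might be sketched like this; the names are hypothetical, and the point is only the shape of the collaboration.

  class Account
    # ...

    def apply(tax_table)
      # Ask the table for the rate for our category and fold it into the balance.
      @balance += tax_table.rate_for_category(@category)
    end
  end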
And what does that actually do? If we look at it, it basically goes ahead and adds onto the balance, okay, the rate for particular category. And you can see the category happens to be a variable which is part of this class, right? Now when you do this, what kind of constraints do we have around this particular operation? It's kind of funny because a lot of them are kind of silent in a way, right? If we have a constraint on balance, basically saying that it has to be positive or non-negative at least, we'd have to go and basically make sure that we can't have values coming back from rate for category which happened to be negative, right? Or else we could possibly put ourselves into a situation where we've got a negative balance that could be kind of awkward. The other constraint that we have here is kind of subtle. We have to pass in some kind of a category that rate for category understands. We have to pass in some data that rate for category understands. And of course, it isn't really specified in this code here, but you know, category could be an enumeration. It could be an integer, in which case you'd have to go and sort of think about, gee, am I handling all the cases for all, you know, you know, two to the 16, two to the 32 integers that there are? And I have to be able to handle errors gracefully. I need to be able to go and sort of make sure that all that works. So we've got constraints running in both directions when we do this sort of thing. Now have you ever heard of tell, don't ask at all? Have you heard of that idea? Okay, an object oriented programming. There's something that was written up by Dave Thomas and Andy Hunt back in the early 2000s. And the idea behind this is that essentially object orientation is much better when you are not asking questions of objects instead when you're telling them to do something for you, right? And it's a subtle thing because when you think about it, it's like, oh, what difference does it make, right? But it's subtle but it's also very valuable. It helps you go ahead and arrive at better encapsulation decisions. The thing about this code right here that's a little bit awkward is that we're asking the table for something, some integer. And that means we have to be very aware of what comes back. And we have to basically know, does that play well with the stuff that we want to go and add it into? Is it negative? If it's not negative, everything's okay. If it's negative, we're in trouble. We need to be aware of that constraint. And as a result, things are a little bit, you know, awkward that way. We have coupling between these two objects. Let me show you an example of something which doesn't have as much coupling. And it's because we're dealing with like a pure tell system. If you look at this, okay, take a look at what's going on here. Okay, here's our deposit method. I've added something new to this. I have a transaction record object and I tell it to go and record the balance and the value. Okay? Now, what can go wrong when I call this? Do I have to worry about the, I have to worry about the types of balance and value as much in a way? The important thing is that transaction record, the record method will take the full range of those values, right? But we don't have to care too much about what's being supplied back to us. We don't have to care about things being out of range, out of the range that we can handle. In essence, any problem that occurs is going to be a problem that's going to be on the other end. 
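A sketch of the "pure tell" version being discussed, with TransactionRecord standing in for whatever collaborator actually receives the notification.

  class Account
    def initialize(balance, transaction_record)
      @balance = balance
      @transaction_record = transaction_record
    end

    def deposit(amount)
      @balance += amount
      # Tell, don't ask: hand the data over and let the collaborator decide
      # what to do with it. Nothing comes back for us to validate.
      @transaction_record.record(@balance, amount)
    end
  end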
We give it to them and we're never even going to hear about the problem unless it throws an exception back in our face, right? We pass information in and it can choose, it can take that information and ignore it. It can choose to take that information and record it. It can do all those things. But we've basically passed the problem on, okay? It can't impact us any longer. And this is a kind of neat thing about object orientation is that essentially in many cases we can kind of organize our objects in such a way that we are not getting things from people and we're not getting things from people then we're actually not concerned about the things that they could pass us that are wrong. If we flip it around and say, look, we're just going to pass things to other objects in order to go and get our work done, we actually have a higher chance of not having as many errors. And it's kind of fascinating when you do that. Essentially what happens is if you look at things from a design by contract point of view, the contract that we would have for a method like this is just really skinny. It's all about going and saying, look, I pass you an integer. I pass you any valid integer you accept it, right? There's no particular constraint on this. And because there's no particular constraint on this, you know, it's kind of like we have a higher chance of producing things with fewer errors. So let me, people have seen this style of programming much at all. It's like it's all about notifications and things along those lines. It's cool, okay? And if you look around at some, like the end-unit testing framework internally has a very tell-don't-ask type style. So this is the J-unit testing framework in Java. And you'll see this type of pattern you're being used to over and over again. Some of the worst models you'll ever get into when you're working an object-oriented code are where you have some collaboration between yourself and several other objects. And it's like, I grab from this, I grab from this, I grab from this, I do things, pass it on to somebody else or change my internal state. And when you discover that you want to test stuff like that, you find, oh, terrible problem because now I've got to go and mock out all this craziness. I have to go basically know what state this will be when I call it, what state this will be in when I call it, and, you know, deal with time-dependent dependencies and stuff along those lines. When you organize your design in a way where you're pushing things out, often you're testing this way easier and your design's not being a lot more decoupled. Now, funny thing about this though, when you're doing this sort of thing in a design, one thing that tends to happen is you start to go and get like a bit of a dichotomy in your design, right? You end up having objects where you pass things to other objects, right? But then what do you pass? Well, here I'm just passing data and passing balances and values and stuff like that. As you move further and further into this tell-don't-ask style of programming, one of the things you start to do is you start to consolidate some of these primitives into bigger objects. They're just really like data objects, right? And so you're passing around these objects so you have like a layer of little messages that you pass around and then basically you have these hubs which are things that accept and receive the messages. Does that sound like anything else? It's kind of like networking in a way, isn't it? 
It's kind of like you have these autonomous things and you pass messages back and forth. People decode these messages, do things with them and stuff along those lines. So it tends to kind of dichotomize. And again, it comes down to the contract between these things being all about data, just like what we were seeing with our constraints looking a little bit earlier in things. So let's go a little bit further with this, looking at data again. Is there anything wrong with this code at all? I'm just going to camp on this until somebody tells me. I know you can't be happy with this. Or if you are happy and you want to tell me about it, tell me. Am I happy with this code? No? Come on, speak up. You'll tell me. Okay. Go ahead. Yeah. Yeah, that would be. You know, it's funny because what you're doing is you're presenting a solution, but what's the problem? What he said was, why doesn't the client just choose which function to call? That's fair. But what's the problem? Is there a problem here, though? Beyond the thing? Okay. So if you look at it from a single responsibility point of view, function's doing two different things, right? This is actually a very, there's an old piece of advice from way back in the structured programming days. And that piece of advice was never pass control flags into functions, right? And it's funny about this because you can look at it from all different directions. One of them is single responsibility. Essentially, inside, you've got to decode these things. Another reason why this is particularly bad is if you look at this and you say, I'm passing a control flag into this run method, how does anybody know what true or false mean, right? It's very error prone. And I don't know, have you seen code like this before? Raise your hands if you've seen code like this before. Okay. Yeah. Lots of people do this sort of thing. They just don't understand that there's a problem there. So essentially, true and false don't really mean all that much to us. It would be better if we were going and actually going and passing, you know, or calling the specific functions that we care about. Would this be better if we had like an enumeration and we pass in stepwise or continuous? Could do that. And then we'd have to go and say, well, we could do that, but why don't we just have separate functions for that, right? That's another argument. One reason why you might just pass an enumeration in could be this is part of an API and you want to go and have a very narrow API. You don't want to go and have a whole series of functions to do and do various things. So you basically make enumeration as part of your API. Possible. That could work. But anyway, there's something kind of smelly about this thing. I want to compare this to something else, which is a little bit smelly also, and see if we can find the underlying principle behind this. Okay. This is some Ruby code. What's wrong with this code? If anybody screams that it's Ruby, I'll have to get out of the fit, right? Anything wrong with this at all? I may not have the best example for this, but the thing that I think is funny with this, and this happens all the time in code, is that we have a layer separation here where we have one part of the program constructing a data string. And then we have another part of the program decoding that data string in order to go and basically do something. Okay. Now, think about that. We do that sort of thing all the time with like XML and JSON and stuff along those lines, right? 
And particularly, we do that sort of thing when we are passing from one system to another through a network or something along those lines. What gets to be really crazy is when you do that kind of stuff inside of your program. Okay? What makes that crazy inside of a program? Okay? What happens is you've got one program, one part of your program that's going in constructing something, right? Only for another part of your program to go and, you know, destruct it. Basically, go ahead and parse it out, right? So you hope that you basically get both ends of that right, okay? On one end, you're constructing something, and on the other end, you're basically parsing it. And essentially, whenever you go and modify the data format over here, you can modify the data format over there, right? And that's like a natural coupling between two different parts of the system. What's underlying between this example and the previous one is that both of those examples are cases where we can be lossy. They're cases where we can lose information in the program, right? Essentially, when you're calling this function over here, it's kind of like, well, true or false, what does that mean, right? You have to go inside the function to understand what true or false mean. And over here, it's kind of like, oh, construct this string and hope we did it right. And it's like, well, then we have to go and do the parsing, but we've lost information in the type system. We're in the explicit representation of the objects to go and sort of, you know, be able to keep track of it. So we are introducing the possibility of error. We're also making it harder to understand what's going on with things. I look at this as being like a smell I call private language, okay? And these are just two examples of this kind of thing. Private language is spring up quite often when people don't really see direct ways of doing things. I don't know if you've ever seen this sort of thing before where you have like some part of a very big function and what people do is they say, okay, I'm going to set this flag here because I discovered this problem. I'm going to go down further and further and further. Now I've discovered another part of the problem. I'll set this flag here and then down here, set another flag here. And then the whole bottom of the method is like, oh, if this is true and this is true and that's false, then do this. If that's true and that's true and that's false and that's true, then do this. You ever seen code like that at all? Yeah, okay. And again, this is a private language. You're building up one representation in order to decode it later in order to go and solve a problem. So anyway, it's a code smell. It's something I don't really, I don't think I've ever seen anybody write up or talk about. But the thing that's interesting to me is that this sort of thing shows up when you are thinking about things from a data perspective, okay? As much as we love behavior in objects, it's kind of like we can still see, as I was saying earlier, data at the boundaries when we pass a message from one object to another. We still see data in the parameter list. We see data in the structuring of like it's passed back and forth. We definitely see data within, you know, these little private means of communication that we kind of build up inside of objects. So that's a little bit awkward and it's a nice thing to know about. Okay, so essentially, sort of like summarizing that, translating the strings loses constraints. 
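To make the "lossy" point above concrete, here is a hedged sketch of the string round-trip versus passing a typed value; the names and format are invented for illustration.

```java
import java.time.LocalDate;

class PrivateLanguageExample {
    // Lossy: one part of the program encodes a date into a string...
    static String encode(LocalDate due) {
        return "DUE:" + due;                 // the format lives only in our heads
    }

    // ...and another part has to parse it back, hoping both ends stay in sync.
    static LocalDate decode(String message) {
        return LocalDate.parse(message.substring("DUE:".length()));
    }

    // Not lossy: the type system carries the constraint for us, nothing to decode.
    static boolean isOverdue(LocalDate due, LocalDate today) {
        return due.isBefore(today);
    }
}
```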
A thing that I find interesting also is that this is really something which happens all the time when you're dealing with error handling in applications also, okay? What's everybody's favorite method of handling errors in an application? Logging. What? Logging? Yeah, logging is good. Yeah. What about the E word, exceptions? Well, I guess it's overstated to go and call that a favorite, right? Anybody here love exceptions? No, okay. Yeah, I think exceptions are one of those kind of like beneficial evils in a way sometimes. The thing that's very easy to do when you're working with exceptions is to go and sort of, you know, as soon as you discover a problem, you throw an exception and you hope that somebody up higher in the stack is going to catch it, right? And you hope that the behavior when you throw that exception is well-defined. You know, is it really understandable to the person who receives it that this is something that I want to terminate the application for? Do I want to log it at a higher level? Do I want to retry things? It's a very easy way to go and defer problems. But the thing that's funny about this is it still has that same issue: what we do is we have local information about a context, we produce an exception, and we take that exception and we throw it back to another context where we don't have the inner context at all, right? We try to make a decision based upon that. So it's kind of like, again, building a private language between this piece of the program and the other piece. Can you get past that? Of course you can. I mean, you can add a stack trace into your exception. You can add in a context to go and get a hold of all the variables that occurred, you know, back in that lower context and stuff like that. But again, you're now in that position of writing a program to decode your program. At the upper level, you've got to go and parse that stuff and understand what to do with it, that kind of thing. So it feels like this thing of inner and outer contexts and private languages is rather pervasive around applications. And again, I think that we see that when we are looking at data constraints in particular areas of code. This kind of leads me to something else that people aren't really aware of that much these days. Anybody ever hear of this one at all? Postel's law? It's really been dawning on me over the past couple of years that the people who really understood design in the context of software development, the people we don't listen to that often, are really the architects of the Internet, right? There are so many little tacit bits of knowledge that are just kind of baked into the stuff that the IETF has done, the protocols that we use, you know, day-to-day TCP/IP, things like fail fast, all these other things. This one is just neat because it tends to be true of networking systems. It also tends to work out well with like Unix command line utilities. And you know what? Once you understand this one, you start seeing applications of this all over the place in all the code that you work in, okay? So what's this about? Be conservative in what you send and liberal in what you accept. What does that mean, okay? Well, let's say that you're writing some utility, okay? And your utility is supposed to accept tab-delimited, you know, values, okay? Just rows and rows of them. And then what you do is you basically tabulate those things and produce some number, that's like a summation of each one of the lines or something along those lines.
Should you throw away your input if somebody goes and puts a space rather than a tab between their fields? Depends on whether spaces are significant or not. I mean, if they're just numbers, you might say, well, I'll be a little bit lenient. I'll accept spaces and tabs, right? And maybe you don't even document that as you come up, okay, this is kind of just, I'll accept this sort of thing. It makes my program a little more useful to do that sort of thing. Now, when you're putting out your values, should you just put out tabs and spaces willy-nilly? No, pick one, okay? Always produce tabs or always produce spaces, right? And so what you're doing with this is what you're doing is you're creating like a funnel. You're basically allowing lots of people to produce input for you and being very forgiving in what you accept, but you're also going and saying, look, for me, you can count on me producing exactly this format and it's kind of like, that's how you deal with errors, right? It's kind of like you're being, you're being accepting of what is being produced for you. You're trying to go and leave the world in a better place by producing something which is a very tight format. So may you see examples of this in other places in programming at all? It's funny. You can look at LISC off substitution as being almost a variation of this at a very high level. It's kind of like when you have classes, it's kind of like they, subclasses should be permissive in what they accept but restrictive in what they produce. This, that whole precondition, post-condition thing I was talking about earlier, right? And when you think about this even deeper, you start to realize that this is fundamental to all systems, not just networking systems, not just computing systems. It's like when you, when you build pipes like in a plumbing system, isn't one end a little bit wider to connect to the other one in a way and it's a little bit narrower? It's like there's this thing where basically you want to be permissive on the input and then sort of like more restrictive on the output. And by going and doing that sort of thing, you end up building robust systems over time. Again, this is something that you start to see as a result of looking at, you know, data formats and stuff along those lines. One of the weird experiences I had as a programmer early on was working at a biomedical company and I was lucky enough to work trying to develop translators for, or parsers for a data format that's, that was designed by some biologists. And generally that's okay because biologists are very good at producing data formats, isn't that true? Now it's, it's typically that thing. Quite often people, they know their domain, they feel they can do this sort of thing, they do it and they make some mistake and they don't really quite understand what the mistake was. The data format I was working with is something called flow cytometry standard, FCS. And there was a very cool thing about FCS format. You could save data in comma separated value and binary. You could save it in arbitrary precision numbers. You could do this. You'd have all these data formats and headers and all these other things. You could save data in a million different ways in FCS format, which meant writing a writer was very, very easy. Writing a reader was a real pain, right? Because you had no idea which of the end formats people chose to go and write things out in, right? 
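A minimal sketch of Postel's law applied to the tab-delimited utility described above: be liberal in what you accept (tabs or runs of spaces) and conservative in what you emit (always exactly one tab). Purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

class DelimitedSummer {
    // Liberal input: split on tabs or any run of whitespace.
    static List<Long> parseRow(String line) {
        List<Long> values = new ArrayList<>();
        for (String field : line.trim().split("\\s+")) {
            values.add(Long.parseLong(field));
        }
        return values;
    }

    // Conservative output: always exactly one tab between fields.
    static String formatRow(List<Long> values) {
        return values.stream().map(String::valueOf).collect(Collectors.joining("\t"));
    }
}
```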
And I think, you know, with that sort of thing, if anybody that was on the team developing this standard was really aware of that, they would have said, oh no, we have to be restrictive just because this is going to be a pain to go and write, you know, readers for this. And I think, yeah, essentially you don't see developers making that same mistake all that often, you know, because they are aware of the pain involved in receiving something and having to go and deal with it. So it's funny about this. Even though you may not have heard of Postel's Law, it's probably something you've encountered, something you probably almost have baked into you by now. You almost have this sense that that's just the way that you organize things. And so, yeah, it's kind of an interesting thing to be aware of. So let's look at data one last time. What's wrong with this code? You know, if nobody finds anything wrong with this code, I'm just going to walk through it and see what happens. Sorry? Too many dots. Okay. But you know, when you think about it, that shouldn't be a problem because dots take up very little ink, right? So they're green. You know, dots are green relative to the other punctuation we have in languages, right? Okay. That was a joke, right? This is a classic example of a violation of the Law of Demeter. Okay. You've heard of the Law of Demeter, right? And the notion is that essentially when you have an object, you shouldn't be going and sort of letting your internals out for other use, right? And there's a bunch of reasons why this can be awkward, okay? One of them is that it basically puts the burden on the person who's using this. They not only have to know how to deal with an account, they have to know that an account has a calculator. They have to know that a calculator has a superannuation table and they have to know the superannuation table has cells and you address them this way, and so you've increased the burden on people trying to understand your stuff, automatically, right? Beyond that, from a dependency point of view, it's possible to break code that is written this way very easily by going in and saying, well, maybe I don't have my calculator on the account. Maybe I have it on another object that's held by the account. Anything you do to go and alter internal structure can go and alter these chains of objects and force recompilation or force test errors and stuff along those lines. And, you know, the other thing too as well is just from a compilation point of view in a statically typed language, anytime I touch any of these intermediate classes, it's probably going to force this to recompile. In a dynamically typed language, you probably want to go and force the rerun of all the tests for all these intermediate pieces also, right? So, you're convinced it's bad? It's kind of bad, right? What about this? Is this code okay? It's Ruby code. That makes it inherently bad, doesn't it? Anyway, I'm not going to go and paint this as being an ideal vision of what code should be or anything along those lines, but I feel more comfortable with this code than with the previous code. This code, even though it's a little bit terse, you know, it's a little bit weird, relies upon something which is more functional in style than object oriented in style, okay? What we have here is we have a series of transformations. We basically say we have a list of events, we map each of the events to a date.
We go and we pull out the date of each of those events, we sort all those dates, we do a unique on those, which basically goes and gives us all those dates uniquely, it removes all the duplicates. Then we call each_cons(2), which is something that goes and gives me each consecutive pair of those things. Then we map those pairs into the difference of the dates and then we do a frequency histogram and we select certain things within certain ranges and stuff. Now I'm sure you guys looking at this, you're just like, oh my God, that's horrible, it's just terrible. It's code written by mathematicians that have no design sense or anything like that at all. There are much tamer examples of this sort of thing that I feel much happier about, but the style itself, I don't feel bad about regardless, okay? The thing about the style which makes this useful and interesting is that it's fully transformational, okay? Essentially, there is no state over here in events which is being modified. When I do a sort, I get something completely new out, okay? I get basically a completely new array of things and then when I do a unique, I get a new array of things also, and there's nothing I can do in a later part of the sequence that's going to change anything earlier in the sequence, okay? The other thing too is that generally each one of these things I'm working with is accepting an array and returning an array back, right? So we don't have that big type issue that we were talking about a little bit earlier of like, okay, here it's like I've got five different types of things all the way down my chain. Here, it's just basically a bunch of array transformations giving me the things that I want. Can you still get into trouble with this? Yeah, I mean somebody could take the map operation and move it off of arrays, but they'd be shot, right? I mean that would just be a terribly nasty thing to do, it would break every code base in the world. But it's interesting to notice that there is this thing where in a particular context, this Demeter stuff is actually okay and in another one it isn't quite okay. And a lot of it comes down to change characteristics. How easy is it to change the code when you're doing these things? So it doesn't feel like there's any strict rule with that. The thing that I think is neat about this is, and I was looking for a word for this, in category theory, which gets, you know, bandied about in like, you know, the Haskell and functional programming language communities, there's this concept of something called an endomorphism. And an endomorphism is essentially a function which accepts a type and returns the same type back. It can do things to the values, right, but it accepts that type and returns another value of the same type back. And you'll see that roughly here, most everything we have is endomorphic. We accept an array, we return back an array. We accept an array, we return back an array. And we're able to go and chain things together in a very nice style that does things well. Now, has anybody ever seen a framework called JMock at all? Okay, some of you have. There are also variations of this in the .NET space, but it's like those very strongly DSL-ish testing frameworks where you're able to go and say, you know, this.should equals this, da-da-da. And just, you have long message chains that make the entire thing read like English. And you might look at that and say, well, gee, isn't that the same kind of thing, right? But it's basically going ahead and sort of like dealing with this trade-off (a rough Java translation of this transformational style follows below).
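The rough Java translation promised above: the same transformational, endomorphic pipeline written with streams. This is a loose sketch, not the speaker's code; the events are already reduced to their dates to keep it short, and Ruby's each_cons(2) has to be spelled out by hand.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

class EventGaps {
    // dates -> sorted -> unique -> consecutive pairs -> day gaps -> frequency histogram
    static Map<Long, Long> gapHistogram(List<LocalDate> eventDates) {
        List<LocalDate> dates = eventDates.stream()
                .sorted()
                .distinct()
                .collect(Collectors.toList());
        return IntStream.range(1, dates.size())          // hand-rolled each_cons(2)
                .mapToObj(i -> ChronoUnit.DAYS.between(dates.get(i - 1), dates.get(i)))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }
}
```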
The trade-off between having things which can cause trouble and break later when people move things and having something which is a bit more expressive and quite often relying upon this notion of having the same type used prolifically across the chain of operations. Again, this is something you basically see when you look at the data that goes between things. Now, it's funny about all this. The topics I've discussed so far have been kind of, you know, mix and match. I think the thing that's really common is basically looking at the data representation between things. And I think that's something that we can try to go and basically pay an awful lot more attention to as we develop code. And I don't see anybody talking at that very much in design, so I wanted to share it with you guys. I did want to go and basically mention a couple of things at the end though about the law of demeter. We just basically saw cases where what we consider to be the law of demeter is not quite the same as what we expect it to be. There's other cases where basically undemetered code is okay. So when you are working in a situation where you have this kind of thing going on, can anybody explain or anybody describe what systems this sort of thing tends to happen in more than others? I mean, do you see this in, you know, data acquisition systems or compilers? I don't know. I tend to see this sort of thing a lot in database-centric applications. I know about you guys, right? You ever see that sort of thing? It's like I have a bill of sale and I go and I get this thing, dot this thing, dot this thing, dot this thing, because essentially you have records and sub records and sub records based upon what your data scheme happens to be, right? So how do you get rid of that when you're working in an application? How do you adhere to the law of demeter when you're doing that sort of thing? It's kind of hard, isn't it? I mean, one thing you can do when you have a data model is you can say, look, I'm accepting, I'm going to basically say that this one thing is primary like the bill of sale. And even though it has sub-objects, what I'm going to do is I'm going to place all of the methods that were on those things, try to push them up to the top and basically have high-level operations that allow me to manipulate all the substructure without having to care about it all that much. Does it work? It can at times, right? But we always have to have a lot of things that are not in the application. And OO applications are database centric between having like an overarching thing which is a container and then all these things underneath it need to go and change also. So it's very typical that when you're working with a database centric application that the law of demeter stuff doesn't quite work out as well. If you're doing things that are kind of like workflow-y, right? It's kind of like, I have this one processing step, I pass this thing through and pass it through another processing step and another processing step. And quite often, you know, you can have as many demeter violations. You're not pulling into things and stuff like that as much. I had an experience years ago which kind of opened my eyes to something rather fascinating. I was visiting a team that had the worst law of demeter violations I've ever seen in my life and it took me a long time to figure out why they were doing it and why it was happening. 
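A hedged sketch of the "push the methods up to the top" idea for the bill-of-sale example mentioned above; the types are invented for illustration. Callers use high-level operations on the aggregate instead of navigating its substructure.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Instead of billOfSale.getLines().get(i).getPrice()... chains, the aggregate
// offers the operations itself and keeps its substructure private.
class BillOfSale {
    private final List<LineItem> lines = new ArrayList<>();

    void addLine(String sku, BigDecimal unitPrice, int quantity) {
        lines.add(new LineItem(sku, unitPrice, quantity));
    }

    BigDecimal total() {
        BigDecimal sum = BigDecimal.ZERO;
        for (LineItem line : lines) {
            sum = sum.add(line.subtotal());
        }
        return sum;
    }
}

class LineItem {
    private final String sku;
    private final BigDecimal unitPrice;
    private final int quantity;

    LineItem(String sku, BigDecimal unitPrice, int quantity) {
        this.sku = sku;
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    BigDecimal subtotal() {
        return unitPrice.multiply(BigDecimal.valueOf(quantity));
    }
}
```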
It was a giant Java project and it turned out that what they were doing is they were, every time a visitor visited their site, they loaded a user and when they loaded the user, they loaded up every bit of data that could possibly be accessed from the user and used in that session, right? So you can imagine what that was like, right? So you basically each time through, you have the session, you navigate to the user, you're able to go and get the entire history of everything, da-da-da-da-da, all this stuff. And I was like looking at this and saying, this is just madness. And so we'd look at particular functions and it's like, okay, you've got this one sub-piece and it goes and talks to this, which talks to this, which talks to this, which talks to this. And it took me a long time to understand why that was happening. And the reason was essentially what I was saying. They loaded everything all at once, right? And in most database-centric applications, what you end up doing is saying, look, I need to go and have, you know, I need to have a bill of sale, I need to have a transaction record, stuff like that. And you're basically dealing with one piece plus it's subsidiaries, it's immediate subsidiaries and you get through joins and stuff like that, only the piece that you care about. And so as a result, you're going to have a constrained sphere of what you're working with, right? And it was kind of fascinating to notice that because they didn't do that, they ended up with far worse demeter issues than you would have otherwise. And it kind of led me to this notion that essentially it's like databases are better, are good not just for storing data but also the constraints that they place on your development. They kind of force you to go and basically sort of get down to a smaller crux, a smaller change set and the things that you need to pull in and modify and then push back into your database. So it seemed kind of interesting that way. So yeah, the law of demeter is a rather powerful and interesting thing. I think the thing that makes it overlap in this talk a bit is that it is about representation. When you have a structure that looks like this, essentially you are making the relationship between pieces very obvious and that's good at times except if that structure changes. So cases where you want to make things very explicit are cases where the structure of the data holds information rather than just the data itself. Some cases where that sort of thing happens, things like parse trees and compilers and stuff like that, it's not that you have an equal sign and you have a variable and you have a literal, it's the entire tree structure that has a meaning. So as a result, you want to have structure data like this. It ends up looking very undemetered in a way, right? But I think as programmers we just need to really be aware of when we need that kind of thing, when we need a complicated data representation and when we don't. And you can essentially go all the way back and start looking at all the different ways that you can see data each step in your program and quite often it gives you insights that you would not necessarily have had otherwise. So anyway, that was my weird talk. It's all about data representations across things and it ties in different aspects of design. Any questions or comments? Yes? Yeah, sure. There's one and there's the other. Yeah, okay. 
So the comment is basically that the problem there was that it keeps taking you deeper into the object, and these here are all off of the same object, right? And that's fair. Basically all these things are things which are Enumerable or Array, and so they are just aspects of the same protocol. So yeah, that's essentially the thing. And quite often, I mean, it's funny because now that this has become more popular, many people are going and adopting this point of view. They're basically saying that a key aspect of Demeter is the amount of type exposure that you have. It's like, how many types are you really exposing in a context? And in this case it's generally just one, and that makes things a lot easier to deal with. That's fair. Yes? Okay. Okay, the question was whether I would say that if we're using an ORM we would end up with a structure like this, and that's bad, and that we should avoid it? I think it's really that data has to be structured in certain ways, you know, in order to make it easy to access. The representation that you use inside of your program doesn't have to be exactly the same. It's probably going to map quite easily. My bias with that is always to pull up only the amount of information we need, and in the form that we care about, into objects that may not look like this at all. But there's a real tension there because then you can end up going and producing a lot more classes to represent the same data in a way. So I think it's just really a fundamental tension. And, you know, the indication that it really is a fundamental tension is the fact that ORM is not a solved problem. It's always causing people trouble. That's just inherent to the mismatch between database schemas and objects. So, okay. Any questions or comments at all? Okay. I think I'm a little bit early, but thank you very much. Thank you very much.
It seems that the word 'design' has been surrendered to UX over the past few years. We don't talk about the internal design of software as much as we used to. However, there are still things to learn. In this talk, Michael Feathers will outline and codify some of the gems that he feels are less understood today.
10.5446/51027 (DOI)
I'm a software architect and active developer at Making Waves, where we developed mostly web-based solution. Today I would like to tell you a story how we improved the process after vendor transition in legacy project, what we learned, what are our challenges, and what you could do in your projects to improve it. I have a question for you. How many of you worked or works in the project which started before you joined the project team? I see many of you, so maybe this talk can be used as a guide of set of hints how to improve the process in existing project. But before I go on to the main topic, I would like to show you, oh, it doesn't work. I would like to show you, it doesn't work either. I would like to show you Maslow's hierarchy. Did you hear about this? It's from psychology a bit. Maslow was American psychologist who proposed the human's hierarchy of needs. In this hierarchy, in this concept, the basic needs have to be satisfied before we think about higher needs. So human start thinking about breathing and food because its need to survive, to exist. And if those needs are met, we start to think about security, about safety. It can be safety of employment, it can be social security, it can be employment security or health security. Only then, if these needs are also satisfied, we think about social aspects, about having friends, having family. But still, if we miss security in somehow, then we don't think about this higher level needs. And then at the upper level, Maslow placed self esteem or need for respect. Then at the top, he placed the creativity, which means that if we want to be creative, we need to feel, we need to satisfy all the needs below. Why am I talking about this? Because recently, Scott Hanselman on his blog proposed similar pyramid of needs for software development. At the very bottom, he placed need for having revisible software. It means that you need kind of source control system to have a revision of your sources, to be able to see the version of the source, to be able to check who was the author of recent change, to merge branch. So if you in your company, I hope not, use a shared disk or send a mail with a code or flash documents or flash sources or PSD files, you don't met those needs. I hope it's not here. It's not applicable here. On the second level, there is need for having buildable and de-playable application. It means that you need to be not only able to get the latest source code, but also able to build it, deploy it as easily as you can build it. Because it's not enough if you can run it on a local development machine. You need to be able to run it somewhere else to be useful. Probably also to build it automatically on a build server somewhere. Then at the third level, Scots proposed having maintainable software where you are not only able to build the software, build the latest version, but you are also able to fix the bugs and hopefully verify them. So not like a blind man fixing, checking if it works and checking how many mails from the customer will get. At this level, you need having at least some kind of test. It can be manual test, but you need to be able to verify the process, how it works. At the fourth level, there is need for having refactorable software. You change the source code not only to fix the bug, but you change it also to improve the internal quality. At this level, you need a set of automated unit tests probably. Because then you can refactor code without fear of introducing changes or regression bugs. 
Do you have an idea what could be on the top of this pyramid? There is pride. So your code is not only refactorable, but you are proud of it. You are not afraid to show it here. We've excuses, oh, it's only proof of concept. Don't look at this line. It wasn't mine. Okay, but let's get back to reality, to our project. When before signing contract, our team was ready to start the job. We were able, before signing the final contract, we were able to view the source code and evaluate it. So it was the goal before starting the project to look how, what is the current state before this transition period has started. We found as in many projects, in many projects you work probably as well, a big technical debt collected over years. So there was changes made only to fix the bug, not only to improve, not to improve the internal quality or to maintain software in long term, with long term goal. We found some clever solutions like manual cookie maintenance or implicit sharing of images between solutions. It was clever, but it was hard to figure out how it worked. It was easy to miss after transition when we did the fresh new deployment. It was the goal of the project. We also found, we also made some code duplication analyzers. Did you do such an analysis in your project? We use clone detective. It's a plugin for Visual Studio, which analyzes how many set of lines are duplicated between files. We did analysis for one of our best projects, well, well, architectured, well developed project. Do you have a guess what could be the percentage number of code duplication? More or less? It was 4%. And it was mostly because domain objects were sometimes similar to DTOs, data transfer objects and properties were also similar. But it's a high end, I would say. In regular projects, in making waves we did, it was mostly content based projects and the duplication code was around 8%. And it's still, if it's below 10%, it's fine because it's maybe duplication. It's not a real duplication. It's not a logical duplication. It's rather physical duplication of names of the properties between objects. But in this code, it was much higher. It was much, much higher, I would say. So by that, I mean, it was not extraordinary project. It was project as you may have in your company, which you can also improve it. Maybe you will see how. We also found a lot of unused code as you may find in your project. When someone is not sure if this change is the final change, he will add two to the name of the method and leave it later because he just forget or just in case. So the goal for the first day of the project when the contract was finally signed was to transfer the source control, source code. So we looked at FIFA case. We found there were multiple SVN repositories. There were developers repositories. There were production repositories. And the code was manually migrated between those repositories. It was kind of crazy approach. And it was a lot of code duplication because it was intended to duplicate the code between developers repository and production repository. But there were some missing revisions because someone by accident forgot to migrate or made the changes directly on a production repository. We also faced some disc corruption so that some revisions were not able to be restored from the backup. Sometimes it was also challenging to get the latest version of the code deployed to the production. Which means that assemblies were not always versioned properly. So there could be a few versions of assembly 1.0.0.0. 
And we had to figure out which one actually was deployed to the production. And we had 13 applications running in parallel. So each application could possibly use different version of same 1.0 assembly. And we also met some changes made directly on the production. I think it's also the case in, it might be the case in your project. When someone needs to make a quick fix just to run the production and he forget to commit it later to the source control system. So now you can think about this project and think about your project, where you are, at which level of this hierarchy. Maybe you met this revisable need, maybe you are much, much higher. So the source control transfer task was a bit tedious for us. A bit challenging to get everything from the latest version, so called the latest. But then we did on a second day automating build. Actually it was introducing continuous integration practices, which means that we integrated our work every day with having multiples build run after each check-in. Who of you have a continuous integration environment? Many of you. And probably it looks like that. That when developer commits to source control, build is triggered automatically. And everyone is notified of the build status. He can see, check it on the dashboard. He can get a mail. He or she. And also some kind of test, possibly, probably unit test are run. And at the end, executables, the result of the build is stored somewhere when you can access it or where you can check it. It's also not the end when you can build the environment, when you can build the software. You should be able to test it internally. And in our case, we had some internal test strategy. It was mostly manual test. We got a huge Excel with a lot of manual test, which required army of testers. It was immoral to ask our developers to run them manually. But still, it was useful as a functionality exploration. We could explore features of the software we got by running some of those test cases. When we run it once, it's fine because we get to know how the software works. It was also useful for us to check the regression test. So when we did the first deployment, we could check if we broke something or not. But the most valuable input from those manual tests was the input for automation. We could rewrite those manual tests in the automated way so no developers were required. We could check everything after each deploy, not every two weeks, every six months. So we could create a set of unit tests. But as you know, it's hard to introduce when you start projects without having unit tests in mind. We have a lot of tools, but it's not enough. It often requires huge refactoring, but without having those unit tests, you are afraid of the refactor. So it's an infinite loop. So the other approach is to focus more on more higher level, having acceptance tests. You could use Selenium and Behave or any other UI testing framework. But as you know, it's a fragile approach. So we decided to move only some of those manual test cases to UI test cases, automated UI tests. But the most valuable from the continuous delivery aspect was the smoke test tool we developed internally. It was a rather small tool, but it checked the basic, if the basic functionality works, if the front page loads, if the connection string is correct, if all the images are properly created because that means that security rights were set on the server properly. 
So even those basic tests can automate your build because it doesn't require someone from IT to check few things, few simple things. So I think it's useful to have it also in your project. But having tests is not enough. We had to create also internal test environment. It has to be similar to test production, as you may know. You probably care about having same OS version, about same IIS version. But do you care about having same language version of the operating system? Do you care about having similar set of patches installed on the production and testing server? Not necessary. Do you care about having the same load balancer in your environment? Do you care at all to have two servers in test environment if you have load balancer environment in the production? There are minor things you may miss, which can cause the errors later. So it's important to reproduce testing environment to be as similar as production is. It also needs to be isolated. So it means that you probably don't want to send newsletter to all your customers during acceptance tests. It means that also the SMS gateway should be internally somehow isolated. But some system cannot be isolated or created in your test environment. Like in our example, it was Google search appliance. It's a physical box, so we couldn't effort to buy another only for testing. So we created a mock of this system, and then we can use it as an external process which can respond with similar responses. You can also think about this mock system as an external delivery system from UPS when you don't want to order delivery each time you test something. But you want to be able to verify if the process works from beginning to the end. Okay, so we have a testing environment, but we asked ourselves a question. How long would it take to deploy the single line of code to the production if we have it already in our testing environment? How long would it take in your organization to deploy it? Would it be a six month, one month, two weeks, or maybe an hour? It's only one line of code, and one line, it's a small fix. Maybe it's only changing one letter in strings, so not very important fix. And the second question we asked ourselves, how long would it take to restore the production environment if data center would blow up? I mean, you have only database backup. Can you restore your production? Can you restore your iOS? Do you know what is the version? Can you restore configuration files, web configs? Do you know what is the configuration of load balancer? What is the firewall settings? Or you would better close up the business and go away and someone else could fix it. I think in many cases it's like that. But it's a situation we don't care about. In this case, it's IT hosting company problem, but not always. Sometimes we have to say something to our customer why we cannot restore it easily or how long it would take. So when we answered, we couldn't answer actually those questions, we set up the goal for the project to shorten the release cycle to make it as short as possible. So basically it means to follow continuous delivery principles when we want to build, test and release software as frequently as possible based on business needs. And you may say, it's not my case. I do scrum. Who of you does scrum or agile? I see few of you does, but does it mean that you show the demo to the customer at the end of its sprint? Probably yes. The customer accept the functionality you have created? Probably yes. 
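A hedged sketch of the kind of small smoke-test tool described above: hit a handful of URLs right after a deploy and fail loudly if anything basic is broken. The URLs and checks here are placeholders, not the project's real ones.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;

class SmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder URLs: the front page, a page that needs the database,
        // and a static image (which only loads if file permissions were set correctly).
        List<String> urls = List.of(
                "https://example.com/",
                "https://example.com/products",
                "https://example.com/images/logo.png");

        for (String address : urls) {
            HttpURLConnection connection = (HttpURLConnection) new URL(address).openConnection();
            connection.setConnectTimeout(5000);
            connection.setReadTimeout(5000);
            int status = connection.getResponseCode();
            if (status != 200) {
                System.err.println("SMOKE TEST FAILED: " + address + " returned " + status);
                System.exit(1);   // non-zero exit code so the deployment script can stop or roll back
            }
            System.out.println("OK: " + address);
        }
    }
}
```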
But do you deploy to the real production, not only to your internal test environment? Are you sure it will work at the hosting environment? Sometimes not. That's called last mile. When you have sprints, two weeks, four weeks sprints, and you develop the software where customer accept it. Then when you are done with your backlog, you are ready to go to the production finally after six months, after a year. Customer says, okay, we are done. We want to accept it. And he runs user acceptance test or customer acceptance test. And our packages shining at the beginning are not shining as much as it used to. Because customer has found some issues. It was not the way I thought it was developed. It was only a demo. I agreed. I didn't have time on that Friday. Please fix it now. So we go back and fix it. Then finally we are ready and we go to the hosting company. Okay. Here are our packages. Here are our binaries. Go deploy it. And they said, oh, wait, wait. You have developed with.NET for all. We support only 3.5. Okay. Maybe it's edge case. But what about opened port? When the hosting company says, we cannot open that port for you. It's our security rules. You can only have port 80 open. So again, we go back to developers. Please fix it. And this time from being ready to being deployed is called the last mile. This last mile is sometimes a big part of the project to deploy it finally when it was already done. And we did the scram. We did the sprints. We were fine as a developer. But the cycle was not or the process cycle was not closed and it was not deployed. Probably it looks like that in your company where customer gives you, customer, it could be internal, customer gives you set of requirements you have to create. And then developer or development team builds it, develops, test it, and give it to the IT, the binaries. And then IT's guy somehow maybe with our help develops it and run the services where customer can use it at the end. But the problem is that focus of last 10 years was only on the left part. And we have great tools. We have a visual studio. We have scram as a methodology. We have set of testing frameworks. But we do not focus on the last part and on the second part. The second part is the continuous delivery focus. And you may think continuous delivery, okay, another buzzword. But check this. Do you recognize it? It's agile manifesto signed in 2001. And at his first statement, there is written. Our highest priority is to satisfy customer through early and continuous delivery of valuable software. So we knew about this, but we didn't focus enough on that part of the project, of the process. So we were sure we want to follow that way, to follow also on the second part, not only on the first. Scram is not enough for us. So we had to convince customer, how could we do that? We shown him the business value of this approach. The early feedback from the user was the first advantage. Previously, they deployed every six months. Yes, it's half a year deployment. So when the user created issue in issue tracker and development team created it, it can be not valid anymore because World Cup was finished already. So they could also fail and fail fast and early if something is not what they intended to create. They could also release the risk of release. If they deploy every six months, the risk of release is huge because set of changes is huge. 
If we try to automate it and create it in a small step, so instead of adding the huge value at each deployment, we can do this more frequent and other less value which can be more easily tested. We can also track the real progress of the project because they see when the functionality is deployed to the project, not only when it's done in our planning board. And at the end, the money starts earning. When it's deployed to the production, someone can use it and it's investment which starts to return. And you may say it's something extraordinary, but the huge companies in the world use it. Flickr is probably the most common example of that approach. Have you seen the footer of that, of the, of their site? At the footer, they say Flickr was last deployed 42 minutes ago, including five changes by four people. In the last week, there were 85 deploys of 677 changes by 21 people. So that means they deploy all the time. Almost every check into the source code control is deployed. They have also a console which says, say, what is the current progress of deployment? And you cannot probably see it, but there is written that waiting for 151 hosts. So it's a huge deployment, even though it's only a site for hosting photos. They have a huge environment and they are able to deploy it. And they are not the only one who can do this. Firefox is shortening the release cycle, now it's version 12 or maybe 13. I'm not sure I haven't checked what's today's version, but they decided to make it not only the version race, but to shorten the feedback from the user. If the user struggled from memory consumption, they fix it. If the user prefer better tab organization, they fix it. The same with Google or Stack Overflow. They deploy multiple times a day. But still, the case in many organizations, especially large organizations, is that for the most of the time, software is in unusable state because it's actually developed and we cannot use it at all. How to figure out in which words we are more or how to figure out what are the bad parts of this delivery, continuous delivery. First is the waterfall. This means that you follow the waterfall methodology, not necessary in development, but also in deployment when you do the development as long as possible and delay deployment as long as possible. The second part is black art when only Bob can deploy because he knows that he have to first copy to FTP, then unzip, change the permission, turn off load balancer, then set the cache settings, then rename the file, turn on load balancer, revert permission, run IIS, research application, and so on and so on. If Bob is sick, we cannot deploy. The other bad indicator is saying when the boss says, okay, we're deploying tomorrow, everyone need to come to the office earlier because we need everyone just in case. No one can control it, the control is distributed, so everyone is needed. The other thing is the other bad indicator is when someone says, okay, we should deploy on Saturday. We cannot be offline during the week. Do you recognize those statements in your work? Probably some of them, at least one, I guess. So how to avoid that approach? First, we decided to design deployment pipeline. So what is deployment pipeline? Deployment pipeline is a process which describes how the deployment takes place. In our example, it looks like that at the beginning, at least. When the developer commits the version control, the automatic build is triggered. It's part of continuous integration. 
We also run a set of acceptance tests or a set of smoke tests on our internal environment. And everything is stored in the artifacts repository, where we store the build and the results of the acceptance tests. And if something fails, then the developer is notified and he has to fix it. It's still the automated phase. But later, there is a place where someone has to decide. In our case, it could be a tester or the project owner who says, I want to test this build manually. And then he tests and verifies whether the features we developed are what he needed. Then the results are stored in the artifacts repository. So the project owner can later decide, that's the build I want to have in production. Then he decides and it gets released to production. I skipped some staging environments and so on, but it's a basic view of how it should look, where we have as many automated parts as possible and we still have some manual decisions. So we have a person responsible for deployment. In our case, it was the project owner who says, we are ready on staging, and what is running on staging is what we need on production. It's the feature set we want. There are a few important continuous delivery practices, like build once, deploy many. You should not have a separate build for each environment: a build for staging, a build for production, a build for test and a build for acceptance tests. Because if you do performance testing, you probably want to test the same version of the assemblies, the same build as you release to production, not assemblies tuned for performance testing. You should also deploy as frequently as possible. So when something hurts, do it more often. In our case, the most painful part was configuration management. So we decided that we should focus more on that part. Why is configuration that important? Because it's probably much easier to break the software by changing one line of config than one line of code. When you change the code, you have a compiler, you have unit tests, you have a version control system where you can track who changed what and you can verify it. But configuration often exists only on the final environment. When you change a URL to a number, the system will probably break pretty fast if it's a login URL. So how did we deal with that? We figured out that there are a few types of configuration. Some of them are environment specific. It could be the setting for a mail server, where the staging environment should use one server and production the other one. And there are application-specific settings. It can be the footer of the application. We had 13 different applications which were pretty similar but a bit different depending on the customer who used them. And there was some configuration which was both application and environment specific, like a feature set that could be turned off or on. So we dealt with that by having configuration templates. What's really important is that these configuration templates are stored in the version control system. So configuration does not live only on the final environment. It lives also in our version control system. So we can verify who made the change, who was responsible for the change, and whether a change could have broken the system, or whether a bug could have been caused by the most recent configuration change. Then, based on those templates, we ran smart XML transformations. So we created environment-specific configuration. Then we applied another set of transformations to create environment- and application-specific configuration (a small sketch of this layering idea follows below).
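A minimal sketch of the layering just described: a template with defaults, an environment-specific transformation on top, and an application-specific one on top of that, all of which would live in version control. The keys and values are invented for illustration; the real project used XML transformations rather than Java code.

```java
import java.util.HashMap;
import java.util.Map;

class ConfigBuilder {
    // Later layers override earlier ones: template <- environment <- application.
    @SafeVarargs
    static Map<String, String> build(Map<String, String>... layers) {
        Map<String, String> result = new HashMap<>();
        for (Map<String, String> layer : layers) {
            result.putAll(layer);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> template = Map.of(
                "mail.server", "localhost",
                "feature.newsletter", "off");
        Map<String, String> production = Map.of("mail.server", "smtp.internal.example");
        Map<String, String> customerA = Map.of("feature.newsletter", "on");

        System.out.println(build(template, production, customerA));
    }
}
```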
And still, both transformations were stored in version control system. So we again can track what were the changes on the production and we can also easily looking at the source code check what is the current configuration of the production without logging there. And at the end, everything was packaged into deployment package. And sometimes you can have troubles. You have to prepare for them. The easiest approach when something goes wrong during deployment is to deploy the last good version. You can have rollback scripts but probably if you deploy often, you rollback much less, much less often. I hope it's the case. If you rollback more often than you deploy, then something goes wrong. So this deploy scripts are tested. You can know how they work. You are confident. You are confident with them and you should rather avoid fixed forward fire as I call it. It's when you go to the production because something during deployment phase, oh, it's only one line of code. It's only I have to add the security here and here and here. And you make so many changes. You cannot track. You don't control. And you fix another fix and fix and fix and fix and you're somewhere deep and you cannot rollback. Then, so better to deploy it because you know how long it would take. You are confident with those scripts. The other set of kind of troubles, it's having a hot fix. When you have stable version running on the production, but one important bug was found and you need to fix it without introducing other features which were developed on the project team. So again, it's better to follow regular pipeline, deploy everything from the scratch as you do every time than to deploy only one assembly because it's already a tested path when you deploy everything. And you know how long it would take. When you change one version of assembly, you can find some version incompatibility issues and you can again fix and fix and fix and fix and you don't know how long it would take. It may be five seconds. It may be five hours or five days. When you use deployment script, it will be always 15 minutes. And in case something goes wrong, you can always rollback. So when we were prepared with all the scenarios when something can go wrong, we were finally ready to deploy to the production, to deploy to the production using new approach. We get up really early. It was my first and hopefully the last time when I was at the office at five o'clock, AM. We did a backup of everything. We cleaned up servers from almost everything. We wanted to have a brand new, fresh environment. We did a deployment using our deployment script. We used on a staging, we used on a production. And it was quite easy. It took less than half an hour to deploy everything. And the rest of the day, we spent of regression testing. It was, the customer was also involved in those regression testing and we had so-called maintenance window. So when we could shut down the service and it was on Saturday, so it was a bad part. But customer was used to it, to having maintenance window from 6 AM to 6 PM, because previous company deployed in that way. We also made a huge improvement in release cycle. Previously, as I mentioned, they deploy every six months. Now, we did the first deployment quite smoothly, so they get a confidence in our approach and we could ask them for more. We asked them to deploy after working hours on Friday. So in case something goes wrong, the Brazil would be affected because we deployed in our time. But we wanted to try and everything again went smooth. 
So we asked them for more. Can we deploy during business hours? And they said, okay, let's deploy during business hours. And we deployed at 12 o'clock on a Friday. And now we can deploy whenever we want, because they feel everything will go smoothly. We have a load-balanced environment. So we take one server out of the load balancer, deploy to that, and then switch and take the other server out of the load balancer. And the customer doesn't notice the change, except for the version number in the footer showing that there is now a new version deployed. So it was a huge change, from six months to two weeks; when we are ready with any feature, when we are ready with a bug fix, we can deploy it instantly when the build completes. So you may ask, what's the difference? You could actually call it continuous deployment. But there is a difference between continuous deployment and continuous delivery. In continuous deployment, we deploy after each check-in, as Flickr does. But in continuous delivery, we are ready to deploy all the time, and the decision is based on customer needs. When the customer wants it now, we deploy it now. If he says, let's wait till Friday, or let's wait till Monday, we deploy it then. But we don't have to come in on Saturday because we are unsure of our process. So it was a huge, huge mindset change. We also had plans for improvements, plans for the future. We know we are not done yet. We could do more. One of the improvements we could introduce is so-called blue-green deployment. It's a term introduced by Martin Fowler, where we deploy to production in a way where we have multiple servers. And on each server, there is a specific version of the application running. And when we create a new version of the application and we want to deploy it, we create a separate application on those servers or we use a separate server, depending on our strategy. Then at some point in time, we decide it's ready and we switch the version. And we switch it only on a router. We can switch the port. We can switch the IP where the traffic is routed. It's really quick and easy. And it's a kind of hot stand-by switch. And we can easily roll back to the previous version by switching it back. Another approach we can use is having a canary release. Did you hear about this term? You heard about this. Do you know why it is called that? There is a history from 19th century England, when coal mining was being developed. There was a problem with carbon monoxide in the coal mines, because miners were not well equipped to figure out whether the concentration in the air was low enough for them to live and work. So they used a canary for that. If the canary stopped singing, it meant something was wrong, the canary was dying, so let's get out. And a similar approach can be used for software. When we have a set of servers, probably more than two, we can decide that part of those servers could be used as a testing environment, our beta testing environment. So we forward part of our traffic, not all the traffic as previously, to those new servers. And then, if everything goes fine, we can forward all the traffic. We can use this as performance testing. So we can check if the servers perform well based on part of the traffic. If we decide to move 10% of the servers to that zone and move 10% of the customers, the performance should probably be the same. If the load is much higher, then something is wrong and we should revert back to the old solution. We can use it also as A/B testing, where we can compare which approach is better, the new one or the old one, while still redirecting only part of the users (a small sketch of such a canary router follows below).
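The small sketch of a canary-style router promised above: send a fixed percentage of requests to the new version and the rest to the old one. In practice this decision lives in the router or load balancer; the Java version only shows the routing rule, and the backend names are placeholders.

```java
import java.util.Random;

class CanaryRouter {
    private final Random random = new Random();
    private final int canaryPercent;   // e.g. 10 means 10% of traffic goes to the new version

    CanaryRouter(int canaryPercent) {
        this.canaryPercent = canaryPercent;
    }

    String chooseBackend() {
        return random.nextInt(100) < canaryPercent
                ? "http://new-version.internal"    // the canary servers being evaluated
                : "http://old-version.internal";   // the servers everybody already trusts
    }
}
```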
And we can also use it as a brand-new-release approach. If your company creates a brand new site for a customer, or a brand new application, you don't have to deploy it to everyone at once. You can deploy it only for one department, only for customers from Norway, only for beta testers who have signed up for that. And check if it fulfills the needs, and whether you find any bugs. And if you do find one, then only part of your customers are affected. Another approach which we think might also be useful for us is the so-called dark launch. It's launching the software without notifying the users, without letting them know it is there. It's an approach used, for example, by Facebook. In the old times, they had a chat which didn't persist the history of messages. But the customers, the users, wanted that feature, so they asked for it. But the investment was huge, because at the scale at which Facebook works, they cannot just start inserting new rows into a database and verify it later. They wanted to check it, check it intensively. So how did they do this? They had a server which was running fine, which was handling the chat messages without storing the history, and it was sending the responses to each user from the other users. They had a router in between. And they created a brand new chat system which used a database, a distributed database, and which also got the requests from the users. So at one point, two systems were handling the same messages. But the second system was only reading these messages, not responding to the user. Still, they could compare what the outputs were, what the expected responses from both systems were. Someone from IT, someone from Facebook, could check if those systems work the same, if the performance is still fine. And you can say, okay, smart approach, but what do we do if we use an external system, like a delivery system? We don't want to deliver twice. So we can again create a kind of mock and compare what the requests to the delivery system are from the old system and from the new system. They should be the same. If they are different, something is wrong. And with this approach, Facebook introduced the new chat functionality. And one day, they changed only the UI, so in the UI you were able to check your history. And you probably haven't even noticed which day that was. There was no big fuss about it, but there was a huge process underneath. So how can you use those practices in your project? What should you focus on? You should version everything. I mean everything, everything. Not only the source code, but also the PSD files, the Flash sources, the configuration of your application, maybe the configuration of your IIS, the configuration of the router if you are responsible for handling that. You should do as much as possible to have a similar environment. If you're running on a virtual machine on VMware, try to run on VMware locally. If you have a load-balanced environment, use a load balancer locally as well. Then you will be able to fix the issues you find on production without just committing and hoping it's fixed. You can reproduce them. You should also automate the existing procedure, automate everything. If you think, okay, I need to change this permission, why not write a short script which changes those permissions? Then you can run it easily each time you deploy and not worry about it. So if you have in your project a deployment procedure which has step one, step three, step 20, step 50, try to automate at least a few of those steps. Try to start from the most risky part.
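As a concrete illustration of automating one risky manual step, the folder-permission change he mentions, a short script might look roughly like this. It's a sketch, not code from the project: the folder path and the application pool account are made up, and it assumes the .NET Framework file ACL APIs.

```csharp
// Sketch only, not from the project: grants an application pool account modify rights
// on a content folder, the kind of one-off manual step worth scripting so it runs
// identically on every deployment. Path and account name are hypothetical.
using System.IO;
using System.Security.AccessControl;

class GrantPermissions
{
    static void Main()
    {
        var folder = new DirectoryInfo(@"D:\Sites\Intranet\App_Data");   // hypothetical path
        DirectorySecurity security = folder.GetAccessControl();

        // Allow the (hypothetical) app pool identity to modify files in this folder and below.
        security.AddAccessRule(new FileSystemAccessRule(
            @"IIS AppPool\IntranetPool",
            FileSystemRights.Modify,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));

        folder.SetAccessControl(security);
    }
}
```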
If you are unsure whether you've changed the permissions on all the folders you need to, if you are unsure whether you take servers out of and put them back into the load balancer correctly, try to automate this. Then you gain confidence in it. And don't wait till the next big deployment. Try to improve it now, when you have a little time, between your sprints maybe. And you still may say, or your PM may say, I don't believe in it, I'd prefer you do this manually, I trust you. But still, if you sum up all the time required to do deployments, maybe your PM will see that it's better to invest in having automated deployment than deploying manually all the time, having the whole team on site and paying for overtime on Saturday. You can also say, oh, it's a nice idea, but not in my case, because I very often create a brand new release; I don't maintain the software as much as I create and deploy it. But you can still use the approach I showed, where you deploy only to the beta testers, or you deploy to the production environment but don't show it to the end user. So the customer does not see the demo at the end of his sprint on your local testing environment, but can check it on a real production environment, where only he can see it. And if you do this more often, then you gain confidence that it works fine, and you are not afraid to press deploy.bat because you don't know what's inside. You know it worked yesterday, so it will work today. In case it goes wrong, you can always roll back to the previous version. And as I said, start as soon as possible, tomorrow, on Monday. And that's everything from my side. I'm eager to answer your questions. Did you try to implement continuous delivery in an existing project? Did you succeed somehow? Yes? You did? Great. How big was the project? Or how long did it take to implement that process? But now you see the value. What was the hardest part? To convince the customer, or to convince the project manager, or was everyone convinced? So you were in a great... It was great. You were in a great situation. It's not a problem. You just want to deploy, yes? What tools are there like this? There are plenty of tools on the market, but the problem in our case was that the time was pretty short for this transition period, and we also had a legacy solution. So we decided not to change too much on the project side, so we did a lot of handcrafted work. I mean, we used TFS, we used PowerShell, but we could not use WebDeploy or Octopus because it would require too much change. We preferred to make a stable deployment first and improve it later using more tools like Puppet or Chef. Does that answer your question? So my advice would be not to change too much in the project itself, but to improve everything around it as much as possible. Then you convince the customer and the PM that it's the right approach, and they will let you change more later. If you want to start from scratch and you can use whatever tools you want? I think in the .NET world, I would use WebDeploy and maybe Octopus and maybe Puppet. They're well-known solutions. Okay. It's a tricky question, but there is an approach to decouple the deployment of database changes from the changes in source code. Basically, it means that your application should work with both the new and the old schema of the database. It's the best approach, but not always possible. So we have upgrade and downgrade scripts, and we upgrade the database during deployment. It's also the easiest and requires the fewest possible changes in the application. Okay. If there are no more questions, then thank you again.
Continuous Delivery is the process of having a shippable product after each check-in to the source control repository. Continuous Delivery is usually implemented as a natural improvement of a Continuous Integration process. This presentation highlights challenges and presents hints on how to start from a raw environment and incrementally build a successful deployment pipeline based on Team Foundation Server (TFS), providing substantial added-value for business. This presentation will describe the process of establishing Continuous Delivery in a project for FIFA. We describe the starting point, what we achieve in the first phases and what are the plans for further improvements in order to deliver high quality software in schedules defined by business needs – not by process and technology constraints. Making Waves took over as a Services Provider for the development and maintenance of FIFA's Intranet and Extranet platform in 2011. The main challenge was to avoid long release cycles, improve quality and provide a reliable hotfix strategy for urgent issues raised in production. The first phase of the project was focused on taking over the source code, development, test and production environments. This was a challenging task, mostly because of a lack of automation in build and deployment processes. This part of the presentation will cover possible approaches for how to incrementally create a flexible development environment, supported by a continuous integration process, in a legacy project inherited from an external part. The goal of the second project phase was to implement a continuous delivery process in the project. We will present the main arguments for investing in tools and processes which enable more frequent and automated releases, and how that brings significant business value. We will also cover how we implemented a set of practices and principles aimed at building, testing and releasing software faster and more frequently, including (but not limited to): deployment automation, release repository, production configuration tracking and version promotion between environments. The presentation will briefly cover tools which were used, including Team Foundation Server (TFS), but most of the content is technology agnostic and is relevant for both developers and more-business oriented people.
10.5446/51028 (DOI)
Hi. So, am I live? Fantastic. Hi. Let's do a demo. All right. So, I'm talking about reactive extensions. Before I tell you what reactive extensions is or how to use it, I should tell you why you should use it or why you should be interested. So, let's do that then. So, I can't see anything from up here. All right. Here's your boss. He's too big. Oh, I can't. There we go. All right. There's your boss. All right. Your boss asks you. So, he wants you to, he has a list of websites, right? A list of URLs. And he wants you to download them all, download the text of them. Maybe he wants to like screen scrape them or whatever. For whatever reason, he wants a text. So, let's write a program to do it. Oh, this is going to be tricky. Hold on a second. Let's switch this to... If I don't switch this to mirrored, then this is going to be a very awkward presentation. All right. There we go. Now, I can see. Okay. Can anyone read that? Everyone can read it? Cool. So, if you are an enterprising.net developer, you might write some code like this. So, we create an array and that's a list of websites we want to download, right? And then we're going to write a select statement, right? Websites.select. And then I have this cooked function called fetch web page sync, which you give it a URL and it downloads the web page, right? Easiest pie. So, actually, before I get started, how many people know Link, right? Have you used Link? How many people really like Link? Like, you write a 4-H statement and you feel like a little gross. So, this talk will be for you. If you really like 4-Hs, then this will not be as motivating. So, dot select. We use dot 4-H and then we're going to call this function that's a part of LinkPad called dump, which I'll just print it in this bottom area that we'll see in a second. I'm going to turn off growl. Thanks, growl. Cool story. Cool. So, I've only taken the first ten characters and so we can see that we have the first ten characters of most websites as a doc type tag, right? So, that's really cool but you realize that you're fetching these websites one at a time. That's really slow, right? So, we want to do a little better. So, we're going to use our Rx to do that. So we're going to do, we're going to take our same code but instead of fetch web page sync which returned a string, we have this new function called fetch web page. It still takes a string which is the URL but it returns this new type which we'll explain called iobservable of string, right? So, iobservable of string is a future result of string. So, we're going to take this array, we're going to turn it into observable, we're going to select many into a web page and if none of this makes sense, that's totally okay because I'm just showing you why this is cool. So, I'll explain all this later. We're going to call fetch web page on all of them and then we're going to subscribe to the result which is kind of like 4eaching and then our subscription, we're going to show the same thing. So, we can see we get the same result but it ran a little faster and so this entire time as I'm talking, think about how you would implement this yourself using threads or using maybe task of t, using whatever technology you've got, right? And so you say, Paul, that's not impressive at all. I could do that with plink today or task of t. It would be super easy. Why is that cool? So, your boss, man, he's a picky guy. So, it turns out that some of these websites are not very good and they just hang forever. 
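For reference, the two versions he has shown so far look roughly like this. It's a sketch rather than the speaker's exact LINQPad code: FetchWebPageSync and FetchWebPage are his own helpers, so the bodies below are stand-ins (he mentions later that the real one is just an HttpWebRequest wrapper).

```csharp
// Sketch of the sync vs. Rx download demo. The helpers are stand-ins for the
// speaker's own FetchWebPageSync / FetchWebPage functions.
using System;
using System.Linq;
using System.Net;
using System.Reactive.Linq;

class DownloadDemo
{
    static string FetchWebPageSync(string url)
    {
        using (var client = new WebClient())
            return client.DownloadString(url);
    }

    // An Rx "asynchronous method": a future result of string.
    static IObservable<string> FetchWebPage(string url)
    {
        return Observable.Start(() => FetchWebPageSync(url));
    }

    static void Main()
    {
        var websites = new[] { "http://www.google.com", "http://www.yahoo.com", "http://www.bing.com" };

        // 1. Synchronous: pages are fetched one at a time.
        foreach (var html in websites.Select(FetchWebPageSync))
            Console.WriteLine(html.Substring(0, 10));

        // 2. Rx: the downloads overlap, and results arrive as they complete.
        websites.ToObservable()
            .SelectMany(FetchWebPage)
            .Subscribe(html => Console.WriteLine(html.Substring(0, 10)));

        Console.ReadLine();   // keep the process alive for the asynchronous results
    }
}
```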
So, it's taken forever to download these websites. So, what we want to do is we want to look at the website and we're going to wait a certain amount of time and if it doesn't show up, we're just going to give up on it, right? So, what did I change between these two examples? So, I added instead of select many to fetch web page, I'm going to say select many to the fetch web page dot timeout and I'm going to give that timeout a certain number, a certain amount of milliseconds. Right now, it's a really small number so they're always failing, right? Ten milliseconds is not long enough to fetch any web page usually unless you've got really awesome internet. In which case, you can give that internet to me because that'd be great. So, let's make this a thousand. Yeah, we did a little better. So, now we're fetching results, right? And so what we're saying is timeout throws an exception if we reach the timeout. So, we're going to catch that exception and replace it with a different value. So, instead of the website text, we're going to just give it a URL didn't work, right? And so, you can see if I disable my internet. Didn't work, didn't work, didn't work, right? Turn it back on. So, now your boss, man, that guy, he just doesn't quit. He says, we also want to, if we can't fetch the website a first time, we're going to try it again a few times, right? We don't want to just give up the first time, giving up is for quitters. We're going to try it a few more times. So, how do we change this? From the previous example, we changed this in two ways, right? Instead of just calling fetch web page, we're going to wrap this in a function called defer. And defer makes a function lazy. It means that instead of doing the action immediately as soon as we ask it, it'll just say, I'm going to do it later. But when you actually want the value, I'll look it up for you, right? So, we're going to make this function lazy. And now, after the timeout, we're going to clear the truth. Okay. And we have this method retry. And so, we have the number of retries we want to do. So, let's run it again. And so, in this, in the select many, there's a lot of code here, but it's basically just trying to show you some text before we call fetch web page, right? Issuing request for URL, and then we call fetch web page. Same deal as before. I'm just making it a little bit more obvious when we make the request. So, we run it. And it appears to work, right? So, let's turn off the Internet. So, we see for each of these web pages, we tried them, we tried them again, and then we gave up, right? We didn't necessarily try them in the same order, right? We just guaranteed that we tried every website twice and then gave up. So, it's all asynchronous, right? There's no, it's not walking through an order trying it twice and then giving up. We're actually, as results come in, we're going to do the next thing, right? So, your new boss, man, this guy's even worse. He says, well, you're running your program and you are using up all of our crappy office Internet downloading these websites, right? So, what he wants you to do is he only wants you to have maybe five websites being downloaded at the time, right? Instead of trying to download all of them at once. It's very reasonable. So, we're going to take our code. Remember what this looks like? We're going to add rate limiting. So, I added a few more URLs up here and made a typo. So, what did I change? 
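What he has built up to at this point, a timeout plus a fallback value plus Defer and Retry, looks roughly like this. It's a sketch that reuses the hypothetical FetchWebPage helper and websites array from the earlier sketch, and would slot in where the SelectMany call was.

```csharp
// Sketch of the timeout + retry + fallback pipeline described above. Note that
// FetchWebPage itself knows nothing about timeouts or retries; those concerns
// are composed on from the outside.
using System;
using System.Reactive.Linq;

websites.ToObservable()
    .SelectMany(url =>
        Observable.Defer(() => FetchWebPage(url))      // lazy: re-runs the request on every retry
            .Timeout(TimeSpan.FromMilliseconds(1000))  // give up on a single attempt after 1 second
            .Retry(3)                                  // up to 3 attempts before failing
            .Catch(Observable.Return(url + " didn't work")))  // replace the failure with a value
    .Subscribe(html => Console.WriteLine(html.Substring(0, 10)));
```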
So, now, before, conceptually we had a stream of inputs and then we SelectMany it into a stream of responses, right? So, here, we're going to split that up a little bit. We're going to have a stream of inputs and then Select it into a stream of requests and then flatten that into a stream of responses. So, we're going to split apart request and response, whereas SelectMany kind of did both those steps at the same time. So, we're going to take our stream of inputs, websites.ToObservable(). We're going to Select it. Instead of SelectMany, it's Select, right? This is the only thing we changed from the previous example. So, we still have the timeout, retries, catch. And now, we're going to call Merge, but we're going to say two. So, that means we only want two in flight at a time. So, two requests at a time. As soon as the next one comes in, we're going to start a new one. So, we'll see that work. I believe this timeout is set really short. Oh, no, we shut the internet off. There we go. Okay. So, we see that these websites all came back except for Yahoo, which apparently cannot give you a result in less than 750 milliseconds. If we set this to 7500, we'll get all of them. So, now, let's turn off the internet. Okay. I should set this timeout shorter. Let's try this again. So, we can see it issuing the requests and the retries. It didn't work. We're trying it again. It didn't work. We're seeing them paired, two at a time, right? So, Google Yahoo, Yahoo Google, Bing DuckDuckGo, paired, paired, right? Because we're only issuing two at a time. And because we're issuing them simultaneously and they're all coming back on a timer, they're coming back at the same time, right? So, contemplate how you would do this with the TPL or with threads or locks or timers or yada, yada. Now, here's the other cool thing that you might have noticed. It's that fetch web page knows nothing about retries. It knows nothing about timeouts and it knows nothing about rate limiting. We took this function that was really simple. All it knew how to do was fetch a web page. And if you look at the implementation, it's just HttpWebRequest. It's very dull. We took this function that was kind of dumb and then added all these features onto it. Whereas normally, if we were doing, you know, traditional asynchronous programming, we'd have to build that into the function. And then we'd have to build that into every function separately. There was no way to separate out the concept of timeout and retry. And so, Rx lets me write these kinds of functions and then have them be composable, right? So, I can compose in the concept of timeout without having to write it into fetch web page. It's very cool. So, this talk is about the Reactive Extensions for .NET, which is a project out of the cloud programmability group in Microsoft. So, it's actually a part of SQL Server. So, it's just a library. And we'll talk about how it works. My name is Paul Betts. I work at GitHub. My Twitter handle is here because back in high school, my AOL name was taken. So, I put Xs around it. Worst decision of my life. So, we can no longer write synchronous software. I think Abraham Lincoln said that. So, back in the day, we had one CPU and no network and we could just write our programs in order and everything was easy. Those days are over. CPUs aren't getting any faster. We have to run things on multiple threads. Every interesting program is talking over a network. Networks are asynchronous.
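To make that composability point concrete, the rate-limited pipeline described above fits in a few lines. Again a sketch built on the same hypothetical FetchWebPage helper, not the speaker's exact code.

```csharp
// Sketch of the rate-limited version: Select builds a stream of *requests*
// (IObservable<IObservable<string>>), and Merge(2) flattens it while keeping
// at most two requests in flight at a time.
using System;
using System.Reactive.Linq;

websites.ToObservable()
    .Select(url =>
        Observable.Defer(() => FetchWebPage(url))
            .Timeout(TimeSpan.FromMilliseconds(750))
            .Retry(2)
            .Catch(Observable.Return(url + " didn't work")))
    .Merge(2)                                   // only two concurrent downloads
    .Subscribe(html => Console.WriteLine(html.Substring(0, 10)));
```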
We have to write asynchronous software now. So, before I can explain to you what Rx is, let's talk about LINQ. So, the core of LINQ is what a Clojure person or a Haskell person would call a sequence, right? So, a sequence in .NET is IEnumerable, right? IEnumerable of T. And so, a sequence is some items in a particular order, right? So, if I have an IEnumerable, it has one method, GetEnumerator, and then I can MoveNext, MoveNext, MoveNext, and then eventually, it will tell me that I have no more items, right? Or it could maybe not ever tell me that I have no more items, right? I could write a yield return statement that calculates the digits of pi, right? So, it could be an infinite sequence, right? Most sequences in .NET practically are limited, right? But you can always take an unlimited sequence and then say, you know, .Take(5), and now it's a limited sequence, right? So, the cool thing about sequences is that we can create pipelines, right? And so, we can take a sequence and then add some operations to it, right? Select and Where and Aggregate, and then take a list that's kind of boring, an input list, and transform it into like an output list, a list that we're interested in, right? So, here, we take 1, 2, 3, 4, 5, .Select, .Where, and now we have a new list and then we can foreach, you know, foreach through it and get the result, right? So, foreach, I'm using this as a cheating operator. Foreach is not really an operator, right? Because it evaluates something, right? It returns a real value. Like, if I had Enumerable.Range from 0 to 11 billion and I run .Select on it, that still runs instantly, right? Because it didn't do any work. It returned a guy who would do the work for me, right? I'm describing what should be done instead of doing the work right now, right? So, that's really important to understand. That's called, you know, what they refer to as deferred computation, right, in LINQ, right? Just like when I do a database call and I do, you know, some query .Select in LINQ to SQL, I don't make the database call until I do the foreach, right? It's very important. As I said, a sequence is just some stuff in a particular order, right? Not necessarily sorted order, just an order, right? So, LINQ is really cool because, like I said, it lets us describe what we're going to do with the data without actually doing it. We've separated these two actions, right? Right? Does that make sense, everyone? Who doesn't it make sense for? Awesome. So, what I've described to you is a monad. And now you all understand monads. A monad is like a box that you can chain together and say what you want to do with data before you actually do something with it. And then eventually there's a function in .NET, it's foreach, that lets you actually see the result, right, to unpack the box. I thought this was an Rx talk. What's the deal? Let's talk about events. So, events, the problem with events is that they're not composable. So, what do I mean by that? I can't take on mouse up and on mouse down and then smush them together and create on mouse double click, right? I can't do it easily, right? What I have to do is hook on mouse up, hook on mouse down, create some boolean that has like is-mouse-down, right? And then create like a timer and do some nonsense to put it together, right? And once I had that event, I couldn't combine it together really easily to say on mouse down in the top left corner, right? Events don't let you combine them together very easily, which is a bummer. They're really one-offs.
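A quick illustration of that deferred-computation point (a sketch):

```csharp
// Sketch: building the pipeline does no work; enumerating it does.
using System;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var pipeline = Enumerable.Range(0, 1000000000)   // a billion items, described, not created
            .Select(x => x * 2)
            .Where(x => x % 3 == 0)
            .Take(5);                                    // returns instantly; nothing has run yet

        foreach (var x in pipeline)                      // only now is any work actually done
            Console.WriteLine(x);
    }
}
```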
And so, as your program, as any desktop developer, like, you know, mobile phone developer knows, as your programs get bigger and bigger, these events start to fight with each other, right? Because they have like some state variable that's in your class, and you write this function called update-the-UI that, like, fixes up everything every time any event comes in, and then you realize that's super slow because you're changing everything all at once. They don't combine together. So, what is an event? So, I have an event called on key up and I'm going to wire it to the function display slide. An event is some stuff in a particular order. This is where your minds are all, quote unquote, blowing up. And so, you realize that an event and a list are the same thing except for one has already happened and one will happen, right? Does everyone buy that? Does anyone not buy that? Okay. So, that means, here's the important part, is that if I can do Select and Where and, you know, SelectMany and Aggregate to a list, why can't I do that to an event? And so, Reactive Extensions is taking all the things you know about LINQ and all the operators that you use with LINQ and applying them to events so that you can do the same thing. So, the problem is we need a new type. So, IEnumerable was for lists, so we need a type for events. So, this is called IObservable. So, an IObservable is like a timeline. You can think of it kind of like a stream of events, right? So every key is an item in the stream, or this timeline, right? And so, observables can do three things in their timeline, right? They can OnNext, they can give you a new item, right? So, 'H', 'I', right? They can complete. They can say there's no more stuff, we're done. Or they can complete and say something bad happened. So, complete and there's an exception, right? So, when we saw that timeout operator in the initial example, the timeout operator either returns the sequence that was the input sequence or it OnErrors with the timeout exception, right? So, just like IEnumerable only implements one function, called GetEnumerator, IObservable only implements one function, called Subscribe. And so, Subscribe actually takes this type IObserver of T. You never actually use that type; you use the extension method where you can provide the three functions for whenever anything happens to that timeline. So, you can say: when I have a new item, WriteLine the item, right? When the timeline completes, WriteLine done. When the timeline completes and something bad happened, WriteLine that, right? And so, the thing that Subscribe returns is an IDisposable. And what that disposable lets you do is disconnect that subscription early. So, you can say, all right, I've decided that I'm not interested in listening to these events, right? So, one thing that's really tricky is that if you're used to .NET, you're used to having to dispose everything, right? That's like a rule. But here, you only dispose the result of Subscribe if you're unsubscribing early, before it completes, right? You actually don't have to dispose it if it would complete on its own. And because it's not an unmanaged object, it's actually just a regular .NET object, the garbage collector will clean it up. It's not like an unmanaged IDisposable where if you leak it, then you're wasting some handle or something, right? So, where can I get this cool observable type? So, the most simple one is Observable.Empty, right? Which just calls OnCompleted.
That's all it does. Another simple one is Observable.Return, which just OnNexts one item, like a constant or a variable, and then OnCompletes, right? And so, there's another one called Observable.Throw, which instantly fails with an OnError. So, we can take lists and arrays, IEnumerables, and turn them into observables, right? If you subscribe to it, then it just runs down each item in the list as fast as it can. We can take any event, any .NET event, and turn it into an IObservable, like, you know, on mouse up, on mouse down, stuff like that. We can take asynchronous methods. So, a network request could be an IObservable with one item. There's a special kind of IObservable called a Subject, which is badly named, but we'll just deal with it. So, a Subject is an IObservable you can just push around by hand. You can just call, you know, OnNext, OnNext, OnCompleted, right? In your own code, just manually. We'll show subjects later because they're really useful. And the last thing, that's the most interesting if you're using something like MVVM or XAML, is that property change notifications are events. Whenever a property changes, you can have an IObservable representing that change, right? So, this idea, this last one, is really powerful. It really takes Rx and puts it out of the domain of just user interface programming or, like, you know, drawing stuff or drag and drop, and puts it into every program. Every program could be interested in watching changes of other objects, right? As an object changes, you can say, whenever the name property of this object changes, then tell me about it, right? So IObservable is just like IEnumerable. And in fact, it's so much like IEnumerable that you can prove that they're identical, that they're dual sides of the same kind of pattern, right? And so there's a lot of math; you can watch Erik Meijer prove it, and he's smarter than all of us. But the moral of the story is that every rule you know about LINQ, all the things you've learned about LINQ, apply to Rx too. So just like when I have an IEnumerable and then I foreach it, it actually does work, it actually does something, right? Subscribe is the foreach of Rx. Subscribe actually does something. If I have an observable and I do dot Select dot Select dot Where, nothing happens until I Subscribe. Just like in LINQ, nothing happens until I foreach. So this is a concept that kind of doesn't exist as much in LINQ but does exist in Rx. So let me first talk about hot observables. So there are two kinds of observables in Rx, and they're hot and cold. And in fact, there are two kinds of enumerables in LINQ, hot and cold, but you never see hot enumerables. So a hot observable is like an event, right? Like mouse move. And so if you have five subscribers to mouse move, they're all going to get the same information, right? Because there's only one source of information, the mouse move. If nobody subscribed to mouse move, it's still going to happen anyway, right? It's just that nobody was listening, right? But imagine how Observable.Return would work if, as soon as you created the object, it put out a number and completed, right? It'd be useless because you could never subscribe in time, right? You would create the object, it would instantly complete and then when you subscribe, nothing would happen. You missed the boat, right? And so you'd always miss the boat. And so a cold observable is one where, when you subscribe, something happens, right?
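Pulling those pieces together, here is a small sketch of the creation operators and the Subscribe extension method with its three handlers. It's illustrative only; the commented-out event line assumes some WinForms or WPF control just for the example.

```csharp
// Sketch: a few ways to get an IObservable, and the Subscribe overload that takes
// the three handlers (OnNext / OnError / OnCompleted).
using System;
using System.Reactive.Linq;

class CreatingObservables
{
    static void Main()
    {
        IObservable<int> empty  = Observable.Empty<int>();              // just OnCompleted
        IObservable<int> one    = Observable.Return(42);                // OnNext(42), then OnCompleted
        IObservable<int> broken = Observable.Throw<int>(new Exception("boom"));  // OnError immediately
        IObservable<int> items  = new[] { 1, 2, 3 }.ToObservable();     // plays the array to each subscriber

        // Any .NET event can be wrapped too, e.g. with the reflection-based overload
        // (assumes some control named someButton):
        // var clicks = Observable.FromEventPattern<EventArgs>(someButton, "Click");

        IDisposable subscription = items.Subscribe(
            x  => Console.WriteLine("item: " + x),     // OnNext
            ex => Console.WriteLine("error: " + ex),   // OnError
            () => Console.WriteLine("done"));          // OnCompleted

        // Dispose only to unsubscribe early; a sequence that completes on its own
        // needs no explicit cleanup.
        subscription.Dispose();
    }
}
```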
So observable dot return does nothing until you subscribe to it and then it produces the value and completes, right? Just like if I for each over a list more than once, I'll start at the beginning again, right? So every innumerable in link is cold, right? If it was a hot innumerable in link, if I for each over at once, I'd go to the end and I for each again, I'd still be at the end, right? That would be very useful. So as I mentioned, there's this type called subject of T. And so subject of T is both observable and I observer. And so what that means essentially, it's a observable you can push around by hand. And so they're really useful as we'll see later for taking code that wasn't RX friendly or like wasn't written for our reactive extensions and kind of adapting it, like bridging it. So let's see a demo. So I created a new subject of int and I'm going to subscribe to it. And then when anything happens, I'm just going to, you know, write to call dump on it, right? Let's try this first. So on the same object, I'm going to call on next on it, right? I'm just calling it by hand myself. So if I run it, I see 234, right? And if I call on error, we fail. So here's an important thing about I observable is that once it completes, it's done, right? So if I add f.onNext 5, I'm not going to see that, right? Because we had already finished. So let's go back. Does everyone get subjects? They're really straightforward. Cool. So one thing that makes RX more complicated is that RX concerns itself with time, right? So link doesn't care about time, how long it is between two items. It only cares about order, right? It's A, B, then C. But it doesn't even know if you got A and B really quickly and then C took forever, right? You just called move next and it's waited until you got another value, right? So RX has a lot of operators that deal with time, right? So I can say give me all the items that happen within this five second window, right? And I can continue to group items by five second windows. And that operation is called buffer, right? Buffer with time. Or I can join something based on a different observables window. So I can say like tell me all the items when these two observables were both in the same area. Like when kind of like, you know, if you had an observable for each person in this room walking in and out of it, tell me when two people are in the room at the same time, right? So these are more complicated than just order. And so RX is necessarily more complicated because asynchronous is complicated, right? And so you can't hide that complication in some sense. So I said that I observable is an event, but practically you use it as one of two things. You use it as kind of an event, right? A stream of items. Or there's another thing you can use it for, which is a future result. So task of t is a future result. It's an object that is, we don't have the value right now, but eventually it'll give you the value, right? And so an IObservable that's being used this way doesn't have a special type, it's just an IObservable with one item, right? And so a lot of functions, if you're trying to return a network request or, you know, run some code in the background, you're going to use an IObservable with one item and then complete. Or if something bad happened, you're just going to on error, right? So just like TPL asynchronous methods are functions that return task of t or task, Rx asynchronous methods return IObservable of t. So we saw fetch web page, right? 
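For reference, the LINQPad Subject demo described a little earlier looks roughly like this (a reconstruction from the description, not the exact code):

```csharp
// Reconstruction (approximate) of the Subject demo: a Subject<int> is an IObservable
// you push by hand, and once it terminates (OnError here), later OnNext calls are ignored.
using System;
using System.Reactive.Subjects;

class SubjectDemo
{
    static void Main()
    {
        var s = new Subject<int>();

        s.Subscribe(
            x  => Console.WriteLine(x),
            ex => Console.WriteLine("failed: " + ex.Message),
            () => Console.WriteLine("done"));

        s.OnNext(2);
        s.OnNext(3);
        s.OnNext(4);                        // prints 2, 3, 4
        s.OnError(new Exception("oops"));   // prints "failed: oops"; the sequence is now finished
        s.OnNext(5);                        // never shows up; the subject has already terminated
    }
}
```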
Fetch web page took a URL and returned an IObservable of string. That string is the, you know, contents of the page, right? It didn't return the page, it returned something that I could subscribe to that would give me the page eventually, right? So there are different ways to implement asynchronous functions in Rx. And this is really cool because unlike task of t, they don't have to be asynchronous, right? Return hello world is, it looks asynchronous, but all it does is immediately returns a string that runs in the same thread, right? So you can take functions that appear to be asynchronous and make them synchronous. And why would you want to do that? If you were in a test runner, right? You can make it, you can write your entire application asynchronously so that you're, when you actually run it, it will be asynchronous and run on other threads. But when you run in a test runner, it runs on the same thread in order and it runs the same way every single time. That's very cool. There's something you can't do in the TPL easily. You can do it, it's kind of a total nightmare, but anyway. So all of the code that I've shown you is asynchronous but not concurrent. We're all running this code so far, well, except for the code at the beginning, on the same thread, right? The subject thing never had a separate thread. It only had one thread. So the second thing is an easy way that you use an Rx to make something actually run on another thread. So this function called start. And start is kind of like task.start, right? It just, you give it a function and return a value and then it turns it into an I observable. And the cool thing is that inside this function, inside the return hello world, if I threw, like through an exception, it would turn it into an I observable that throws on error, right? So it's wrapping this function to run on another thread. And so I haven't said anything about how you decide what's on other threads and what's not on other threads. And the way you do that is through this interface called I scheduler, right? So there are a bunch of default schedulers. The one, the most common one is immediate, right? Just run it immediately, whatever you say, just run it as soon as you get it, right? On the same thread. Or you have one, like, task pool scheduler, which runs items on the TPL task pool, right? So you can choose what things run where, right? Rx doesn't care what thread it's running on because it's functional, right? Sometimes your code does care what thread it's running on. Like if you're running a WPF app, you care that it's running on the UI thread sometimes, right? So you can say, I want this code to run on the UI thread. And the cool thing is that Rx will take care of moving these results to another thread. It'll do all the locking. It'll make sure that things happen in order. It does all that stuff for you. You never, you can write a completely asynchronous program in Rx and never use a lock and have it be completely safe. So Rx will not change context unless you ask it to, right? In the TPL, it always demands that you run thing on background threads. That's what it always does. By default. By default, Rx will keep your thing, your code on the same thread unless you explicitly ask it to move it somewhere else. So you might ask yourself, how does this relate to async and await, right? So Microsoft has announced this async and await and it's really cool and it's all based around task of T is Rx dead. The answer is no. 
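A sketch of the two patterns just described: an Rx "asynchronous" method that actually runs synchronously (handy under a test runner), and Observable.Start for real background work, with ObserveOn choosing where results are delivered. The method names are made up for illustration.

```csharp
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;

class RxAsyncSketch
{
    // Looks asynchronous (returns IObservable<string>) but runs synchronously on the
    // caller's thread, which is exactly what you want inside a test runner.
    public static IObservable<string> GetAnswer()
    {
        return Observable.Return("hello world");
    }

    // Actually runs the work on a background scheduler; if the lambda throws,
    // the exception arrives as OnError instead of escaping.
    public static IObservable<int> ComputeInBackground()
    {
        return Observable.Start(() =>
        {
            // ... some expensive work ...
            return 42;
        });
    }

    static void Main()
    {
        // ObserveOn picks where results are delivered; a WPF app would pass a
        // dispatcher-based scheduler here so the handler runs on the UI thread.
        ComputeInBackground()
            .ObserveOn(Scheduler.Immediate)
            .Subscribe(x => Console.WriteLine(x));

        Console.ReadLine();   // keep the process alive long enough to see the result
    }
}
```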
So the cool thing is you can use the TPL and Rx together, right? Because you can take an IObservable and turn it into a Task and you can take a Task and turn it into an IObservable really easily, like .ToTask. That's all. And you can await an IObservable just like you can await a Task. And what that means is you're going to get the last item in the list when you await an IObservable. And so if you're following the pattern of Rx asynchronous functions only having one item, the last item is the first item, is the only item. So you'll just get the item. So you can combine Task and IObservable together really easily. And in fact, I have a library called Linq to Await that actually takes LINQ and uses Rx to make an asynchronous LINQ. It's kind of weird. You can, like, SelectAsync and then you provide an awaitable function. So it's a simplification of Rx. So if you think Rx is cool, but this is kind of melting your brain a little bit, Linq to Await might be right up your alley. And it's okay if this is melting your brain, because it melted my brain for years. But when I finally got it, I thought it was really cool. It really changed how I write pretty much every program I write. Any program that runs on more than one thread, right? Which is every program these days. So here's how you can take code that you've already written that isn't Rx friendly and convert it to Rx, right? So we have a function that's like, this is an asynchronous method that's written using callbacks, right? So it might be the way that you would write an asynchronous method now, right? So it takes a parameter and then it also takes a callback. And the callback gives you the result. Does this sample make sense to people? This is how, like, maybe you'd write an asynchronous function today. It's kind of the straightforward way. It's not a very good way because you can't marshal exceptions, right? You can't call the callback and say, like, something bad happened, right? But a lot of people do. So in this asynchronous method, we're going to create a new thread, sleep for 2,000 milliseconds, create a string, and then we're going to call our callback saying we're done, right? And we'll kick off the thread. So here, we're going to take an async method and then make it Rx friendly. So you do this all the time when you're writing Rx code. You have, you know, I did it the other day. I wrapped BITS, the Background Intelligent Transfer Service. So it lets you download stuff in the background using Windows, right? It wasn't Rx. It will never be Rx. But I took a method and wrapped it so that it would return IObservable because it was easier for me to work with after that, right? So the first thing I do is create a result. And so let's look at the signature. So I still have the parameter, but instead of having a callback, I'm going to return IObservable of string. And that IObservable of string is going to return one item, right, the result. So I'm going to create what's called an AsyncSubject. It's kind of a long story why you should use AsyncSubject. It prevents race conditions. But if you're wrapping a function that only returns one result, use AsyncSubject. And you can look up online why. Or if I have more time, I'll just explain why later. But you can think of this as a subject. It works just like a subject. That's the moral of the story. So I'm going to call the original function in a try block in case something goes wrong. I'm going to pass the parameter in, and then here's my callback. So I've got the result.
And so in my callback, I'm going to call OnNext with the result, call OnCompleted, right? So I'm just calling the AsyncSubject directly. Because it's a subject, I can just say OnNext, OnNext, OnNext. If anything bad happens, I'm going to call OnError. Then I'm going to return ret. Now, if you look at this code, it looks like it's going to execute sequentially. Like you're going to create ret, you're going to call this function, and it's going to return the value. It's actually going to create this, call this function, and then return immediately, and then later this code's going to run, right? Because it's asynchronous. So this chunk of code runs in a completely different place than the rest of the code. It's kind of counterintuitive. So I'm going to call this in my main function. And because I wrapped it in IObservable, now I get all those operators for free, right? I get Where, and I get Select, and I get Timeout, and I get Buffer. I get all those things. So I run it. Rx is great. So we waited two seconds, we returned it. But if I change this filter to say z, nothing happens, right? Because I filtered it. Anyone have any questions with that sample? Cool. So my day job, I work on a program called GitHub for Windows, which we just released a week ago. It's very exciting. I think it's really cool. It uses Rx everywhere. Everything in this program is IObservable, not everything, but everything interesting that runs on another thread or is an event or anything related to it is an IObservable of something, right? Every API call that we make to github.com returns an IObservable of user or repo or something, right? Every task we run in the background, right? Everything where we, you know, want to calculate something or, you know, get a directory listing of files, everything we do in the background is an IObservable. Running git.exe, any process we spawn in the background is an IObservable. And then when it comes back, it OnNexts and OnCompletes with the standard output, right? Every command, so a command like ICommand in MVVM, like a, you know, copy, paste, you know, okay, cancel, every command and every UI change, whenever you move anything in a list, right, or add an item to a list or anything you do in the UI, is an IObservable. It uses Rx everywhere. If you'll notice, it's really fast, right? You never see the black screen of WPF not pumping the UI thread, right? This is very cool. And it made our code a lot easier, right? Because not only was it really fast, which was cool, and we could write everything asynchronous without pulling our hair out, we could also test everything. We could test all kinds of scenarios that we couldn't test before, right? We could test what happens if you're on airport Wi-Fi and it takes five seconds before it gives you back a response. We could test when you're on no Wi-Fi and everything fails, right? Because we just return Observable.Throw, right? We could even test what happens using a technology called virtual time that comes with Rx. You could test what happens if you hit a button, fast forward in time 10 milliseconds, make sure the progress bar is running, fast forward in time 30 minutes, make sure the progress bar is stopped, right? You can test things using Rx code that you could never test before, sanely, without using, you know, Thread.Sleep, and it never worked and it took forever, which I think is pretty phenomenal.
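The callback-wrapping pattern he walked through over the last couple of minutes, written out as a sketch. The parameter name and the fake two-second work are invented; the shape, an AsyncSubject, a try/catch around the original call, and OnNext/OnCompleted inside the callback, follows his description.

```csharp
// Sketch of wrapping a callback-based asynchronous method so it returns IObservable<string>.
// AsyncSubject<T> remembers its single result, so even a subscriber that shows up after
// completion still gets the value (the "you can't miss the boat" property).
using System;
using System.Reactive.Subjects;
using System.Threading;

class CallbackWrapperSketch
{
    // The pre-existing, callback-style method (made up here for illustration).
    public static void FetchSomethingAsync(string parameter, Action<string> callback)
    {
        new Thread(() =>
        {
            Thread.Sleep(2000);                       // pretend to do two seconds of work
            callback("result for " + parameter);
        }).Start();
    }

    // The Rx-friendly wrapper.
    public static IObservable<string> FetchSomething(string parameter)
    {
        var ret = new AsyncSubject<string>();

        try
        {
            FetchSomethingAsync(parameter, result =>
            {
                ret.OnNext(result);    // runs later, on the worker thread
                ret.OnCompleted();
            });
        }
        catch (Exception ex)
        {
            ret.OnError(ex);           // marshal failures into the observable
        }

        return ret;                    // returns immediately; the value arrives later
    }
}

// Because it's an IObservable now, all the operators compose onto it, for example:
// CallbackWrapperSketch.FetchSomething("foo")
//     .Where(s => s.Contains("foo"))
//     .Timeout(TimeSpan.FromSeconds(5))
//     .Subscribe(Console.WriteLine);
```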
So, you can learn more. The great place to learn more is what are called the Rx workshop videos. So, there's a set of videos that are like 10 minutes long each. They're created by the creators of Rx and they're really informative and they're, you know, they're not too long and they teach you about a single subject in Rx and they're really great. I was so obsessed with this topic that I wrote a book about it, Programming Rx and LINQ, along with Jesse Liberty, and, yeah. Thank you so much. So, questions, comments, concerns, rants? No questions? Everyone understands everything? No. That can't be true. So, the question is, will Reactive Extensions be part of the framework? Originally the plan was that Rx would be part of the framework. The IObservable and IObserver interfaces are part of the framework, version 4, right? But now the plan is to put it into a separate library, which I think is actually a better idea, because putting things into the framework means they don't change, right? They don't change very often and if they do change, you can't break anyone, right, ever. And so, having it outside the framework means they can update it way more. And in fact, it's on NuGet, which means that it's really easy to manage new versions of it. So, like, the less that's in the framework, the better, right? I'd rather have everything on NuGet, right? So, cool. All right. Yep. Anything to avoid? What are the things to avoid? Trying to think. Coming up with nothing. Oh, I know what to avoid. So, sometimes when you're writing Rx code, you're really tempted to start falling back to the old way of doing things. So, some people will write Rx code, but in the middle of that code, they'll start, like, you know, creating events and locks and timers. And what you're really doing is you're kind of defeating the contract that Rx has provided you, and so things will start not working, right? You can't combine locks and Rx, right? You can do things that guarantee the same thing, you can do the equivalent, right? But you can't just create a lock, right? Rx is doing the locking for you. You shouldn't do that, right? And if you do, then weird things happen. Just like if you had, like, an IEnumerable and you just happened to know it was an array and you hacked into it, like, you cast it and started fiddling around, and then bad things would happen to you and you'd be like, bad things happened to me. And you're just like, well, this is what you would expect. So, that's one thing you should avoid. And that's a common mistake. Sure, sure. So, the difference between Subject and AsyncSubject is that Subject is hot, right? If you subscribe to it after it has already published a message, you miss the message, right? So, AsyncSubject is really cool because it has built into it the knowledge that there's only one item, right? Because it's built for these Rx asynchronous functions, right? And so, what that means is that once an AsyncSubject completes, if somebody else subscribes to it, it will replay the result to them, right? And so, there's another subject called ReplaySubject that will replay multiple results to them. And so, the advantage is it gets rid of a lot of race conditions, right? Because if you make a network request and then happen to subscribe after it completes, you still get the result. It doesn't matter, right? So, there's no way to, like, miss the boat on this asynchronous request.
You'll always get the result, the same result, whether you subscribe before it completes or after it completes, which is really cool, because then instead of storing the value you got back from a function, you can store the subject, and then you're really storing, like, the replayed result, right? It's really handy for writing, like, safe code, like, concurrency-safe code, right? Because as soon as I start to do something, I create the AsyncSubject and save it off, right? And so now the race window is really small, because all it is is creating an object, right? And so, you can solve all kinds of race conditions that were just, like, a pain, by using AsyncSubject, because it guarantees that if you miss it afterwards, it's going to replay it. So, it's really quite handy. That's one of the things that the TPL doesn't have, and it fundamentally makes asynchronous programming so much easier: replays, right? Replaying a result after it happened. Cool. How much time do I have? Ten minutes? Oh, cool. Well, let me show you one trick that is completely unrelated to this talk, but a lot of people don't know it. It's about GitHub, so I have to talk about something GitHub, right? I work at GitHub. So, here's a really cool trick. So, let me go to github.com, yeah, there we go. And I will reconnect the internet. So, a lot of people don't know that GitHub is the best Subversion host you've ever seen. So a lot of people think that to use GitHub, they have to use Git, and that actually isn't the case. A lot of people want to use Subversion because of, you know, TortoiseSVN, they really like it. That's fine, right? So, check this out. So, blow your mind. You take this URL, that's the HTTP URL, we're going to copy it. We're going to go over to the terminal. svn checkout, GitPad. We're going to spin for a while. Now, I just used Subversion to check out a Git repository. And not only is this just, like, a one-time thing where I can check it out, right? I have, just like the Subversion convention, branches, trunk, tags, that's the other one, right? I can make branches in Subversion and push them to GitHub. I can create tags. I can use SVN properties. I can do everything that I can do in Subversion on GitHub and push changes to it, right? So, I can do svn... it's been so long since I used Subversion, I can't even remember how it works. So, I have the history, right? So, if you want to use Git at your job, but all of your coworkers are like, you know, I like the Subversion thing, they can use Subversion, you can use Git, they all work together. Which is really cool. So, that's my cool tip. That's all I got. Thank you for attending the talk. Thank you.
This talk will introduce the Reactive Extensions for .NET, an extremely powerful tool for developing modern desktop and web applications that should be in every developer's toolbox. I'll show how to get started with Rx, some theory about how Rx is related to LINQ and the new TPL async/await keywords, as well as showing some awesome practical examples on how to integrate Rx with your existing codebase.
10.5446/51031 (DOI)
Twitter handle. I'm not a frantic Twitterer. I've pretty much given up on blogging because I'm, you know, a little on the ADD side. I like to say I've harnessed my ADD for the forces of good. And so keeping me down to 140 characters is just good for all of us. So one of the best ways to reach me these days is via Twitter. I will respond to you. For those who haven't run across me before... you work, don't you? Be a good machine. I'm older than I look. My mother says I turned about 35 when I was 15 and I've stayed there ever since. So it's been 30 years of being about 35. So I wrote my first line of code in 1977. It was 10 PRINT HELLO. Guess what line two was: the second line was 20 GOTO 10. It was a TRS-80 Model I with 4K of RAM and a cassette tape player and three error messages: What? How? And Sorry. Which are still some of the best error messages I've ever seen. When you see object not found, I just think, sorry. The only difference now is, because it's Windows, it makes you agree with it. Okay? All right? Registry is corrupt. Okay? Yeah, it's not okay, but there is never a Not Okay button. Never is. And like I said, I like to do a lot of different things. So since it all started, I've always done programming, but I have a very strong hardware background. We had to build a lot of our own machines. My father's an electronics engineer, so he taught me to solder. I think I was six. So it's really all I've ever done. I'm completely self-taught. I managed to stay in high school long enough to graduate. That was about it. I've written a lot of software. I've written video games. I've built a lot of hardware. I've serviced everything you can think of. I've started a ton of companies. Many of them have failed. And I've always had a good time doing it. You know, we're in the best industry in the world by a long way. We forget sometimes. So with all the problems and, you know, challenges today, we still have the best jobs out there. But because I've had such a strong background in hardware and networking and software, I ended up working on performance tuning a lot. Now, not just websites. These days, when we talk about performance, we generally talk about websites. I've performance-tuned DCOM. And if you've ever done that, you know the D stands for dumb. But the past 15 years have really been focused on this; this has been my work. The podcasts are one thing. I still do a fair bit of architectural consulting. Strangeloop is a company that we created. It was really a research project around 2004, which was taking all of our thinking around how to make websites go faster and trying to automate it as an appliance. Turns out you can. I know; I have the patents. And we've been making them ever since. So that's one of my side projects. And of course, there's the whole podcasting thing. I ended up putting it on its own slide because it's gotten out of hand. So, .NET Rocks. Started by my good friend, Carl Franklin. You've probably seen him around too. He's larger than I am. He started that in 2002, which is two years before the word podcast was invented. I came on board in 2004 as the third co-host on episode 100 and we're at episode 775, something like that. It's two shows a week, every week. I try to make the Tuesday show a really technical show on a particular product. And then the Thursday show may be more career oriented or some history or, heck, we'll talk about space. We've been doing these geek-outs for the past little while. Listeners of .NET Rocks, I already appreciate it. Oh, thank you.
RunAs Radio is me exercising the other half of my brain. So that's the IT-oriented show. We do that once a week. Shorter, a little denser, but it keeps me up to speed on what the IT world is up to. So a lot of Exchange, a lot of Active Directory. Obviously Server 2012 is hugely important, so we've been pounding away on that. We're over 250 of those done now. And the new show we launched last fall, The Tablet Show. So it's just the recognition, with Windows 8 coming and the state of iOS and Android, that we need to talk about this every week. So we do a weekly now on tablet and mobile development called The Tablet Show. So they're all free to download. They're in iTunes, in the Zune marketplace. You can go to the respective websites if you'd like to download. We have RSS feeds. We give the show away. You can give it to your friends. We sell advertising in the show. That's how it makes money and that's why we can keep doing it. That's the model that works. And it's been the most absurd job I've ever had. It's a long way removed from real life and I seem to be unusually comfortable with that. This talk is actually... I tend to write talks I'd like to go to, and I can't find anybody else to write them, so I end up doing it myself. So I've talked about scaling in a lot of different flavors, and ways of making sites go faster. It's a great problem. It's really interesting work, and the feedback I've gotten from other versions of this talk really came down to: can you just give me a list of things I can do? So that's what this talk is. It's 10 things you can do to make your website go faster. And I've tried to order them in order of difficulty versus return. So high return, low difficulty is going to be very high on the list. Lower return is going to be lower on the list, although all of these have pretty good returns. But they get harder. So many of these are not code based, because generally speaking, changing code to do performance tuning is hard. There are easier things to do, and so I want you to pluck the low-hanging fruit first. Before we do that, I want to talk about this equation. Has anybody seen this before? I have talked about it before. It's called the performance equation. I did not invent this equation. The original version was done by a guy named Peter Sevcik, and he was actually coming up with a way to measure WAN performance, and I modified it for web performance because it helps me think about the problems we have in web performance. Because we always have R, right? The leftmost value we've got. What's the actual response time of the page? That's an easy number. But what is that made up of? And so I've broken it down into sort of four general areas. There's more to it. You'll notice I use the approximation symbol, that little wavy equals, because I'm not going to include everything. I'm not including TCP/IP negotiation. I'm not including DNS lookups. Those account for some time, but it's very little. The bulk of the time tends to be these four pieces. So bandwidth: payload over bandwidth, the total number of bytes you brought down, that's everything. The web page, the images, the CSS, the JS, everything you bring down, divided by the bandwidth. Often people just stop there and go, that's how long things take, which means if I bought more bandwidth I'd go faster, and we know perfectly well that's not true, right? So there's one piece. The next piece is what I call the latency piece. So RTT, plus app turns times RTT divided by concurrent requests. So RTT is round trip time. Don't worry about the bytes.
How long does it take for you to go from the browser to the server and back again? And this is where as developers we lie to ourselves a lot because if you're testing inside your lab, inside your office, well your ethernet cables are pretty darn short and your RTT time is really fast. Five, six milliseconds, you know, maybe 10 milliseconds because you've got a bad switch somewhere. But as soon as you go on the internet that number jumps. You know, even if that web server is here in Oslo it's going to be 35, 40 milliseconds, right? If we put that web server say in Munich we'd be 50, 60, right? Keep going further away, right? We hop across the pond, go over to the US, it's going to be over 100 milliseconds. If we go across two ponds, maybe we're in Japan or Singapore or India, Australia, 200 milliseconds is perfectly reasonable. That's given that the internet's behaving well that day. When the internet's cranky those times go way up. And if you really want to be sad, use satellite relay. So I've done work in countries with very poor landline connections. The reason we use undersea cables, they're short because geostationary satellites are 22,300 miles up and that's 300 milliseconds. The speed of light is hard to beat. 300 milliseconds each way. So we were putting together a system in Bermuda, the undersea cable, their cable wireless, it was terrible. So we were using huge DSS, got a great view of the sky, no problem. We're transferring these big files and in theory those huge DSS satellites will move 500 megabits a second, smoking fast. But because we've got 600 milliseconds of latency, I'm transmitting 1K and then I'm checking the CRC. So it's a half a second to send that, half a second to get the CRC back. Half a second to send, we're getting 1K per second transfer rates. Oh boy. So what do we do? Increase the packet size. Now I'm moving 100K packets and it's still taking a second. We went from 1K to 100K a second just by adjusting packet size, configuration problem. So I have one round trip here that it's just the cost of getting the page down. And then I add in the app turn. So that's all the resource files, all the stuff I've got to go and get multiplied by that round trip time divided by the number of concurrent requests. So how many concurrent requests can your browser do? Well, if your browser is HTTP 1.1 compliant, which is the current specification as of 1998 and it's sad, I know that, it's two, right? You're allowed two simultaneous connections if you're going to follow the specification. The good news is most browsers these days don't. They break the specification. So as of IE 8, I think, opens 6. IE 9's 8. Chrome 15 is 8. But it's that round trip that matters. So depending on your browser and the other problem with older browsers is they're kind of dumb. They've coupled the downloader to the parser. So you may have two connections open in IE 7, pulling down an image, a couple of images. But if one of those things is a JavaScript file, it'll actually stop the other connection because it has to parse the JavaScript before it'll allow anything else to happen. So sometimes you drop down to 1. And if you use a waterfall tool, you can actually see that horrible thing. So that's our latency part of the equation. Two more numbers here. Compute time on the server, compute time on the client. Well, compute time on the server we get, that's your fault, right? That's the code you wrote or it's the DBA guys, usually the DBA guy. 
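At this point three of the four pieces are on the table, with the client-side piece coming next. Written out, using my own shorthand for the quantities he names (P for payload, B for bandwidth, A for app turns, C for concurrent requests, and the two compute times), the approximation he keeps on the board looks roughly like this:

```latex
% R: response time, P: payload in bytes, B: bandwidth in bytes/sec,
% RTT: round trip time, A: app turns (extra requests), C: concurrent requests,
% T_s / T_c: server and client compute time.
R \;\approx\; \frac{P}{B} \;+\; \left( \mathrm{RTT} + \frac{A \cdot \mathrm{RTT}}{C} \right) \;+\; T_s \;+\; T_c
```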
And then compute time on the client, which usually is not that relevant but it is a factor here. How long does it take for the client to actually render the page? So those four chunks, the payload over bandwidth, the latency piece, the compute time on the server, compute time on the client, are the bulk of response time. So how do we get these numbers? I know you guys can have the slide deck. I picked on DotNetNuke a while ago, actually. So I went to the DotNetNuke home page and I used websiteoptimization.com. So this is an analysis service. It's totally free. I highly recommend it. You type in the URL and it hits it in the browser flavor of your choice, external to you, and then gives you a report back on what it got. So that gave me the total number of requests, the total number of bytes and actually how they broke down, how much is HTML, how much is images, how much is JavaScript, all that information. It actually does a pretty good assessment of the page. It'll talk about every resource file and how it was handled and so forth. It's a great little tool. It's completely free. So that answered the payload number and the app turns number. I ran speedtest.net, you know, run whatever tool you want to see what your bandwidth is, right? I like that one because of the dials, it's got cute dials, right? Blinky lights and so forth. And this was my cable modem. You can see that day, my cable modem had a good tailwind. So I got 7,000 kilobits per second download. Now, that's bits not bytes, and you think you would divide by eight, except there's also a start and a stop bit. So it's really divide by 10. So I got about 700K per second out of my cable modem, which circa 2007 was fast. Today it's pathetic. But back then that was pretty fast. And upload didn't really matter for this. So there was my bandwidth number. Round trip time, I used Ping. So I pinged from my machine to the DotNetNuke server with Ping and I got 85 milliseconds. Now, if you're a network guy at this point, you cringe because Ping is not the same as Web. Ping is ICMP, different protocol. And if that concerns you, you can download a mysteriously named tool called TCP Ping and it will actually ping on port 80 in TCP the same way that a web browser would. And you'll find that the number is usually the same. Once in a while I've run into circumstances where it's not the same. The usual reason it's not the same, and I've had this happen, is they've got stateful inspection on port 80. So we had a client who had, you know, port 80 is the vector of all attacks, so they put a big PIX firewall on port 80, and we had this particular website with these very large, like meg-and-a-half pages, which worked fine in-house. But as soon as you went over the internet they had problems. And one of the problems was sometimes they wouldn't finish rendering. You'd get about half the page and it would die. So they blame the programmers, because what else would you do? And the programmers test it on their machine and go, hey, it works great, right? Can't reproduce the problem, moving on. It wasn't until we started doing TCP Ping and saw that the traffic was much slower on port 80 that we realized the firewall was part of the problem. And then I changed the port on the web server to 8080 and everything worked. We were overflowing the buffer in the PIX firewall. I have now summarized two weeks of yanking my hair out in 30 seconds.
But that's the kind of weird stuff you run into and tools like TCP Ping can actually save you some grief on that. So I got my ping time of 85 milliseconds. Last two numbers, server code and client code. So this I had to write code for. There are other ways and I'll talk about them, but the simplest way is I grab a time stamp when the page begins render, right, in ASP.NET or your browser technology or your server technology of choice. When the page finishes, response completes, the difference there is the server compute time. Client compute time, grab it in the header or use the HTML5 timing if you like writing different versions of timing code for every browser. That's good fun. And then you wait for the on load event to fire for the page. That's, in theory, your total rendering time. It depends on the browser again. Each browser interprets what on load actually means differently. They're more or less right and that gives you that time. What you do with those numbers is up to you at that point. I have stuck them in a query string, embedded that as a blank image, so that it actually sticks in the log file associated with the same cookie and I can extract all that information from the log. I have stuck it in an iframe when you add debug equals true to the query string. Which I find really useful because you can ship that. Just leave it in the app and then when you get a tech support call and somebody's complaining about the slowness of the site, you say add this to your query string. What does it say? And you say, ah, server compute time was 0.8 seconds. Client compute time is 237 seconds. You're like, you're running on a 286 with Netscape 2.1? Like, what are you doing? It's a way to track down the problems. Actually built into this deck, I have added this, this is actually an embedded Excel spreadsheet. So I've taken all those numbers and computed it out. So you see the 208K payload, we got 700K per second. So of the four and a half seconds that we measured this performance of that page, 0.3 of it is the payload over bandwidth. 51 app turns at 85 milliseconds, two concurrent requests. I'm not going to deal with the weirdness in there; call it about 2.2 seconds for that term. 0.8 for the server, 1.2 for the client. So suppose you are a developer and suppose your boss comes to you and says, we've got this page, it takes four and a half seconds to render and that is way too slow. You need to fix this. First thought is, he needs a hobby, maybe fishing, right? Because four and a half seconds is pretty good. But no, no, no, he's read the Forrester Report, says all pages must be two seconds, you got to get this thing down to two seconds. Now if you haven't done this analysis, what would you as a web developer do to make that page go faster? You probably come in on a Saturday, so nobody interrupted you. You would open up the code window for that page and you would stare at it until blood came out of your forehead, right? And maybe just suppose you have divine inspiration, you see the light, you rewrite the page and you double its speed, you knock it out of the park, genius, you take it from 0.8 second server compute time to 0.4 second server compute time. It took you the whole weekend and your boss is just not that impressed, right? Because you really only knocked 0.4 seconds off the overall time. But when I do this, what number jumps out here? It's the round trips, right? Half the time of that page is the round trips.
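Plugging the numbers he just read off that spreadsheet into the equation is a one-minute exercise. A quick sketch reproducing his arithmetic in C# (the figures are the ones from his DotNetNuke example, nothing more):

```csharp
using System;

class PerformanceEquationDemo
{
    static void Main()
    {
        // Numbers quoted in the talk for the 2007-era DotNetNuke home page.
        double payloadKB    = 208;    // total bytes downloaded
        double bandwidthKBs = 700;    // ~7,000 kbit/s cable modem, roughly 700 KB/s
        int    appTurns     = 51;     // additional resource requests
        double rttSec       = 0.085;  // 85 ms ping
        int    concurrent   = 2;      // HTTP 1.1 connection limit
        double serverSec    = 0.8;    // measured server compute time
        double clientSec    = 1.2;    // measured client render time

        double payloadTerm = payloadKB / bandwidthKBs;                  // ~0.3 s
        double latencyTerm = rttSec + (appTurns * rttSec) / concurrent; // ~2.25 s

        Console.WriteLine($"Payload / bandwidth : {payloadTerm:F2} s");
        Console.WriteLine($"Round trips         : {latencyTerm:F2} s");
        Console.WriteLine($"Server compute      : {serverSec:F2} s");
        Console.WriteLine($"Client compute      : {clientSec:F2} s");
        Console.WriteLine($"Total (approx.)     : {payloadTerm + latencyTerm + serverSec + clientSec:F2} s");
    }
}
```

Run it and the latency term dwarfs the server term, which is the whole point of the boss story: halving the server code only buys back 0.4 seconds, while the round trips are worth over two.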
Maybe if I combined all the JavaScript files I've got into one file, or combined the CSS files, or even got rid of some images, used browser caching or sprited the images, knocked those round trips down; if I can take out 20, 25 of those round trips, I can pull a second off of that and still have the weekend. Follow me? It's a way of thinking about the problem. Every time I tackle a scaling problem, I generally have this equation at the top of the board because anything we're going to do, and in fact every technique I'm going to talk about through here, influences this equation. Where does the time go? We tend to shift it around, right? Do we take it down? Do we move it around? You know, when you switch an app over to Ajax, start using Ajax in the page, what are you really doing? You're lowering the server compute time in the synchronous state, right? You're going to make it asynchronous. The actual net amount of server compute time is going to go up, but the client doesn't necessarily see that. You're also putting more load on the client because if you're doing Ajax right, you're probably using stuff like jQuery and you're just pulling data sets from the server, which is less work for it and asynchronous, and then bringing it down to do the rendering on the client. If you're using update panel in ASP.NET, shame on you, because you're doing server computation, putting even more work on the server to do rendering on the client. Follow me? The equation helps me think about these problems. So I share it with you; feel free to use it. It's just a good way to tackle it. It gets you focused on the right things. Now that being said, now that I've said, like, you know, maybe you've got a latency problem, what parts of this equation don't scale, right? So far, I've really just talked about performance. Performance is the single user's experience, how fast the site was for a single user. Scale is how fast is it for a thousand of his friends or 10,000 of his friends or heaven forbid, 100,000 of his friends, right? And in a really good scaling site, I want the performance number to remain the same no matter how many users are added. There's a lot of things I have to do to make that happen, including I'm prepared to slow down single user performance, right? My goal is the distance between single user performance and our maximum number of users is the same, as close as possible. I'm willing to move both lines together to get it there. And when we look at this equation, there's certain numbers that scale uniformly. It doesn't matter how many people you have on the site, your payload's going to remain the same. Your server time, your client compute time's going to remain the same. Number of app turns is the same. Concurrent requests is going to vary a bit depending on the browser. That's not a huge deal. Ping, you don't have control over that. Bandwidth doesn't scale. Although last time I looked at the internet, it scales pretty well. You just buy more, right? In fact, the only number that doesn't scale here long term is server compute time. That is the limited resource. And so again, the equation helps me because I test the site, break this stuff out of the equation, I test it at a higher load and the only number that should be moving is server compute time growing. Now I know I'm server constrained and I can start working on that problem. But when people first come at me with performance problems, my first goal is instrumentation. I go after instrumentation, I do this.
Make sure that our problem is actual server compute before we go staring at the server. Make sense? Good. Let's start. The very first thing you should be considering when you're going to improve the performance of your website is your configuration. If you're going to take chances, look, all performance tuning is risky. You're trying to make something that already works go faster. It's the worst kind of work you can do, right? Performance tuning is evil work because as soon as you've done it, everybody forgets. An absence of performance is noticeable. The existence of performance is not noticeable. So if you spend three months on a performance tuning cycle and knock it out of the park, people will forget within a day. What they'll remember is there was a quarter where you didn't ship any features, right? Performance is like air. You only care about air when you don't have it. If I took all the air out of this room, you would care about air, right? But when you have it, you don't care about it. So don't do this work for fun. Do it because you absolutely have to. So as soon as we're going down this path, it's worth looking at these things. The latest version of IIS is faster than older versions of IIS. I have benchmarked it. I can demonstrate it every time. It is absolutely true. Take the time to test your site on the latest version of IIS. You will see a difference. Same with the framework. Features keep getting improved. If you have played with the Script Manager as of ASP.NET 3.5 and the 3.5 framework, the 4.0 one is better. The 4.5 one is way better. Try the latest bits. You will be surprised. It takes time to do it. There's a couple of other configuration tricks that I particularly like. Most web servers that I run into these days are pizza boxes, right? They're 1U machines, two cores or four cores, four gigs or eight gigs of RAM depending on whether they're running on a 32-bit OS or a 64-bit OS machine. And most of them are 32-bit. If I ask this room, how many people are running their web servers 32-bit? Everybody running 64-bit? Nobody has any web servers? All right. Just glad to be here. So, here's the situation I encountered. We had a customer who had completely stressed out their web servers. And one of the fun things about working at Strange Loop was we kill web servers for fun and profit. So, we're making a box to make web servers go faster. So, what we do is we load web servers right to the limit, typically IIS. And then we put the appliance in front to see can we increase the headroom? Can we load even higher? So, I've seen IIS spiral around the toilet bowl a lot. And there's a particular pattern that you learn: this is what IIS looks like just before it dies. So, the normal thing that you run out of in a functional website, if there's no serious bugs, is threads. You're going to run out of threads. That is sort of your constraint and you eventually run out of memory. So for this, I use Perfmon to, you know, try to understand: is my website in trouble? It's not easy to do, right? Looking at the hard drive light blinking? Not good enough, right? It blinks a lot and it doesn't tell you anything. It's not like a little sign pops up and says, help me. It doesn't do that either. I use Perfmon, right? There are other tools, but Perfmon will do it. Everybody knows Perfmon, right? I mean Perfmon is like your own little internet inside the computer. Everything you need to know is in there, you just can't find it, right? So there's four measurements that I look at.
The CPU, which is one of the default ones. I don't care if the CPU hits 100%. I just don't like it pinned there all the time. I want to see a bit of a sawtooth, right? And normally we are not CPU constrained in web servers. Web servers are lying around. The CPUs are smoking cigarettes and playing poker. They've got nothing to do, right? Other things they run out of first. Next thing I watch is Requests per Second, which is part of the ASP.NET Applications set. So for a particular app, how many requests per second are we going through? Because that's actually the work we're doing. How many requests are we serving right now? If that number is really low, you have a problem somewhere else. That number should be steadily high, right? Whatever the traffic load may be. Next number I watch is Requests Queued. So that's part of the ASP.NET set, but not part of Applications, just to make it confusing. So these are the number of requests that have been sent to the server that are not currently being worked on. Now the correct number of requests in the queue should be zero, right? We should always be serving every request as it hits. So why does that number ever grow? You are out of threads. There's only two reasons you're out of threads. One is you've got a lot of long running requests going. Probably the database guy's fault; blame him, go home. That's the usual one. The other one is long running garbage collection. So garbage collection is normally a very quick thing. It happens so quickly you don't even notice, right? This was a decision we made long ago living in .NET. We're going to allocate variables quickly. We're going to drop them slowly. And so we have this heap that grows, right? And you think ASP.NET should be the perfect garbage collected environment for this. Because you create a bunch of variables at the beginning of a page. You manipulate them. You generate the page. Then you drop them all. You don't care. They're all marked as deleted. So your heap should grow, grow, grow, drain down. Grow, grow, grow, drain down. As each garbage collection happens. But you have session in process, you evil person, you. And so now you create a bunch of variables. You slop in a big chunk of session. I swear you're putting data tables in there, aren't you? And then you do another one, another one, another one. And then you do a garbage collection. Every time it hits one of those big session objects, it has to rewrite them to the large object heap. And that takes time. Now, normally garbage collection is pretty smart about running when it's not in a panic. But if you're running out of memory, it will run a panic GC. And a panic GC is a big deal. It's hard on the machine. It takes a few seconds. And when a garbage collection is running, no pages can be served, right? So all requests are queued. So the death spiral of IIS looks like this. Getting close to limited memory. And that's the fourth thing I watch: the total number of bytes in the .NET heap. You get close to the limit of memory. A panic GC runs. Memory gets released, a certain amount of it, because it rewrites those session objects. The request queue will jump up because the requests are still coming in while that GC runs. Now, when the GC is finished, all those requests start to drain. That eats up a whole bunch of memory. It has another panic GC run before it finishes draining the queues. And the queue gets a little bit bigger, a little bit bigger, a little bit bigger, a little bit bigger. And mom is watching. IIS is out there. And IIS does not like what you're up to.
And IIS will then restart the worker process. Boom! Which is a way of freeing up memory very quickly. But it's a little hard on the application. In older versions of IIS, I actually blue screened the machine doing these high velocity things, which is another way of freeing up memory, but even harder on the application. And if you're just on the website, what do you see? The site's just chunky, right? You're doing your thing. And then one day, one of the pages just doesn't render. And then when you go, when you refresh again, your shopping cart's empty, right? So I had a customer call me in. They were at that state every 20 minutes. You put two things in your cart and then they'd be gone. Nobody could finish an order because it was every 20 minutes. Now this was a two core, four gig machine, 32-bit version of the OS. How much memory of those four gigs was available in the .NET heap? At what point did the panic GC kick in? Do you know the number? It's less than one gig. It's less than one gig, it's about 800 megs. Which is crazy when you think, I got four gigs of RAM, how come I only get 800 megs in the heap? Well, the reason is, two gigs to the system, two gigs to user. Now there is a switch you can throw, the PAE switch, to tell the OS you only get one gig, I want three, which is another way of telling the OS, I need you to run really slowly. Not a good switch to throw. In that two gigs of user space, you've still got to run IIS, you still have to run the framework and your app. You just don't have that much memory. So in the panic of how do we get these guys functional enough to go fix the memory leak they clearly have, we switch the machine over to a 64-bit OS and add four more gigs of RAM. Okay? You could try this. It's pretty simple to do. Go to 64-bit, and eight gigs or even 16 gigs of RAM is enough. I mean RAM is cheap. The first thing you'll notice when you do this is that your app crashes. You'll notice that right away. Why? Because you compiled as Any CPU. And compile-as-any is a great flag. It was the default setting in the early versions of Studio; 2010 has now set the default to compile as 32-bit. But when you compile as any, it works just fine as long as everybody's running 32-bit. Because you've never tested on 64-bit and I guarantee you don't have the correct database drivers. That's the one thing that catches people every time, and there's more. There are differences in running 64-bit mode versus running 32-bit mode. So you now go back, recompile, set as 32 bits. So you actually run 32-bit on the 64-bit OS. Now you can have two gigs in an app pool and if you want to get fancy you can set up two of them and actually use four gigs. Okay? So we went from having 800 megs of available heap to four gigs of available heap and the app still had the memory leak. It still had the problem, but now we're recycling every day instead of every 20 minutes. So life was functional. We got them back to work. We got that done in a day. Now we could spend the next few weeks figuring out what the heck you people are doing with memory. You follow me? I can't find any reason to actually run a website in 64 bits because the performance is pretty much the same. We have this argument all the time about, well, now we're actually fitting the registers properly when we're running 64-bit mode versus 32-bit mode, but the pointers are smaller in 32-bit so that's a little more efficient, and you go back and forth. I've never found significant performance differences.
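Before he finishes the 32-versus-64-bit point, here is a minimal way to watch the four counters he listed a moment ago (CPU, requests per second, requests queued, bytes in all heaps) from code rather than the Perfmon UI. The category and counter names are the standard ASP.NET and CLR ones; the instance names (_Total, __Total__, w3wp) are assumptions about a typical single-worker box:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class IisHealthWatcher
{
    static void Main()
    {
        // The four measurements from the talk. Instance names are assumptions.
        var cpu    = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var reqSec = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__");
        var queued = new PerformanceCounter("ASP.NET", "Requests Queued");
        var heap   = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "w3wp");

        while (true)
        {
            // Note: the first reading of a rate counter comes back as zero.
            // Requests Queued should sit at zero; a growing queue while the heap
            // presses against its ceiling is the death-spiral pattern described above.
            Console.WriteLine(
                "CPU {0,5:F1}%   Req/s {1,7:F1}   Queued {2,5:F0}   Heap {3,12:N0} bytes",
                cpu.NextValue(), reqSec.NextValue(), queued.NextValue(), heap.NextValue());

            Thread.Sleep(5000);
        }
    }
}
```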
The only reason to run in 64-bit is you need to address more than four gigs of RAM in a given process. You're running a website. If you need more than four gigs for a web page, what are you doing? Because I want to know. That's a lot of memory. Right? So you're probably perfectly safe running in 32 bits. I've just not had any problem with 32-bit mode. Performance is great. It's a simple configuration and it's one of those things that just sort of releases pressure on the machine. I would also say this, going back to my point about running the latest version of IIS: 7.5, which is 2008 R2, changed the threading model because the number of cores is going up so much. They've actually modified the way the OS works and IIS takes advantage of that and the thread-per-processor cap has been lifted. I've had apps abruptly accelerate because they were thread constrained, just because you switched to the latest version of IIS. You've got to test your site, make sure it functions correctly, but you will get a performance benefit from that. So it's worth doing that. Same with setting up separate app pools. And if you're stressing the machine at this point, you're probably running dedicated servers anyway. Like this is a server for that site, or multiple servers for that site. So the app pool setup is pretty simple. Run in integrated mode. It's faster. If you're building new sites, that's fine; usually we run into this one: you're still running in classic mode because you've migrated a site through several versions of IIS along the way. You came from ASP.NET 2.0 and when you got to 4, you stayed in classic mode. Most of the time you can switch from classic to integrated. It's just not a big deal, but it does need to be tested. So this sort of checklist, once we've decided the site's too slow and we want to do the work, a lot of it can be done in a day. This is not weeks of work, right? Testing everything, trying different configurations, hammering it out. Well worth the time if you can get enough performance out. There's always more performance to be had, right? These big high performance sites talk about performance tuning like painting the Golden Gate Bridge. When you get done, it's time to go back and start again, right? Really tuning sites. I've dealt with applications where every time we got through a tuning cycle and got some performance, they had a party and congratulated themselves and started on it again the next day. Instrumentation. I really ought to have started with instrumentation, because before you do anything, you need to instrument, okay? And this gets back to people forget about performance. You need measurements before you change anything, measurements after you change it, to actually have evidence that you did anything. Because people forget about performance very quickly. Boomerang. Library out of Yahoo. It's free. It's open source and it's basically the client-side piece of doing that client instrumentation. It's JavaScript. It's really clever. Just dropping it in will do the basics for you of telling you how long it took people to render pages. You can set up the beaconing system to capture all the data. There's lots of tweaking and tuning you can do in it. So rather than just writing it yourself, you can pick up a library and do it. If you're already using Google Analytics, there's now a Google, a portion of Google Analytics called TimingJS. Most people don't know about it, but it's a very sweet little library.
And if you're already taking the time to load Google Analytics on your site, use TimingJS to do the same sort of things that Boomerang does. I like Boomerang better because it's more tunable, more codeable. But if you're already paying the price of having a library loaded like Analytics, you might as well use the timing library as well. Everybody know what Fiddler is? Fiddler is your friend. That's all I need to say because everybody seems to know what it is, right? Keep Fiddler around. It tells you the truth. More trouble has been solved because I was actually able to see what went from the browser to the server and what went from the server to the browser via Fiddler. Just take time to learn it, learn to look at those things, watch how cookies get screwed up, watch how authentication gets repeated unnecessarily. All kinds of performance problems can be found there, because you'll abruptly find out your app is not doing what you thought and Fiddler will show you the truth. PreEmptive's Runtime Intelligence. Have you heard of this? Anybody? You own it if you own Studio 2010. This drives me nuts. This is an awesome product that nobody's heard about because they put it under Dotfuscator. So the guys at PreEmptive are great guys. Their original product is Dotfuscator. And in the installer, when you look at the installation, you'll see PreEmptive Dotfuscator, which nobody checks, because who obfuscates, right? But if you actually open that up, you'd see underneath it Runtime Intelligence. So what is this? This is real-time production method profiling that doesn't impact performance. So method profiling is the best tool for actually understanding what the heck's happening on the server. Performance tuning is evil work, right? It's work that makes stuff more complicated. So you only want to tune what you need to tune. You do not want to tune for fun, right? And so we need to do method profiling to actually find out what are the methods that are taking the time on the server. And method profiling, you know, I can use ANTS, I can use the Studio profiler, and it will actually show me: these are the methods you're spending the most time in and calling the most. So it bubbles to the top: these are the things I want to work on. The problem is that method profiling hugely impacts performance. So the normal way for me to actually figure out how to do method profiling is I have to build a load test, which I hate doing, and I'm good at it. But I hate doing it because I've never been able to simulate a real workload. Load testing is great when you want to do A-B testing. So here's version one. We've moved to version two. I benchmark with a load test how it performed in version one. I benchmark with a load test in version two and that will tell me, hey, we're 10% faster, we're 10% slower. It'll tell you that. But that's not what people want load testing to do, right? People want to be able to use load testing to say, will the website survive the weekend? And that's incredibly hard to do. It's very hard to simulate a realistic load. I've taken IIS log files, run them through WebTrends, created profiles based on what people actually do throughout the site, created load tests for each one of those things, built in delays, tried to simulate latency. I'm still wrong. And when you've got to do method profiling, you've got to build these realistic load tests. If you don't build a realistic load test, the right methods aren't going to bubble up anyway. And then this thing came along.
And it's in Studio 2010. And it will simply do that in real time, in production. So I'm doing more and more of my testing just on the production site. Let it run there. Let the users be the load testers. They're good load testers. And now you get back, here's where people are spending their time. Runtime Intelligence is a whole bunch of things. It'll tell you what versions of apps people are using. It'll tell you what features are being used. There's lots of good stuff that's in this. And there's a free version in Studio 2010. You can buy the upgraded version. It gives you more features. It runs in-house. There's all kinds of good stuff there. I got nothing but praise for this thing. But for instrumenting the server and actually knowing where the pain points are, this will give you the answers. So I recommend it. If you use it, this will just be a routine thing for you now: here are the methods that we're spending the most time on. If we're going to tune anything, we tune this. This will give us the most benefit for the least effort. And that's what I'm after. I only got so many cycles to do tuning. I only want to work on the things that matter. Easy one. Turn on compression. It's still not on by default. Well, it's on in the latest versions of IIS. It's on for static resources. It's still off for ASP.NET. Anybody know why? You'll love this. So Microsoft, when they shipped IE 5.5, had a bug in the GZIP decompressor and it couldn't handle GZIP compression, which is the default compression. So to not ship a web server that wouldn't work with their browser, by default, IIS had no compression turned on. That's why. And it's persisted. Just kept on going, right? So they still do that. Now, if anybody's still running IE 5.5, they don't deserve to be on your website anyway. So turn compression on. And the bottom line is this eats CPU resources, which you've got lots of, right? This eats CPU resources and decreases payload. That's the main thing. So that's a worthy benefit. Use a CDN. Anybody played with a Content Delivery Network? You know, normally we tell revenue-based sites to use Content Delivery Networks, because Content Delivery Networks are expensive, right? Akamai is the big gorilla in this business and it's hard to get Akamai working for you for less than about 30 grand a month. So it's not an easy option. But that's not what I'm talking about here. Here I'm talking about the fact that major JS files like jQuery, like the Microsoft Ajax stack, are available via these CDNs. So CDNs do two things. The first is they reduce your payload because you're no longer serving those bytes, right? You're not; let Google serve them, let Microsoft serve them. But they're also geolocated so they reduce latency. They're not going to pull it from your server. They're going to pull it from the server closest to them. They're already minified. They're already compressed. You get right down to a specific version number. This is an easy code change. Just go in and add the line of code. Then instead of referencing your copy of jQuery, you reference Google's copy of jQuery. Save yourself the bytes. This is low hanging fruit. Easy to do. Browser caching. Why do people hate browser caching? There's legitimate reasons. The big one is the PDF file boo boo. Right? So you put up a PDF file and you put up the version with all the markup in it instead of the one that had everything cleaned out of it.
And then, because you browser cached it, when you realize the mistake and you update it and replace it, people still pull in the old one because they've got it. Right? That's what people don't like about browser caching. And you can fix most problems with browser caching by being disciplined around naming files. So long as you change the name, browser cache doesn't have a problem. You don't even have to do that anymore. If you're working in ASP.NET, use the Script Manager. The Script Manager library will do that automatically for you. And in the 4.5 one, it'll do even more. But the bottom line is you can simply tag these things as resources that should be browser cached and they will be. Browser caching does more than just tell the browser, hold on to this. ISPs watch browser cache tags. They look at cache control tags. Why? ISPs charge you money, but they also pay money. Their backhaul, the pipe that they run to other major internet ISPs, costs them money. So they're motivated to reduce the amount of bytes that move across that. So they put servers into their endpoints to cache whatever they can. This is the model that Akamai and Level 3 and all these companies work from: they know every one of these ISPs has various caching points on them. And so they try to read those cache control headers to cache resources. You should be taking advantage of this. Because it doesn't just benefit the guy who hit your site. It benefits the next guy who hits your site from that ISP. Follow me? So you can get browser cache performance on a first hit for somebody because somebody else hit the site earlier and the ISP has already cached it. And all it takes is marking up your browser cache properly. One thing I would warn you about is don't put outrageous life spans on browser cache. A year's good enough. So it turns out that browser cache values are actually stored as seconds. And it's a signed integer. So if you put 100 years in, it actually overflows the signed integer and you get no browser caching, right? Your limit is 2.1 billion seconds, roughly 67 years. And if you need to cache longer than the internet's been around, what are you doing? A year's good. Maybe five years, okay? But don't go too far. The latest browsers fix this problem. But if we all had the latest browsers, we wouldn't have any problems anyway. So keep the number realistic and you'll obviously have it work properly. It's just a nasty little whammy. And this is something you find in Fiddler. Because it overflowed, it'll actually come back as a zero: don't cache. Host name aliasing. So I want to tackle the concurrent request problem. Now, the latest browsers, again, don't have this problem. They're already opening up to eight connections. But a very simple trick you can do in code is to just change the URL alias. So your prefix, www1, www2, www3. If you're dealing with IE 7 customers, they only allow two connections, so I can have two www1s in a row. That'll be two connections. Then two www2s, that'll be two more connections. It'll open them all, right? You can open as many as you want. Just create enough aliases. The challenge here is managing them yourself to actually make sure that you're opening the right number and the right sequence so you can use all those names correctly. But it can absolutely reduce the latency on the page, okay? We create more connections. It's an easy trick. There are tools to help you with this.
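Before the commercial tools he mentions next, a hand-rolled version of that aliasing trick might look something like the sketch below. The wwwN host names and the example domain are assumptions; you would still need DNS entries pointing them all at the same servers, and the hash just keeps each file on a stable host name so it stays browser-cacheable under a single URL:

```csharp
using System;

public static class StaticHostAlias
{
    // Assumed aliases; all of them must resolve to the same site.
    private static readonly string[] Hosts =
    {
        "http://www1.example.com",
        "http://www2.example.com",
        "http://www3.example.com",
        "http://www4.example.com"
    };

    public static string For(string resourcePath)
    {
        // Simple deterministic hash so a given path always maps to the same alias.
        int bucket = 0;
        foreach (char c in resourcePath)
            bucket = (bucket * 31 + c) % Hosts.Length;
        return Hosts[Math.Abs(bucket)] + resourcePath;
    }
}

// Usage in a page, e.g.: <img src="<%= StaticHostAlias.For("/images/logo.png") %>" />
```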
For example, if you own F5's big IP, that's a low-balancer firewall HTTP compressor, they actually have a feature called a domain name aliasing and it'll do that automatically for you. It'll automatically do the rename based on the browser it was hitting. So it says, oh, this is IE 8. I'll use six connections per name. It's IE 7. I'll use two connections per name. So there are tricks you can do there. And obviously, this is not as big a deal as we get in these browsers that are violating the specification. By the way, they're finally working on an HTTP 2.0 specification. They're only 10 years late. And you thought the HTML5 spec took a long time. This thing's hopeless. But one of the things they're doing is getting rid of this whole concept of concurrent requests that you're just going to get everything at once. There's a bunch of different policies around that. Minification and combination. So make your JavaScript files and your CSS files as small as possible and combine them into one file. Now, why do we break them apart? We need to manage them, right? We need to keep things orderly. In the end, maintenance is an important aspect. And maintainability of our app is important. Legacy apps are apps you could no longer maintain. That's how they become legacy, right? You can't manage it. You stop maintaining it. A couple of ways to go about this. If you're using ASP.net, use Script Manager. Script Manager actually can list all of the JavaScript files and it will automatically combine them in four or five, not only automatically combine them, minify them, and compress them, and browser cache them, and then monitor any changes for you. If you change any of those files, it makes a new file automatically. You do nothing. Just use it, right? You already own it. It's in the stack. It's free. If you want to do this by hand, there's two sites. So there's the JavaScript Compressor, and it's literally there's two text boxes. You paste your JavaScript on this side. You click a button. You pick up the JavaScript on the other side, now smaller. Go for it, right? And same for CSS. It's just a way to go about this. But there are other tools that make it very simple to do. And this reduces round trips. So cut back on the latency, reduces the size of payload for really no cost, right? Especially if you can do it in a way that doesn't impact your maintenance. All right. I will now jump directly to Evil. This is a, have anybody heard of MHTML? This is Evil, okay? We're going to talk code. This is the worst, evilest thing you can do. Why would you do this? Okay? Generally, the only place I've ever applied MHTML is to a landing page. So the page that people are most likely to go to first. We know we get a certain bounce rate. Bounce rate being the number of people that hit this page and know nowhere else. So they, you know, if I've got an e-commerce site, I study how people buy stuff from my site. And I know they hit this landing page, then they go here, here, here, here. It's like five, six pages in, then they buy, which is the minimum requirement to buy something. And the longer they stay on the site, the more likely they are to buy. And they get the sort of threshold of here's what the buying cycle looks like. Now, my bounce rate on that first page is usually a pretty big number. It's like 60, 70%. And one of the arguments is the speed of that page is the reason they're bouncing. It takes too long. You've done this. You've gone, you've gone, you've Googled for a product you want to buy. 
You click on the first site, it takes so long to load, you go to the second site, never go back to the first site. You're a bounce. Right? So in my mind, I need to get the maximum performance out of that page, as fast as it can be, even if I'm prepared to commit evil. Right? You are prepared to sacrifice chickens to make this page faster. That's the path we're going down here. What I'm going to do is eliminate all the resource files. There is only the page and nothing else. So how am I going to do that? That means all my JavaScript's inline, all my CSS is inline, and all my images are inline. They're all in the page. So the page is bigger, latency has been completely eliminated. Follow me? This is evil. This is the ultimate in unmaintainable pages, but it is fast. So how do you do it? Built into IE, of all flipping things, is... I'm actually online here. Let's go to .NETRocks.com. Okay, I'm not online. They're not going to have any problems with the performance on this. It'll be fine, right? It'll be... well. Okay, nobody surf. Still thinking about it. So built into IE, this is, there we go. We can use File, Save As, and the default is this Web Page, complete, HTML. There's also this Web Archive, single file, MHT. So if we save this, let me show it to you. We need to look at this in Notepad. Where did it go? So what it's actually done now, okay, I've lost it, is embed all of the, there's an image. What's that? I'm not sure. So there is a 1.4 meg ASP.NET page that is basically... MHTML is MIME encoding of HTML. So what it's done now is taken all the images and it's MIME encoded them. So I can go through here and cut each one of them out and use them in my page. Follow me? So I now have MIME-encoded versions of my images, and I can use CSS to place them however I like; it's just a straight stream of text. Completely unmaintainable, right? Every time I want to change the page, I have to rebuild the whole page, right? It's not a trivial thing to do. But when my boss says all-out fast, this is about as fast as you can go. The challenge now is you only have the payload left to pare down. Like what can you do to get the payload down? And how much of it can you actually do this way? How much of it is actually dynamic and needs to be computed on the server? Now, when latency is the issue, this is the ultimate killer, the MHTML. I don't recommend it. It's just a tool in my toolkit for when we're absolutely desperate for performance. And one of the problems is you must test with different browsers. There are rules. MHTML is not consistently implemented. I told you it was evil. But I wanted you to know, you know, here we go. That's as far as we can go. All right, let's talk about less evil things. I've already mentioned Ajax. And Ajax is a good strategy. I don't have a problem with it. First thing is the Ajax library itself is big, but it's also available on a CDN. So make sure you do that. So right off the bat, the price of Ajax doesn't hurt you, because the Ajax library is way too large. I mean, the jQuery library is better if you're going that route. You're better off. Really think about your page design. The main thing here now is you're able to load in whatever order you want. So if you're evil, you will load the ads first. Anybody seen a site like that? All right? The very last thing that loads is the article you actually want to read. They'd rather not load it for you at all. Right? That's what they're doing. Now, you can use this for good. Right?
You know that everybody's going to this landing page, but a certain quantity of those people immediately need to click on something else. That should be the first thing loaded. The least amount of time they have to spend on that page, that the first thing that came up is the thing they're most likely to click on. The next thing that comes up to the next thing they're most likely to click on. So we're now doing that asynchronous loading, but we're actually doing it so that people don't have to finish rendering the whole page. It's better for you. It's better for them. It's using it for good. Right? That's the best thing I can think about on AJAX. And the SPA thing, I think the jury is out on this. Lots of folks are convinced that this is the future, that all web pages will be built as a single page. You will go into a div tag and you'll just keep loading stuff into it. I don't know that I buy in. I get the idea that we're just going to get rid of the form post as a round trip entirely. And that HTML5 is going to make everything beautiful and shiny. I just, you know, I've heard the story before and I'm such a skeptic. I've got about 15 minutes left and I want to spend it all on caching because caching is the most important piece. And I really don't even want to talk about the ASP.net caching features per se. I want to talk about data caching because ASP.net caching is just fine. This again, start here because this is low hanging fruit. I mean, in the end, what is caching? Stop rendering the page you don't have to render. There's a bunch of ways to do that. You know, it's a question of how frequently things change. So you can do cache policies where as long as the query string is the same, the page is cached in memory. So I'm now trading memory for performance. And that memory is the.net heap. So if your memory constrained, this is going to make things worse, not better. Now the good news about cache objects in.net heap is they're flagged at a lower priority than, say, the session object. They're flagged as you can throw this away in an emergency. So if you have a panic GC, it'll actually toss all the cache objects out. So you're going to hurt performance to recover the machine memory-wise, which is good. It's a good feature. That came as a service patch in ASP.net 2.0. But the big one for me is data caching. So let's go back to our sort of exercise, right? I've used real-time intelligence or I've used some kind of profile and I've identified a method. This is the method we're spending our time on. Let's presume, say, it's like a product list, data request, right? It's always a database call, inevitably. Say that query runs in three seconds, which is not horrible, but it could be a fraction of a second. But the more important thing you know is that it doesn't change that often. And it's being hit constantly. It's being hit once a second, right? I've instrumented out, you know, we're hitting once a second, so it's constantly running. This is a beautiful candidate for caching. It doesn't change all that often. We only add new products once a month. So why are we constantly hitting the database for that? So I take this perfectly harmless method that retrieves a data table and I'm going to add caching to it. So I put the library in. What do you do? You write a little line of code. If the cache object is null, then run the query, populate the cache object, return the cache object, right? And you test it on your machine. You run it once, it takes three seconds. 
You run it again, fraction of a second. You're a genius, you ship it, right? What happens? The first time two people run it, all hell breaks loose, right? Because they collide with each other. They're both trying to populate the cache object, and bad things happen. So you're like, oh, okay, I need to lock the method. Fine. If the cache object is null, lock, run the query, populate the cache object, out you go. Now you test it and it works beautifully, right? It worked perfectly the last time. But at least now you have a lock in place and it doesn't have the lock collision. So say you run a load test, you run a query per second, right? They hit the page once a second, three second query. So the first guy comes in, the cache object is null, he locks the method, starts running the query. At second two, the next guy comes in, he checks the cache object is null, it's still null, he hits the lock, he waits. Third guy comes in, he checks the condition, still null, he waits at the lock. First guy finishes, populates the cache object, pops out. Second guy now gets the method. What does he do? He populates the cache object, because he's already passed the if. So the correct pattern is: if the cache object is null, lock, if the cache object is null. The problem is that this still works when you do it wrong, right? It is actually faster. If you just get a couple of seconds of gap, you know, the next guys are going to come through, they'll all catch up. If you look at the query profile, you're looking at it going, why are they running that query so many times? It should only run once. You see it run over and over. And I've seen it run like 50 times because we were hitting it that hard, right? And the stupid part is those ifs look dumb, right? Like a junior programmer comes through, he goes, why is he checking for null twice? And he'll fix it for you, right? So you got to put a comment block on it. If this looks dumb to you, don't touch it. Talk to Richard, right? And it gets back to this whole idea that performance tuning is dangerous, right? Because it's obscuring code. And, you know, we're not always sure we're going to get a benefit, right? You are hacking here when you try and do performance tuning. So you take a benchmark: how fast does the site go now? How fast does this page run today? Then you make a change, and you typically make the code more complicated. Then you run it again. Is it faster? No? Then back out the changes, right? Don't leave changes in that aren't actually providing benefit. Yes, it is providing benefit? Now you put in the comment: we did this, I know this looks dumb, but we did it for performance reasons. Don't mess with it. You know, check the log on why we did it this way and you go on from there. And don't go mental with cache, because cache has a price. You are now trading .NET memory for performance, right? And if you run the machine out of memory, it's going to throw all the cache objects away. Ask me how I know. You know, people get the cache hammer and every bit of data is a nail and they cache everything. And they run the machine out of memory. So one of the habits we got into is we started instrumenting cache. How many times does a cache object get reused? We use a dictionary object and every time you did a cache call, we just added to that cache object's counter. And we found out 50, 60% of the items we were caching never got used again. Just wasn't worth it. And we're still not at the hard part of caching, which is when do you expire? When do you expire a cache object?
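The check-lock-check pattern he lands on might look roughly like this in ASP.NET. A sketch only: the ProductList key, the one-hour absolute expiry, and the query stub are placeholders, not anything from the talk, and the second null check is exactly the line that needs the "this is not dumb" comment:

```csharp
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    private const string CacheKey = "ProductList";          // placeholder key
    private static readonly object SyncRoot = new object();

    public static DataTable GetProducts()
    {
        // First check: the fast path, no lock, serves almost every request.
        var table = HttpRuntime.Cache[CacheKey] as DataTable;
        if (table != null) return table;

        lock (SyncRoot)
        {
            // Second check: someone may have populated the cache while we
            // waited at the lock. Looks redundant; it is not. Don't remove it.
            table = HttpRuntime.Cache[CacheKey] as DataTable;
            if (table != null) return table;

            table = RunProductQuery();  // the slow three-second query

            HttpRuntime.Cache.Insert(CacheKey, table, null,
                DateTime.UtcNow.AddHours(1),      // placeholder absolute expiry
                Cache.NoSlidingExpiration);
            return table;
        }
    }

    private static DataTable RunProductQuery()
    {
        var t = new DataTable("Products");  // stand-in for the real database call
        t.Columns.Add("Name");
        return t;
    }
}
```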
Now, we're developers, so when do we expire? When you need to, when something changes. That's the way we think. It's very logical. It's also very dangerous, because "when" becomes a very hard thing. What happens if you get it wrong? Right? I'm a big believer in microcaching. I'll cache for a minute. Because if I'm on a high load site, they're really hammering on the data all the time. The fact that for a minute they get it all from the cache and then they repopulate the cache and go again, that's a big benefit. That's enough. But a minute is enough, right? It provides sufficient benefit. But that gets us to the bigger problem of caching, which is what happens when you're wrong, right? So if we're playing that product list game, say we've got the inventory. Well, somebody buys something, and it's entirely possible that we don't update the cache in time and somebody else buys it as well. Now, that's actually a business problem, right? Now, if we have a back-order system, we can probably compensate for it. You've experienced this. If you've ever gone to Expedia, seen a great price on a flight, went to buy it, and it said, guess what? We don't have that price anymore. We have this price for you, which is invariably more. You know, what happened? You just had a failed cache hit, right? Expedia doesn't go to the airlines every time you want to check a flight. It caches a whole bunch of them, because it knows that you want it to be faster than the time it takes to pull all those things down. It wants the site to be fast. So for 99.9% of the cases, everybody gets a great experience of being super fast, and for 0.1% of the time, you get surprised with a higher price. And you can do the same thing with caching, right? A back-order system would fix that, because most people will have the benefit of a much faster site when they order, so you'll get more orders, and once in a while, somebody's going to be back-ordered. And if it makes them unhappy, give them free shipping. Very few problems can't be fixed with free shipping, right? Just a path to go down and it's a way of thinking about the problem. Very useful to be able to go to our employers and say, look, I'll make the site faster, we need a back-order system. We have to deal with the fact that once in a while we're going to sell something we don't have. And when they say that you cannot do that, you can only sell what we have, you say, okay, as long as you're happy with a slow site, that works for me, right? That's the price we have to pay. We have to balance these things off. And things get crazier when we start talking about multiple servers. I mean, it's one thing to do logical expiry, right? Expiry when things change on one machine. The cache is here, the change is here, the cache can be expired here. But what if the cache is here and expiry happens over here? Now you have to start doing notification. And notification becomes an N plus one problem. Works great for two machines. Pretty good for three machines. Four machines, that starts getting pretty noisy, right? One change is now three expiries. That happens a lot, right? And there's still some time delays. And sometimes they get lost. Somewhere around seven machines, that conversation gets so stormy that it breaks down. There are tools, libraries, to try and compensate for this, but the basic math about synchronization of caches becomes a very hard problem. The concurrency of caching is hard. So do it carefully, cache only what you need to.
And you have to build in practices for how we deal with being wrong, right? You're moving the data out of the database closer to the user to improve performance. There are consequences. And you just have to articulate those, right? That being said, this is the single most effective coding strategy that exists for improving performance. I've tried lots of different things. This is the thing that wins. And it doesn't have to be hugely complicated code. If you can't afford the .NET memory, try dumping the product list out as an XML file once a day. Because IIS is magically fast. It'll actually cache that XML file in non-.NET memory and read it off for you. It's just a question of how frequently you can cope with the change on it. You follow me? It doesn't have to be the obvious practice. There's a bunch of different ways to solve this thing. And IIS is good at different things. You want to see IIS magical? Don't run ASP.NET on it. Really. It's super fast and smart without ASP.NET in the way. It uses all that memory for caching those static files, and it's fast. So we actually created separate pools of IIS servers just for static resources, separated from our ASP.NET servers. Because ASP.NET is happier when there's no static resources going on either. So separating out became very effective. But caching is not a simple practice. You just have to work your way through it. And data caching is the powerhouse. So if there's any one thing when it comes down to coding, this is the thing to do. I put it last for a reason. To me, it is the last resort. Don't do it for fun. Use the right pattern and do it because you absolutely have to. And if you don't have to, if you are not getting yield from your cache, if you don't get multiple hits off that cache item, stop caching it. It's not helping you. There's one more trick I'll give you about caching that I found extremely beneficial. Get away from the idea of allowing the user to populate the cache. The day you start caching, you're not dependent on caching. A few months later you will be. You cannot function without caching, because load's gone up and now you've been living on caching. And allowing the user to repopulate the cache is the problem. You chuck the cache object out and the next guy has to come in and populate the cache and everybody waits for him. Don't do it that way. Create a new cache object and roll it over. So you know the cache is expiring. We call a service, the cache manager, to say this cache object is now out of date. It now runs the query again, independent of any user, right, completely asynchronously, builds the new cache object, drops the old one, replaces it with the new one. Users only ever hit a cached object. They never hit the database directly. Now meantime, while that's going on, we're still serving stale data. So you better have a compensation method for stale data. But you'll maintain your performance, right? Remember my goal: for a single user and for the maximum number of users, performance is the same. If we always hit from the cache, we get closer to that goal. Follow me? It's a pattern for how we do caching. I don't want the user populating the cache. It's my job and I'll keep it separate from that. Okay, I think I've run through all of my time. So thank you very much. I will take any questions. Otherwise, I'll see you tonight at the party. Thanks.
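As a closing sketch, the rolling pattern he ends on, where a cache manager repopulates off the request path and users only ever read, might be approximated like this. The refresh interval, key name, and query are all assumptions; the point is only that the old object stays in place until a fresh one is ready, so no user ever waits on the database:

```csharp
using System;
using System.Data;
using System.Threading;
using System.Web;

public static class RollingProductCache
{
    private const string CacheKey = "ProductList";   // placeholder key
    private static Timer _refreshTimer;

    // Call once at startup, e.g. from Application_Start.
    public static void Start()
    {
        Refresh(null);  // build the first cache object before any user needs it
        _refreshTimer = new Timer(Refresh, null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));  // assumed interval
    }

    // Requests only ever read; they never run the query themselves.
    public static DataTable GetProducts()
    {
        return HttpRuntime.Cache[CacheKey] as DataTable;  // possibly stale, by design
    }

    private static void Refresh(object state)
    {
        try
        {
            DataTable fresh = RunProductQuery();        // the slow query, off-thread
            HttpRuntime.Cache.Insert(CacheKey, fresh);  // swaps in the new object
        }
        catch
        {
            // If the refresh fails, keep serving the stale copy rather than
            // letting users fall through to the database.
        }
    }

    private static DataTable RunProductQuery()
    {
        var t = new DataTable("Products");  // stand-in for the real database call
        t.Columns.Add("Name");
        return t;
    }
}
```

As the talk says, you still need a story for stale data; this only keeps the repopulation cost off the user's request.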
Join Richard Campbell as he opens up his web performance tuning toolkit and walks you through ten different techniques for improving web performance, rating each by difficulty, risk and reward. You’ll learn about a variety of techniques for reducing payload size, latency, server and client compute times. Some techniques are easy, like utilizing compression, and some are complex, like implementing MHTML. But each technique has the potential for improving the performance of your website - and Richard will talk about the cost of that performance as well as the benefit.
10.5446/51034 (DOI)
I want to get started. I can just go. I'm already on. If we're really quiet, we can listen to this talk right here. Good morning, everyone. I'm Sarah. Thank you so much for coming to my talk. This is a fabulous conference in your fabulous city. I had the best breakfast buffet this morning. It's my favorite thing about coming here. Is that bad? I think that's a good thing. It's a compliment. I like to find out a little bit about my audience and what your experience is. Could you raise your hand if you've ever used Node before? Can you raise your hand if you are traditionally at client side, JavaScript developer? That was most everyone. What about server side developers? A lot of people are both. It looks like it's good. I'm going to start this talk by telling you, I don't use PowerPoint, a little about me. My name is Sarah. I'm going to tell you a little bit about my background and where I was coming from so you can gauge the level of difficulty. I have been building software for 10, it's actually 11 years now, I figured, yesterday. In America, it is impolite to ask how old that makes me. I'm not going to tell you. Don't ask. Seriously. When I first started out as a developer, I did data warehousing in MS SQL for a few years. I did a lot of visual basic and T SQL and things like that. That was the level of coding I was doing. Then I did the next four years doing ASP.net and C sharp. I really enjoyed ASP.net, their MBC framework. I did that for a while. Then I found JavaScript. In ASP.net, they give you packages that you can work with and you use their built-in libraries. When I started looking into those libraries, I found JavaScript. There are two in the past. 10, 12 years, there have been two big advancements in JavaScript. They are the biggest ones, I think. Three, now that there is no. The first one was Firebug. I don't know if you remember doing JavaScript before Firebug existed, but it was awful. It was terrible. There wasn't any debugging. You were doing everything with logging in the console and that was not great. Or alerts. You did a million alerts. Then after that was jQuery. JQuery just made JavaScript a lot easier. At that point, both jQuery and Firebug existed. I really enjoyed developing in JavaScript. Since then, I've focused on the client side, building frameworks for clients, for their companies and object-oriented JavaScript. I've also played in Python and Ruby enough to be bad at it. I think talking about software can be really boring when you're just talking about syntax. I decided to try something a little different today. Today we're going to be examining a personal failure of mine while building software. Has anyone read the book, People Wear? There's this fabulous book that you should absolutely read. It's called People Wear. I didn't see any hints. It's basically about how in software when there's failure, it's usually not the technology. Because we, there's just awesome stuff out there. Usually when software fails or software company fails, it's the people. Or it's the mistakes they made. One thing that we don't talk about enough are times that we've tried and done things and failed at it. I think it's important to discuss it with each other. You never see it. If you're ever in New York, I live in New York, there's something called a New York tech meetup. Everyone would feel wonderful about being a developer, nerd or anything, whatever you consider yourself. It is awesome. Once a month, there is, we have basically an amphitheater now. 
It's the first Tuesday of every month. It's also aired online. It sells out. They open up tickets and groups over the month. It's impossible to get tickets. They sell 900 tickets and they sell out right away. All the New York tech meetup is people that are working on cool shit and want to show it to you. Basically they do five minute demos where people just go up and be like, this is what I'm working on. I like it. The audience asks us questions. That's it. It's two hours long. It is awesome. You go around and you sit there and you look around and you're like, wow, I am not alone. There are 900 other people that think this is cool. It's super fun. Anyway, they had a series on failure or a test series on failure. This guy came up and he was like, yeah, I had this startup. He was talking about, it was a really intimate failure of his. I was like, wow, this person is really opening up. That is awesome. I'm so glad that we're finally discussing this. He's like, yeah, and then I was laying off employees and that was going to be it. We got into the tubes and this is what I learned. Then I decided to start doing something different. Then we were group on. I was like, that's not a failure. Group on is kind of a big deal. I've heard of that before. Anyway, I think it's important to talk about this stuff. Hopefully we can all learn. I've definitely learned. I'm not going to do the conclusions for you. I'll let you come to them. There are things we should discuss though that are not in the story. It's throughout. We're going to talk about node. We're going to talk about what I learned going from client to server side JavaScript. Just the things that you have to do that you maybe haven't done before. Here I'm going to cover some things that don't work into the theme work of the story. Why do you use node instead of client side JavaScript? It's a good question because client side JavaScript is awesome, especially with things like backbone and great libraries that we can use. So security is a big one. If you want to do authentication and stuff like that, you probably shouldn't do it in the client side. It just doesn't really make any sense. So security, it lives on the server. It's a lot more secure. Concurrency, people doing a lot of things at the same time. And you can manage that using node. That's one of the more fabulous things about it. It's also as cool as shit. So some other things we should know that aren't in the story. Node package manager, it's called NPM. So the way you install packages that you're going to work with in node, it's kind of like gems if you do Rails or Ruby. And it's basically like homebrew where you're just like brew install or gem install, that kind of thing. So you can actually, I think you can homebrew NPM. If you want to learn anything, but there's a ton of libraries, I'm going to go over a lot of the libraries in this talk. But there's lists of them. If you go giant, they're the ones building out node.js. They have a GitHub account with hundreds, I'm guessing hundreds. It looks like at least a hundred different libraries. And the difference is between them for the different tasks you need them for. So if you want to go check out some libraries or learn more about, I'm not going over all of them, obviously. But if you want to learn more, check out their GitHub. Okay, so the beginning of the story, it starts like this. Do you guys do hackathons here? Hackathons in North, you guys have to do hackathons. Hackathons are amazing. So do you really not do hackathons here? 
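Picking up the Node points above (server-side JavaScript, npm for packages, many concurrent requests on one process), the smallest possible Node server needs nothing from npm at all. This is a generic hello-world sketch, not code from the talk.

```javascript
// The smallest Node server: just the built-in http module, no npm packages.
// One process serves many concurrent requests from its event loop.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
}).listen(3000, () => console.log('Listening on http://localhost:3000'));
```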
You have to do hackathons here. Okay. So hackathons can be like 24 hours or 48 hours where a bunch of people get together in the same place. And it's usually a contest. There's usually a cool prize. For photo hack day, there's like a $10,000 first prize, which is like five Kroner. I've been making that joke for like the past five days. And also, so you got to be on the NASDAQ screen. So your app that you built, if you win, is on the NASDAQ screen in Times Square, which is like huge. I wonder how that went really, like what's interesting about that to people watching the NASDAQ screen. And there's all kinds of prizes. There are sweet headphones there. And people sponsor food. And basically, you stay there for 48 hours and it's gross because you've been coding for 48 hours, the room of other coders, also coding for 48 hours. But it's awesome and it's really rewarding. So in the end, everyone demos. There's like a panel of judges. They ask you questions. And somebody wins. So this is photo hack day two that happened this year. For photo hack day one, my team got the People's Choice Awards, which is basically like winning. We won the sweet jet blue gift cards for our application called Stash or Stash. It was, you know how there's all these things that put mustaches on people? It was the opposite. We used the face.com API to isolate points on people's faces that had mustaches and replace the mustache area with their skin. So it took mustaches off of people. So that was awesome. So we were coming back and photo hack day two and we were going to bring it. I was thinking about this for like months beforehand. I knew what we were going to work on. Like I'd been planning this, dreaming for this. It was really exciting. So my idea was, so I love octocats. Do you guys like octocats? I obviously love octocats. I think they're so cute. They're the GitHub mascot, right? And I think they're adorable. And people make different kinds of octocats. There's like a whole page dedicated to different arts surrounding octocats. I love them. And the face.com API is really awesome. I don't know if you've ever heard of them. They basically you upload a picture to them and they will return back to you a lot of information. They will give you gender of the person in the photo. You have to do like profile. So like gender of the person based on percentage. If they're wearing glasses, kind of facial hair they have, just like all kinds of data, where their eyes start and end. But it's a really, really awesome and powerful. I think Facebook just bought them this week actually. Was it Facebook? Somebody bought them. Someone bought them this week. So why not give people their own octocat? So people take their picture. They upload it to the site. And an octocat that looks like them is dynamically generated and sent back to them. That is hands down going to win the hack day. No one's missing of anything cooler than that. So that's what we're going to do. So two months beforehand I started recruiting my team. I was talking to people about this. Listen, we've got to get the team together. We're all going to go. This is what we're going to do. This is how we're going to break things down. This is what we're going to work on. So this is a secret to winning a hackathon, which happens to our team. I run an organization called Girl Develop It. And we do low cost software development classes geared towards women. That's my elevator pitch. 
And so the one thing is the majority of our students are designers coming to learn to be developers. And they are absolutely our secret weapon at hackathons. Because our hacks, like everyone else's hacks always look like garbage, but they're cool. But ours are cool and they look really good. So I was like, we're going to get a whole bunch of client side people together to make this look amazing. And then I should probably get another person working on the server side so I don't stress myself out. And when we did stash or stash, I was the only server side person. I was up for like 48 hours in a row. I was exhausted and a zombie. So I was like, I should get at least one other person so that I'm not doing this all by myself. And we should probably get an illustrator as well because we're going to be making these OctoCats. So we have to figure out a way to make features for the new OctoCat. So that was that. And then beforehand, so before a hackathon, you're allowed to do all the research you want. You just can't start coding. So I was in charge of the research. And the game plan was to get the attributes that I could from the face.com API. They returned to you most of what you need, but not everything. And then dynamically, so if you layer SVG files on top of each other, you can make a composite image. So that was my game plan. I was going to get different features. I was going to get SVGs, and I was going to layer them on top of each other to make an awesome OctoCat. And then of course, the profit thing. It's a default for the internet. Okay. So it's a day of the hackathon. I'm going to actually take a sip of my water that I've been holding. So we sat down, and I put everyone in three teams. There was like seven or eight of us. There was an illustrator. Her name was Tina. She was awesome. Actually, there was like five or six of us. We like lost people. We designed and templating Pamela and Tony. And then back into implication, just going to be me. We lost three people of our team right away when they heard what we were working on. They were like, we're probably going to do something smaller. I'm trying to remember what they did, actually. It was interesting. And I forget. But we'll learn more about them on the other side. So I got on there, and I was definitely project managing. I was like, all right, Tina, you go sit over there. You're going to work on illustrations. Client-side people, you get to designing. And me, I'm going to start on the server side. I did not find that other person to help me with the server-side stuff going on. So the client-side team as well, they were not the HTML, CSS design. That's all that they were working on. So Node uses templating. I decided that I was going to do this in Node. For reference purposes, this is my second Node application. I had done an e-commerce Node application for a client. So I was not 100% comfortable, but I at least knew a little bit about what I was doing. So Node has templating engines like Rails does. And Jade templates are probably the most popular ones. They look a lot like Hamel. We're going to compare them in a second. So I've worked with the templates that makes it easier to display data. They haven't done that. But the cost of them to learn how to use Jade rather than me doing data in HTML and CSS was, I was like, all right, you guys. Here's a tutorial. You're going to sit down. We're going to do HTML, CSS first. I'm going to show you how to change it to Jade. And then we're going to be golden. 
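As a rough illustration of the layering game plan described above, stacking feature SVGs into one composite octocat can be as simple as nesting the fragments inside a single root element; later layers draw on top of earlier ones. The helper below is invented for illustration, not the code the team wrote.

```javascript
// Sketch of compositing an octocat by stacking SVG layers. The fragments
// (body, eyes, glasses, hair, ...) and the viewBox are illustrative.
function composeOctocat(layerFragments) {
  return [
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 400">',
    ...layerFragments.map(fragment => `<g>${fragment}</g>`),  // later layers on top
    '</svg>'
  ].join('\n');
}

// const svg = composeOctocat([bodySvg, eyesSvg, glassesSvg]);
```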
So just to compare a couple different templating engines, Jade is a super popular one. It's pretty good looking. It's white space sensitive. Personally, I don't like things that are white space sensitive. But a lot of people do. It's not awful. When I looked at the giant list of templating engines, Jade and Hamel.js are made by the same team or person, maybe the team. And this was literally what it said. Jade is like Hamel but easier to read. So that refers that Hamel is just like Hamel but less easier to read than Jade. So you can decide what that means. Mustache, Js. I don't use anything with mustaches so you guys are going to figure that out. And Js HTML is also a popular one and that is a razor view engine for Node. Also you have a lot of hosting options. Really easy ones too when it comes to working with Node. Node.jitsu. They have a fun IRC chat. That's a bonus. They're free which is also a huge bonus for now. I think they have an open beta now. So before you had to get a beta invite, I think now anyone can get work with Node.jitsu. It's super simple. Just like, does anyone use Heroku? Yeah, it's basically Heroku. The same type of thing with commands. Which brings us to Heroku. They do node hosting as well. They are bigger. Sometimes that is a lot better because it means they're probably going to be around for a while and they care about their documentation and they have the people to support you. So Heroku is probably more dependable. You can set up your own servers which gives you more control. It makes you cooler if you do that and you're not just doing deploy kind of. It means you have more spare time. But it does give you more control over your server. So you're up to the defaults. There's Windows. Just kidding. Has anyone done node hosting on Windows? So I haven't talked to anyone that has. But apparently there is node hosting for Windows. I wouldn't go in that direction because that is up to you. I'm sure you're so encouraged to do it now. So routing. So if you don't do server side development, routing is telling your application what page to serve up. And it's kind of in the controller. So you need a routing engine in Node. Well, you don't have to have it. Here are your options for routing in Node. Express is what everyone uses. Express has is a multi-layer. They do have a lot of libraries. They do routing and authentication. And they do a lot of stuff. So Express is super easy to add. It's just npm install express. Node also has baked-in routing that you can use for simple routing. Used to use the URL library that's included. Do URL.parse. Flatiron. I haven't used it. I've heard really good things about it. And they say it is simple, decoupled and unobtrusive, which sounds wonderful. But that's what they say it was on their site. So you see what you hear. So one detail that's important to the story is that we started at 12 noon. So the whole team got together at 12 noon on Saturday. Those were 11.30 am on Sunday. So it's 7 pm. Our illustrations are done. Here's one of our illustrations. How cute is that octo cat? So cute. They're beautiful. And so at that point, while I was working on the stuff like the Rowdy and things like that, I had Tina, our illustrator, turn them into SVGs. Turn all the features into SVGs so we could have those for layers. And then I moved on to the database. And I heard ODM for the first time last week. You heard ODM? So I'm used to saying ORM. And I was even saying like ORM when it came to Mongo and things like that. 
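For the Express option mentioned above, the routing boilerplate is only a few lines. This is a generic sketch, not the project's actual routes.

```javascript
// Minimal Express routing sketch: npm install express, then map URLs to handlers.
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('home page');
});

app.get('/photos/new', (req, res) => {
  res.send('photo upload form');
});

app.listen(3000);
```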
And someone said ODM last week and I was like, ODM. Oh, I guess because they're not relational. That makes sense, right? But then I heard that for the first time. So Node allows you to use really any database that you want. I usually use Mongo because it's so easy. And they have an official driver for Mongo, which I like better, before there was just something called Mongoose. And the image mapping for Mongoose was a little collogy when it came to like lazy loading and child objects and stuff. So I like that when Mongo came out with their official ODM, that I went with that, that one's pretty great. But you can literally use anything, Postgres, MySQL, MSSQL, all that stuff. So that's pretty great. And they have, so if you go on the giant site, they have the drivers for these right there. And most of them are the official drivers made by the people making the database, which is always good. So I was also making my objects and entities. I wanted to put one of them up here so you can see kind of what it looked like. In here I'm using Mongoose. This is before the official Mongo mapper came out. But this is very familiar JavaScript, right? So I'm making a new object called cat schema. And it has a lot of values in it. Everything is a string. I don't know if you notice that. But there's no idea or anything because Mongoose takes care of that for you. It makes you need, well, Mongoose takes care of that for you. It gives you unique ideas to all of your objects. But here's, this is kind of the idea I had going, right? So there would be a cat and cat would have different features. It would have glasses and glasses, yes or no. It might have hair. It would need a hair color, right? And eyes and eye color and mouth. This is what the cat was going to look like. So based on what you look like, it was important for me to get things like skin color and stuff like that. But for now, I was just going to work from here. And this is where we got started. So here, so I don't know, here's another thing we should learn about, modules. So you can write, you know, the libraries that people are writing, you can do the same. And it means that you're just building a module and node and you can reference the module you've built using the exports keyword. So right here, I'm using the exports keyword to do some routing for a page called new under, in a folder called photos. So this is, when I'm generating a new photo, this is what I do. And photo is another object. So I just wanted to give you guys an idea as to the syntax. There's not going to be, there may be time for coding today. So we might see some stuff. But exports means I'm calling a module from outside my file that I'm currently working in. And then we do, so I was also working on authentication. This is where things started to get a little, went to start to go a little wrong, right? So I decided that I want to do authentication and what kind of authentication would we do. Obviously people would not be allowed in the site unless they get authenticated with the GitHub. So this is actually a battle, right, with our team. Because they were like, well, people should be able to get an Octocat even if they don't have a GitHub. I was like, no way, you know, we need to make it so people that have GitHub get an Octocat. And then we can find out, we can find out what their primary language is. And then we can have images that go with their languages. 
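Here is a sketch of the cat schema she describes: every feature a plain string, and no explicit id field because Mongoose adds _id automatically. The exact field names are not shown in the transcript, so treat these as illustrative.

```javascript
// Sketch of the "cat schema": plain string fields for each feature,
// with Mongoose supplying the _id automatically.
const mongoose = require('mongoose');   // npm install mongoose

const catSchema = new mongoose.Schema({
  glasses:   String,   // 'yes' / 'no'
  hair:      String,
  hairColor: String,
  eyes:      String,
  eyeColor:  String,
  mouth:     String
});

module.exports = mongoose.model('Cat', catSchema);
```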
So it's like, Tina, work on Python, get a Ruby, we'll find out what people are working on most and then we'll add that to their Octocat. Right, so you can get a really personalized Octocat. So I use something called EveryAuth, which is exactly like it sounds. You can authenticate with anything. It's baked in, so it's, you know, Facebook, Google, GitHub, LinkedIn, all of the things. It's all baked in and they make it really easy. So I definitely endorse EveryAuth. Back to something that's baked into Express. They also do authentication. Most of your routing packages and most packages that are there to just assist you in using Node have authentication. Okay, so now we're at 12 AM and our designs are done and they're ready to be implemented. And they were super good looking. How cute is that? Also, I bought Octocat.me. It was our URL. How great is that? Right? And this is what it looks like. This is our little Octocat and it has a machine and it's really awesome looking. So I was super excited at this point. I was like, yes, we're so winning this thing. And so it was time for the people working on the design to get going with turning the designs into jade templates. So that's what I had. That's what I had the design team working on. We were four people. I like how there were teams. So this is just what a jade template looks like. And like you can see, it's white space sensitive. It's pretty familiar to those of you used to templating engines. You can add properties in Perenn and different things like that and do nesting with tabs. So it's pretty, the good thing about the templating engines is accessing data. You can pass objects into the engine and then it's as simple as doing your object name dot property whenever you want to reference it. So that's what I appreciate about the templating engines. Otherwise, you can use traditional HTML CSS and pass in the data through JSON and then populate it in client side JavaScript. So at this point, I'm working on the photo upload and also getting data back from the face.com API. So here we have my API keys. Don't write these down. And I was also using, do you know showoff.io? Have you ever seen showoff.io? It's actually a pretty neat tool. You can do this yourself, but they make this really easy for opening up your local host to use as a remote server. So in order to develop locally, I was using showoff to host my image. So that's a cool tool for any of you who do web development. I think it's like a dollar to use it for a day and $5 to use it for a month. So that's a great price point. So I was making the API call to face.com and it returns back great information like the gender and the mood of the person. I tried to work that as well. I had a smile and a frown and all this stuff. It gives you whether they're wearing glasses, but not all the information I wanted. So the gender and whether or not someone is wearing glasses is the only thing that's very effectual for our Octocats. And that's not enough. I didn't think that was enough to know about people in general. So I had the photo upline done. And I was, so we have the designs done, with the illustrations done, people are working on the Jade templates. I've got the photo uploading, the authentication done. And looking back at this, I'm like, wow, I did a lot in a really short period of time. Like it was like 12 hours. So we had the photo upload, the authentication, returning data from face.com. So at least halfway there. 
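A tiny sketch of that data-passing idea: hand the template an object when rendering, and refer to it as object dot property inside the view. The route, view name, and locals here are invented, and it assumes an Express app like the one sketched earlier.

```javascript
// Sketch of passing data into a Jade view: whatever you hand to res.render
// is available in the template by name (e.g. photo.path).
app.get('/photos/:id', (req, res) => {
  res.render('photos/show', {
    photo: { path: '/images/uploaded-947.jpg' }   // illustrative local
  });
});

// In views/photos/show.jade you would then write something like:
//   img(src=photo.path)
```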
So we're posting ready, routing is done, the database is set up, authentication is working, the API is being called, and the views are being implemented by people that are learning how to use ViewEngines as we speak. They did an amazing job. So what I wanted to do next was get skin and hair color, right? Because if I get skin and hair color, that would be amazing. And then we could put hair on people. And then I even, I was going to do lengths of hair based, because gender is not a Boolean. It's returned to you as a percentage value. So it's like if people have a higher female percentage, I'm going to give them longer hair. That's a weird gender indication. But I thought I would work with it. So there is, so getting the color of an individual pixel, which is what I plan on doing because the face.com API, some of the tags they give you is like the location of the eyes and the location of the quarter of the mouth. So I was like, okay, I'll just go in the middle of those two things. I'll grab that pixel. I'll grab the color. The chance of error is pretty wide because there's something in the way of my face. I'm going to get that color. But at least we'll be close enough to get the color of people's faces. So I looked into all of this. This took quite a few hours. There's a couple node libraries. I actually, when I was putting this talk together, I found a whole bunch more. The community is growing like crazy. I was like, wow, I wish I had seen this then. But the graphics magic was one I played with for a while. There's a lot of dependencies that I had to install. And I was just seeing they're doing like npm blank. Npm, just like, it would be like, you're lacking this. Okay, install, install, install. And then that didn't work out. So I went to something called node box. And then something you can do is if you make an HTML5 canvas, you can pass it a pixel and you can get the color of it. At that point, when I started to do that, it was 3 AM. I was exhausted. And I did not get far with that at all. So I finally got to the point where I'm tired. I'm not getting skin in here color. This is it. And at 4 AM, this is the number one image when I searched for panic. This is the band panning at the disco. I didn't know if anyone would recognize them. I feel glad that no one did. So at 4 AM, I was in full scale panic. I looked at the team and I was like, just so you know, we're not getting, this is not going to get done. We should just quit right now. This is not getting finished. I know because right now I have six hours to do all the stuff that's left and it's not possible. And of course they looked at me and they're like, no, you're great. Come on, you can so do this. We're great. We're most of the way there. We've been working for so long. Let's get this together. You know, Sarah, it's going to be fine. I don't think you understand, this cannot be physically done by humans. And they were like, no, totally. We were so close. We're almost there. I was like, all right, you guys, I'm going to keep trying. So the next challenge that I ran into is the fact that I need to store color and superimpose these SVG files on top of each other. Three things I have not done before. So I had gone into this like there are some big holes, and we'll do reflection at the end of the story, but going into this there were some big holes in where I didn't know what I was going to do. I just figured there was a way to do it. So this is another case of this. So I didn't know how to do this. 
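For the canvas route she mentions (draw the image, then read a single pixel), the browser API looks roughly like the sketch below; in Node you would need an extra package such as node-canvas to get a canvas at all. This is a generic example, not her code.

```javascript
// Browser-side sketch of sampling one pixel's colour from an image,
// for example a point between the eyes returned by the face API.
function pixelColorAt(img, x, y) {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  const [r, g, b] = ctx.getImageData(x, y, 1, 1).data;
  return { r, g, b };               // rough skin or hair colour sample
}
```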
I had done this before, but I was going to figure it out and it was going to be fine. So there's a couple. There's something called SVG edit that works with Node doing image editing. D3 is another one. There's something called Rafael. I don't know if you've heard of Rafael before, but they do it's a client side and you can use the server side as well. Image manipulation library for SVG files. So that was something that was good. Something I ran into was the fact that SVG files are really big. So at first I was trying to store the text of the SVG file in JavaScript objects and calling them, but that was messy. Because if you got one, if you accidentally hit a key, the whole thing was screwed up and you couldn't really figure out where. So then I was like, all right, well, I'll store them in Mongo then. I'll throw them in the database because then you can't really mess with them. You can just call them and you're not going to really overwrite them. But then I figured someone had to sit there, either someone had to sit there and do insert statements, which I'm the only person that knows how to do that. So that's not going to happen. We don't have time for that. Or I have to build an admin screen for someone else to enter in the SVG file text. So I went down that path for a little while and that didn't work out. I was also going to scrape the data from inside the SVG file and return it that way. But that was also difficult because for a couple different reasons, mostly because of the HTML encoding. That's what I ran into an issue with. So at this point, and then I was thinking about storing it, you just in a page and note it and doing it that way. And that went for a while. And then we reached the 11 AM point where we were at a full scale. It's the worst thing, right? Because the rest of the team totally pulled their weight and they worked hard. They were up all night doing these jade templates, something they'd never done before. I felt like being in a battle. I felt like in general in battle where half your army dies. You're like, sorry guys, I'm super bummed. I don't think any general sounds that way. I don't think that's ever happened. But you look at your team and they're half asleep and they've been working so hard and they want you to get it done so bad because they worked so hard making something wonderful and you just don't have it. It's not there. And we got to a point where we were just trying to return something on a boot. I was like, okay. Because hackathons, usually the demos for hackathons are pretty collugy. I was really disappointed because I wanted this amazing application where people were like, oh my goodness, this is the coolest thing I've ever seen. But usually hackathons are thrown together. And if people have only something half done, it's fine. So at the last minute, I was like, all right, I'm going to do it just so it gives me a gender. We'll have two cats. One will be the boy cat, one will be the girl cat. It's enough for proof of concept. We'll put it in there. And I was so tired and I had nothing left in me and I couldn't get that to work. And so at 11.30 when the demos were, I just quit. I quit. And the saddest part of all of it was my team went and watched the demos. So they were just sitting there after having worked all night and they didn't get it done. They were great and they were awesome to work with and I've worked with them at Hackathon since so they don't hate me. 
But I think it was a really good learning experience because I'll let you take away, extrapolate from what you want. I think the biggest thing I learned was to not have more than one undefined when you have a limited period of time. So my rule now is if I'm learning something new, I only learn one thing that's new. And the rest of the thing, unless I have an unlimited amount of time, the rest of the time I go with something that's familiar. So rather than doing a node application with SVG manipulation and layering and all these things in a weekend, possibly either finding someone that had done that before would have been good or doing more research into actually playing with SVG manipulation before the weekend. Getting something that worked, maybe it's not that so it's not cheating, but knowing what I'm doing while I was going in the weekend, it was a big thing. So I think that was super important. But while we're here, so that's the end of the story. While we're here, I forgot to turn off all the things. We're going to get Skype messages and everything. So while we're here, I figured we could take a look at Octocat. I have to plug in as well. It's not looking good. There's a repository. If anyone feels like looking at it and working on it. I figured we could take a look at the code base. Also I'm going to turn, there we go. Sorry. Okay. So this is what our solution looks like. And to go over some of the folders that we're looking at, and we're going to look at some of the important files that you have when you're doing a node application. So this is my server.js. I do a lot of my config settings in here. I can do my routing in here as well if I have simple routing. A lot of this code was written while in a partially comatose state. So I apologize if there's anything weird. I tried to do my best though. That's my every-off stuff. I'm not going to go line by line. But a lot of the config settings for your application you're going to have in your server.js file. So things like my view engine is jade, where to look for my image files, where, what kind of, so we're using CSS less, which is a lovely thing. And node allows you to use it easily. Where to look for my views, different things like that. So these are a lot of the settings that I, that you have in your application. So that's an important file to have. Another super important file is your package.json. This is basically your manifest file. It tells your, it tells the server what packages you are using. So in this case you can see, can you see in the back? Okay. So in this case you can see I'm using express and jade and mongoose. And I installed them locally using, so once I have this file. Which happens when I, so what, when I'm using a package I do an npm install. So I do npm install package name. And the package name and the version number automatically gets put in my package.json file. And then on the server when I just do npm install, it installs these packages. That makes things super, super simple. I have my models. There's a lot of models in here. I went a lot of different directions there. And, and my routes. And my routes, just to kind of take a look, you've seen some of this. My controllers, I'm sending certain information out to my views. So then we can look in my views folder. And, and that's where, so this is a photo upload page. You can see jade is pretty simple. It's just white space sensitive with nesting. We already took a look at this before a little bit. 
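To make the server.js and package.json walkthrough concrete: package.json lists the dependencies (express, jade, mongoose in her case) so that a plain npm install restores them, and server.js carries the application settings she points at. Below is a stripped-down sketch with illustrative paths, not her original file.

```javascript
// server.js sketch: the kind of config she walks through, i.e. which view
// engine to use, where the views live, and where static assets (images,
// stylesheets, client-side JS) are served from.
const express = require('express');
const path = require('path');
const app = express();

app.set('view engine', 'jade');                            // Jade templates
app.set('views', path.join(__dirname, 'views'));           // where to find views
app.use(express.static(path.join(__dirname, 'public')));   // /public assets

app.listen(3000);
```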
But, but it makes it, makes it relatively easy to do HTML templating. And then I have my public folder, which is where I keep all my assets. So public is where I have my server side job, sorry, client side JavaScript, my images, my style sheets and stuff like that. So let's take a look. I'm going to start up, show three. So that will, this is showoff.io. It's just going to open up my, my ports. Oh, how is that possible? Yes. Okay. And then in order to start my local server, I just do node server. Must have changed something. Hold on. Okay. What file is it telling me? Module. That's unfortunate. Let's do an npm update. Oh, that's a bit. Sorry, I promise this was just working before. I must have changed something. That's a bummer. Well, I think I can go to, I have that solution online as well. So let's try that. So it should be. Okay. So this is where it's running online. So, so I have a little occat me guy. And I can sign in with GitHub. I haven't tested this remotely. I'm sorry. All right. So we're going to have to debug this issue. It's cannot read. So I must have done something in package that Jason. I'm in the right folder. Oh, you know what it is? I'm not in the right folder. There we go. That was a scary one. I thought we're going to be debugging for the next 15 minutes, which is a super fun talk. All right. Okay. So now I have my GitHub off. And the worst part of it is I worked on the GitHub off for sole. It really doesn't give me anything but locking out people that don't have a GitHub. So that wasn't very valuable. So you just go in and you upload your own head shot. I have this fabulous one that doesn't look like me anymore. It's different hair. And then I hit create. And then nothing happens, which is what we're going to work on right now, because this is how far I got, right? So let's take a look at I should plug in because there's 15 minutes left and I have 13 minutes of battery. This is one of those things that's going to change quickly. Sorry. Something you want to do first. All right. Great. Okay. So this is a photos file and I'm going to open up that view now. So views photos show is the page that we're looking at. So I can tell that by going here and it's photos and photos, the default is show. If I go to my routing, which we should take a look at, photos, routes, and it's show. So it'll take, it's the index, my apologies. So if you go look at my, it's a good thing I did that. So if you look at my routes and you go to the index, that'll be my default. The index is my default page. So that's why it's not photos index and that's set up in my server. So I'm going into photos.index. That may actually be false. My apologies. The whole server thing kind of got me flustered. So let me collect myself for a second. We're going to find this out super easily. Okay. So now I'm going to refresh the page. Okay, good. So that must be set up in my server.js. So it's, we're going to be photo. Take that alert out. It's sending me to the show.jade, which right now I don't really have anything in there. I have a body and all my HTML stuff, but nothing that I can reference. If you look under image, you'll see photo.pass is something is being passed and that's jade doing its rendering of an object that's being passed to the page. So photos and object being passed to the page and right now, if I go look at that page, so that image ID is photo hidden is this. So right now the photo that's being sent in is the photo that I uploaded, which is a photo. 
But let's see if we can, based on gender, pass in one of the cats, right? So we're going to take a baby step rather than doing the whole thing with the SVG layering and all those things. We're going to take a baby step and we're going to pass in one of the cats based on the gender, what the face.com API tells me. So we have 11 minutes and we're going to do it. It wasn't the lesson that I overcommitted myself. Okay. So I'm going to my routes for photos. And here's where photos.show came in. So here's my route. We've taken a look at this before. The face.com API is sending back data and information. What I'm going to do is I'm going to make photo, so just as a baby step, I'm going to do photo. And then what was the value? I'm just going to take a look back. For the photo, it was photo.path. So I'm going to make photo.path a photo that we have locally. So I have to look in the images folder. So I'm going to grab two photos. I have a boy cat here. This is a friendly looking boy cat. Is that popcorn? No, I think that's beer. None of these gender generalizations are alphabetical. So I'm going to grab these two photos. And I'm going to go to my back end folder. And I'm going to go into my public folder and images. I'm going to put them in there. So now I have my two pictures. And as you can see, when I upload an image, it's also getting thrown in here as well. You can see the 947, that image was uploaded to this folder. So I have girl.png and boy, what kind of file is this? Let's give it a, well, that's not. Okay. All right. So for now, I'm just going to do, should be able to do images, boy.png. For now, I'm just going to make it a boy regardless. So then photo saves. And then I'm going to go over to my photos and I'm going to go into the show. And then I'm going to take out the style here because it says don't display it. And I'm going to just image photo path and hopefully that path works out. So I'm going to save this. I'm going to restart my node server. Oh, this is stuff I'm printing out. So as you can see right here, this is all the information that I'm getting back from the face.com API. I'm printing out my console by doing a console log on the server side, which is super convenient. It acts just like on the browser side, sorry, client side. And it gives me all that awesome information, which is super useful. I really like this API a lot. So it can tell that I'm, it's 95% sure that I am a female. Great job. It's 95% sure I'm not wearing glasses. Awesome. It's pretty sure I'm smiling. But this is really good information. And if you have a lot of time, you could build something pretty cool. It gives you the yaw and the pitch of your face, I assume. So where your ear is, so that's cool stuff. Okay. So I have to restart my server because I've changed something in the controller. So that means I should restart my server. And then I'm going to go over to the page and I'm going to start over from the beginning. Okay. So I did something wrong. Okay. Sign in with GitHub. Okay. I don't know what happened there. So I'm going to upload a photo. I've got to have another one that I've already used. All right. I'm not going to go through my pictures directly. Okay. So I'm going to upload this picture. Do I have my local server running for show off? Okay. I have to start my... Okay. Let's try again. Third time's a charm. So start from the beginning. Oh, no. It doesn't like show off right now. Okay. What can I do? I can do... Get the local house going and then I'll just go to the third page. 
That's how great my... It should be photo. Can I just skip? Nope. No. I won't let me do it with that off. So one thing I can do... Nope. Is it going to stop? Strange. All right. Let's try it and hope that it works out. That is not... Yeah. All right. So now we have our boy cat that's working by default. We have four minutes and an iffy server. Let's see if we can throw an if statement in there based on gender. I don't know if I... Actually, I don't know if I want to risk it. I don't think I was a misery. So we have our boy cat showing. So that was step next. And so what's to come with Octocat.me? I don't know. I'm still working on it. I'm taking baby steps. I'd really love to put it out there sometime, so hopefully in the next few months. Because I think it's a sweet idea. It's just not a sweet idea of something to do in a weekend. So thank you guys. Thank you for coming. Can I answer any questions? All right. No questions. Okay. Well, thank you guys so much for coming. I appreciate it. And I hope you enjoy the rest of the conference. Thank you for having me in your lovely city. Thanks. Thank you.
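The if statement based on gender that she decides not to risk live would be a small addition to the photos route: face.com reports gender as a confidence percentage rather than a boolean, so the check is against that number. The attribute names below are a simplification invented for illustration, not the real face.com response shape.

```javascript
// Sketch of the step she ran out of time for: choose a cat image from the
// gender confidence in the face API result. Attribute names are invented
// placeholders, not the actual face.com schema.
function pickCatImage(faceAttributes) {
  const isFemale =
    faceAttributes.gender === 'female' && faceAttributes.confidence > 50;
  return isFemale ? '/images/girl.png' : '/images/boy.png';
}

// In the photos route, instead of hard-coding boy.png:
//   photo.path = pickCatImage(attributesFromFaceApi);
```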
Using JavaScript as a universal language for your web application sounds super nice, but how to get started? In this talk we will build our first node application together. Learn about common packages (and what npm is), as well as drivers and design patterns. We will learn how to send serverside JS clientside, how to write our own modules, and where to look for simple hosting. This isn't a "write a webserver in a few minutes" talk, we're going past that and walking out with a fully functional Node.js web application in 60 minutes.
10.5446/51036 (DOI)
Good morning. Welcome. Hope you had an interesting morning so far. And my intention is to mess with your minds a little bit this morning. So if you've had a little bit of an easy time, people talk to you about languages, technologies, and frameworks and that kind of stuff. Our focus today is going to be a lot more on the things that most people don't tend to talk about very much in software. About the runtime behavior behind the code that you write. The way the technologies that you take for granted actually do behave that may end up causing you problems. And the problems are the kinds of things that are really hard to find and really hard to address. So based on the title of this talk, Command Query is in Consistency, you can probably guess that it's going to relate a little bit to CQRS. So who's heard of CQRS before, Command Query Responsibility Segregation? Okay, that's just about everybody. Good. Who's already using CQRS on some kind of project? Okay, got maybe 50%. Now, whether you're using CQRS or not, what I tend to see as I travel around the world and work with clients is that people end up making the same kinds of mistakes repeatedly or more accurately there's the same symptoms the way that people make mistakes is very different each time. So most of it has to do with business logic. So with regards to business logic, I think it's fair to say that all of us in our systems have some kind of complicated business logic, but I want to talk about a specific kind. The kind that involves both commands and queries. So who has a part of their business logic sometimes has to look up some additional data, you need to query some additional data in your business logic? Yeah, okay, so that's just about 90% of you. And it turns out that in that area that when you have your most complicated business logic, which often gives the business the greatest value, that's where we actually have consistency problems that are hiding. And this is irrespective of whether you're using CQRS or not. Even if you're using a fully synchronous end-to-end architecture talking to a fully transactional relational database, these things can bite you. So a lot of times, you know, everybody's heard about this eventual consistency and, yeah, you can go use Mongo, it's WebScale and all those kinds of things, but a lot of organizations say, well, we'd rather keep it safe and stay with the things that we know. And what I'm here to tell you is that even when you're using the safe architectural choices of doing everything synchronous, even when you're using the safe technology choices like using SQL server or Oracle, even then sometimes you could end up with an inconsistent system. I don't want to tell you why and how that ends up playing out. Now, first of all, I want to talk a little bit around the background, you know, which type of world are we in now? So the always-on, perpetually working, you know, lots of users in parallel type of system has become more and more ubiquitous. Users expect to be able to access their data whenever they want, wherever they want, and it's not just each user by themselves. When you have a situation where you have users that are only operating on their own data, life is actually pretty simple. I call those types of systems, they're multi-single user systems. They're kind of like a multi-user system, but each user's kind of in their own little data and that's it. It's when users are able to touch each other's data that things start to get even more interesting. 
Now, unfortunately, when we look at not just the technologies that we're dealing with, but the programming practices and the paradigms, a lot of them are based on that same object-oriented thinking. Well, you write objects and you persist these objects in some kind of database and it works, or at least it works on your machine pretty well. Then you put it in production and some kind of problems happen. Usually the way that we address consistency concerns when we have multiple users operating on the same set of data is using things like optimistic concurrency. Who's heard of optimistic concurrency? Yes, just about everybody. Good. So we've got our traditional first-one-wins and last-one-wins concurrency, where first-one-wins means the first user that gets in gets their changes done, and the last one has to redo their changes. Sometimes this gets even more interesting when it's a user racing against themselves, when the user is able to do things relatively quickly. So, let's jump right into code. The most traditional and innocuous type of domain object that you've probably seen: the order object. Everybody, when giving some sort of presentation, always reverts to retail, and here we've got an order object. Now, this order object is implementing some, I wouldn't call this complicated business logic, but we've got a new business requirement that says, as a part of processing a new order, we want to decide under which conditions a customer is going to be getting a discount. Now, if this customer, the customer that is submitting this order, in the past week did more than 250 dollars, euros, pounds, Norwegian kroner, Swedish kronor, Danish kroner, whatever your currency of choice is, then they get a discount. And if not, they don't. Your average developer looks at the requirement, implements this code, doesn't really think twice, checks that it works okay, maybe writes a unit test or two, commits, and off it goes to production. Now, the question is, all right, so what's so bad about this code? Nice, clean, object-oriented, domain-driven, we can make it test-driven too, you know, we've got all the little best-practice checkboxes ticked. The area where things become a little bit tricky is what happens if, I don't want to say two users, but the customer in essence is an account and we can have multiple users from the same account buying stuff on the same account. So you have two users at the same time, both of them submitting an order. All right, so imagine two threads going through that logic at the same time where the last week of orders was $200, and each of them is submitting a new $100 order. Both threads go down to the database, take a look at all of the existing orders and see, oh no, we've only got $200 worth of orders. Both of them return false, right? Whereas if these users had just made their requests a couple of seconds apart, then one of them would have gotten a discount. Now, that's not a very nice thing to do when you think about it. I mean, ultimately, we are penalizing our users for this type of behavior, for purchasing too quickly. Now, before I jump into sort of the bigger solutions, I want to start with how we ended up here to begin with. So this idea about processing these types of things in real time: before that, we had the good old solutions, the ones that, you know, anytime you needed to do any kind of data crunching, what did you do?
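The slide he describes is C# on an Order object; a rough JavaScript rendering of the same innocent-looking logic makes the race easier to see. The database helpers here are invented placeholders.

```javascript
// Sketch of the naive discount check and the race it hides. Two requests
// running this at the same time both read the same $200 history, both decide
// "no discount", and both insert, even though together they cross 250.
// db.sumOrdersSince and db.insertOrder are invented placeholder helpers.
async function submitOrder(db, customerId, amount) {
  const lastWeekTotal = await db.sumOrdersSince(customerId, daysAgo(7));
  const discount = lastWeekTotal > 250 ? 0.1 : 0;
  await db.insertOrder(customerId, amount, discount);
}

function daysAgo(n) {
  return new Date(Date.now() - n * 24 * 60 * 60 * 1000);
}
```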
You wrote a batch job, something that would run, you got your nightly batch, somebody comes along, yanks a lever and the machine starts humming. And off it goes to the database and calculates, calculates, calculates, and does a whole bunch of stuff. Here's a code that might look like something like that. So we've got some customer object that has a method that's invoked in the nightly batch. And what it's doing is fairly simple. It's just, you know, going and looking up these types of orders and then saving some sort of flag on the entity. Saying, all right, you know, this customer now should be given a discount. The problem with that is that we have this sort of reset problem. Say, well, a week, you know, how exactly are we counting that? And then in terms of the order logic, the order logic is simpler. Now, we just check that give discount. Sorry, it's not this, it's a customer dot give discount. So this is how we used to approach these types of logic. The, you know, anytime you need to do a big historical type of data crunching, you did that offline, you stored the results in some sort of Boolean values or other rolled up aggregate values. And then in the real-time environment, you just check those values. The problem with this, and it was a real business problem, is that once again, if we had two users coming to us, in this case, it's not clicking the button at the same time. In this case, it's actually clicking the button on the same day. If we have two customers or two users from the same account coming to our site, both of them submitting an order, then ultimately we're saying, no, sorry, if you want to get a discount, come back tomorrow. You realize how ridiculous that sounds. If you were a customer saying, no, if you make your purchase today, you don't get a discount, but if you come back tomorrow, I'll give you a discount. So it's understandable why businesses all over the world are moving away from these type of batch jobs. It doesn't work out that well. And that leads us back to this type of logic. Now, this shortens down the problem, at least with regards to the discounts. The bigger issue is that once again, we have a consistency problem. Now, just framed another way, we've got a race condition. We've got the ability for two users to be invoking some logic that when that logic happens at the same time, it ends up making a wrong decision. I want to talk a little bit about why that happens, because a lot of developers, they've been lulled into this sense of complacency with transactions. In other words, we know all this code over here is running in a transaction. Our belief is, assuming we're using a traditional transactional database, that all the code that's in there is going to come up with the correct results. The thing is that at a database level, and DBAs are good at thinking about these types of things, when they start talking about consistency and isolation levels, they pull it out and start thinking, okay, so we've got this transaction is touching this entity and that transaction touching those entities. When you have multiple transactions that can run at the same time, reading entities and writing entities, then the question of isolation comes up. And so in this case, we have a very simple scenario. We have transactions one and three that are operating each of them on a specific entity. One of them is on entity A, the other one on entity B. 
Then we've got our third transaction, the one in the middle, that is reading some values as a part of its complicated business logic. It's reading the value of A, reading the value of B, doing some sort of calculation based on those values, and using that to write the value of entity C. Now, what happens from a database perspective, and I'm talking about a transactional database here, none of that eventually consistent garbage, the stuff that you assume gives you the ACID guarantees. The "I don't need to think about it" guarantees is another way of describing ACID. So the problem that we have here is that our transaction number two over here gets the value of entity A as it was before transaction number one started. It gets the value of entity B as it was before transaction number three started. It uses those stale values to come up with a value for entity C, and then writes that down. From the database perspective, this is perfectly reasonable behavior. Your assumption is: when I'm in transaction number two, the end state of entity C will be consistent with the states of entities A and B, because I was in a transaction. The thing is that databases by default, and when I say by default I mean the vast majority of them, we're talking about SQL Server, Oracle, Postgres, behave this way unless you specifically tell the database: I want you to check versions, and not just the versions of the entities that I'm writing, because that's the stuff databases know how to do. I want you to also check the versions of the entities that I'm reading, known as multi-version concurrency checking. I want you to make my transaction fail if the value of entity A or the value of entity B changed during my transaction. Why? Because I require that entities A, B, and C be consistent with each other. Now, occasionally developers run into the setting, so anybody here using NHibernate? Yes? For those of you using NHibernate, are you familiar with the MVCC flag, the multi-version concurrency checking flag in NHibernate? No? Maybe one of you? Now, it's a flag that occasionally a developer, when spelunking around in NHibernate, will find. This is, in a sense, telling NHibernate: hey, NHibernate, I want you to check not just the entities that I'm writing, but the entities that I've read as well, for their versions. And if they're different, fail my transaction. Suddenly one developer will say, hey, I wonder what would happen if we turned this on? They turn it on, and all of a sudden a whole bunch of their transactions start to fail. You start getting a lot of exceptions in the log. They're like, oh no, no, no, no, turn that off again. Without realizing that that's feedback, saying, hey, look, you actually have concurrency problems in your system. But like most developers, we don't like exceptions, and we'd much rather pretend that they're not there than actually deal with them head on. And that's the problem. Dealing with parallelism can be quite tricky. It's an optical illusion, these transactions. It looks like they're not parallel, but they are. So just so that your eyes and head don't explode while I'm talking, I'm going to shut that off. All right? When dealing with parallelism, we need to start thinking through these scenarios. We need to start dealing with the fact that we're not just building multi-single-user systems anymore.
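Very roughly, the check-what-you-read idea looks like this in code: remember the version of every entity the transaction read, and make the commit fail if any of them has moved. The data-access API below is an invented placeholder; in practice this is the NHibernate setting he mentions, or snapshot and serializable isolation levels in the database.

```javascript
// Sketch of multi-version concurrency checking: the write to C only commits
// if A and B still have the versions we read. db/tx methods are invented
// placeholders for whatever data access is in use.
async function writeCFromAB(db) {
  const a = await db.load('A');               // { value, version }
  const b = await db.load('B');
  const newC = a.value + b.value;             // stand-in for the real business rule

  await db.transaction(async (tx) => {
    await tx.assertUnchanged('A', a.version); // fail if A moved since the read
    await tx.assertUnchanged('B', b.version); // fail if B moved since the read
    await tx.save('C', newC);
  });
}
```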
The bigger the domain model that we build, and that's the challenge, that when going to build domain models that have to support multiple users operating in parallel on a common set of data, the traditional domain model approach, you know, it doesn't always work so well. They're not necessarily safe for parallel execution. So that's a problem. You know, when people say, you know, I'm going to be doing domain-driven design, and then they create a whole bunch of entities and one to many relationships over here and many to many relationships over there, the belief is I'm doing domain-driven design. And I hear this all the time, both in the discussion groups and I see it also with customers, say, well, customers, my aggregate root and orders, my aggregate root and product is my aggregate root and they've got aggregate roots all over the place. But the interesting thing about domain models, at least as defined in the literature, is that an aggregate root is a consistency boundary. Remember hearing that, an aggregate root is a consistency boundary? Yeah. It's an important thing. That's actually the main thing about an aggregate root. However, when we look at what we just saw, that if you have transactions that can touch multiple entities and these transactions are running at the same time, you might not actually have a consistency boundary, in which case you might not really have aggregate roots. Aggregate roots are not a given. You have to work hard and understand the behavior of your domain in order to come up with a domain model that will have real aggregate roots. And that's a problem, not only for your traditional n-tier types of systems, also for the people doing CQRS. A lot of times when they're going to do CQRS, they're still applying these same styles of domain models. And that causes a problem. And the issue is that mistakes in this area can really be quite costly. The reason is it's not just the bug. It's the fact that you don't know how many things it's influencing. Is this influencing million-dollar orders or is this influencing thousand-dollar orders? Are we giving customers too much of a discount, too little of a discount? You really don't want to fudge around when it comes to consistency. So this is worse than a system losing data. This is system garbaging up its data. It's getting it into an inconsistent state. And the problem is that, well, most testing doesn't uncover it. I mean, when you look at it, if your users say, hey, look, I think the system made a wrong decision and we start debugging it, it can be really hard to find those bugs, right? I mean, how many tests you need to start doing parallel testing? And you need to find some way to assert the right state. This is hard stuff. So this is one of the areas when dealing with these types of things, we really need to start designing in the consistency up front. I'm all for the agile mantra of staying away from big design up front. But there are certain principles that you kind of got to get right. You got to have a good, strong foundation. Otherwise, we end up with systems that are defective by design. This is a wonderful website, by the way, defectivebydesign.org. You know, all sorts of things that kind of makes you wonder whoever designed them what they were thinking. Unfortunately, a lot of our traditional domain models in a parallel collaborative type of world are defective by design. Thing is, we can do better. We can do a whole lot better. 
But it requires us to think about software and think about requirements in a slightly different way. So to sum up this new way of thinking, think about it like back to the future, okay? Those wonderful old movies that you saw when you were younger that were really cool. Has anybody rewatched one of the back to... Didn't it really suck? Like the second... You're like, I can't believe I really like that. Always be careful with revisiting the nostalgia. Sometimes it's better to leave it as a pleasant distant memory. But the hinge in dealing with software and parallel type of software is exactly that. It's that concept of time. It's rethinking the arrow of time and how we program with it. So the traditional requirement that we said was, all right, we've got this stream of orders that are coming into our system. And whenever we get an order, we look back in time, seven days, or whatever the requirement is, sum up the total value of orders in that period of time. And if the value is greater than 200 and somewhat dollars, then they get a discount. And we mentioned the problem was if we have two things happening at the same time, we can end up with an inconsistent value. We can achieve the same result instead of looking backwards in time, looking forwards in time, which seems a little bit unusual. How can you predict the future? The thing is, you can't predict the future, but you can fold the future into your programming now. And let me explain how you do that logically. So when we get an order, ultimately what we're interested in is the last seven day total. Instead of calculating that, you know, at the time of the order, we can turn that around and say, actually, let's introduce that concept into our domain itself. A customer has a seven day running total of orders. Because ultimately that's what the business is telling us. It's talking about the last seven days. What we're doing is we're just making that explicit. So when we get a hundred dollar order, what we're saying is, all right, the seven day running total is a hundred dollars, but that needs to be decremented seven days in the future. In essence, we want to send data to ourselves seven days from now saying, you know, you need to decrement the seven day running total seven days from now. And as we get this stream of orders that are coming in, we say, okay, now we increment it by $200 now, and we throw information into the future and say, decrement it by another hundred dollars then. And as we keep on going, you know, aggregate the data that we need so that we have a current running total. That's telling us, okay, now it's $300, now it's $400, et cetera. And as we catch up to ourselves in time, then we need to have that kind of wake up call saying, hey, look, it's been seven days since order number one. Say, oh, okay, then what I need to do is to decrement it down to 200. Okay, it's been seven days since order number two. All right, let's decrement that one down as well. The difference here is that, like I said, instead of looking back in time, we look forward in time. So as we're modeling this, as new orders are coming in, we're doing that same type of behavior over and over and over again. So that when we finally arrive at a specific order, we're not actually doing a query. We're not doing a historical lookup. But rather, we've folded the history into our current state. But we've done that in a highly consistent way. 
The reason that we've done it in a highly consistent way is that what we've done is we've in essence created, let's call it an upside down batch job. So remember the code that we have with our batch job where we said, no, over here, we're going to have a customer that has some sort of give discount equals true value. We rolled up some state and we put it in the customer. We're doing the same thing here. We're rolling up some state and putting it in the customer. The important thing is that when we're dealing with a customer or something like that, what we can say is, well, now we have a specific entity that has specific data. This is one row in the database. Databases can guarantee us full transactional consistency at the level of a single row. In other words, if we have two things that are happening at the same time, one of them wants to submit a new order and another one is dealing with a timeout, then ultimately that bit is going to lock on that single row. That in essence is the trick of an aggregate route. To design your logic in such a way that all transactions are touching just a single row, a single entity. However, in terms of programming, this requires something that we don't really have. We need a way to program time. And you know, system threading timer, not such a good idea, is it? The problem with those types of in-memory timers is that if our machine crashes, well, it won't remember that it needs to decrement the value in seven days. So we need a highly reliable, durable transactional way of dealing with time so that no matter what happens in our system, we won't lose that information. Once we have all of this kind of stuff, then we'll be in a situation where we can say, all right, now we can build real aggregate routes. So I'm going to switch over to code now to talk about how we actually build these types of things. It's not going to look like your average domain model. Let me tell you that. It's not going to have one-to-many relationships. What this is going to be building on is a pattern that in service of us we call the Saga. But this comes from the Saga ideas ultimately taking what we were talking about here, saying what we have over here is a long-running process. We need a transactionally consistent, stateful, long-running process that can be managed by time. The Saga pattern was discovered, documented in by the database community as they were dealing with what's known as long-lived transactions. So they were trying to address the same kind of issues, but more from a data up perspective. What we're now looking at it from a domain-driven design perspective is a behavior down. So it's not a, you know, by the book Saga definition, but it follows a lot of those same principles. And within service, as we try to make it easier to program these types of things, because like I said, number one, well, time isn't something that we have a very good way of programming, either in.NET or in Java or in Ruby or in Python. It requires a little bit of infrastructure and also requires us to deal with the issue of, well, the consistency. What happens if I have a timeout being processed at the same time as a new order is coming in? We need to make sure that those locks hold up and that's why they have to go on the same entity. All right? So when dealing with this, I'm not going to spend too much time talking about the infrastructure. A lot of the ways that Saga's behave with regard to an infrastructure perspective are documented and you can see it online. 
I want to talk more about the behavioral modeling side of it. Okay? So what I'm going to do, I'm going to switch over to code, like I said, but to make this a little bit more interesting, what we're going to do is, so here we've got sort of the basic level of just a simple project, we're going to start writing this in almost a test-driven fashion. It's not going to be entirely test-driven because I required just a little bit of stuff to set up so you can see how this is going to play out. So we're going to start off with the message, the thing that's actually triggering our behavior. We said, well, we've got orders that are coming in, these orders have a total, and ultimately we need to make sure that all of the order totals, as they are correlated by customer, that we calculate the seven-day running total. Okay? So I'm going to set up a couple of messages so that we actually have something to deal with over here. So here we have an order accepted message, and this message has a customer ID so that, as I mentioned before, we want to make sure that we aggregate the stuff based on customer ID. Each order will also have an order ID so that we can clearly identify it, and it has a total, which forgive me, I'll just use a double because decimals are a little bit ugly, I have to do lots of casts later on. All right? So we have double, and we have the order total. Now, in dealing with this type of saga, ultimately when we're going to model behavior, and we look at this behavior, you know what we had over here, let me zoom back up to the picture. Over, there it is. Oh my God, it disappeared on me. There it is. That when dealing with, we're saying we have some behavior on customer and we have some behavior on order. And we actually need to pull that behavior together. The total value that we're dealing with needs to be stored somewhere. So instead of talking about this as an order saga or a customer saga or naming it as a regular entity, I'm going to call this what the business people call it, this is a discount policy. All right? It's the policy with which we use to decide which customers get a discount and how much. Okay? Now, for this discount policy, we're going to have some discount policy data. Now, in Service Bus, we have this interface called IContainSagaData. And let me squeeze that over a little bit over there. Okay? So the IContainSagaData has some required field. It's not particularly problematic. So why is it complaining about this? Discount, of course, discount policy data. There we go. I don't want to talk too much about this type of information, but ultimately, an ID is something that we need to clearly identify this quote unquote entity. And the discount policy actually contains the behavior. So I'm just going to set up that saga representation saying that this is holding the discount policy data. What we have over here is a situation where we're saying we have a stream of messages, these order accepted events that are coming in and our saga wants to handle them. So if this is the first order that we've received for a customer, then we're going to actually be kicking off a new discount policy for this customer. Say I'm started by messages of, we call that order accepted. Okay? We're saying over here this discount policy for a specific customer instance, what we want to do is we want to take the order total and increment the seven-day running total of our customer. So we can say over here, let's create a public double seven-day running total. 
And when an order comes in, we say, okay, the data's seven-day running total plus equals message.total. And what we'd like to say is, and seven days from now, decrement it back down. Right? We want to take this message that we have right here. I say, you know what, play that to me seven days from now. So this is one of the things that we've introduced within service bus as a messaging framework. It's easy for us to hold onto data in various cues. Unlike with HTTP, HTTP doesn't give you a good way to put something on the shelf and get it back later. So you kind of have to, okay, we'll put it in a database and we'll have something pulling against the database and it's quite messy. Messaging systems allow us to model these types of behaviors easier. So one of the things that we can do over here is say, request a timeout, passing in a, well, when do you actually want the timeout? So here I say, well, I want a time span from days seven. And what I'd like to say is, the data that I want to get back is ultimately the message that I have right now. So I can take the message that I have over here and say, you know what, send me back that message in seven days. Where when the seven days are up, well, then I want to decrement the value back down. But as I mentioned, you know, this is sort of the basic stuff of setting it up. I'm not going to implement the logic of the handling of the timeouts just yet because I want to do that via tests. But I want to set up that extra little bit of infrastructure it says, and my discount policy needs to handle timeouts of the type of message order accepted. Okay? So over here what we're saying is, when I get a message back due to a time period being elapsed, now I'll do the rest of my logic. Now let's take some of the scenarios that we had in this picture and start writing some tests for it. Because as we know, when going to write domain models that have complicated business logic, and I got to tell you, once you start programming with time, you're going to want to be able to have some decent tests. Because if you don't have automated tests, imagine what your manual testing is going to be like. Somebody comes into the office Monday morning, presses a button, says, all right, see you next Monday, and is waiting until actually the seven-day timeout goes by. This is, in essence, the problem of doing full-blown integration tests when you actually have a clock that needs to be dealt with. By having a message-driven time model, we can simulate the passage of time with a message. So let's start writing some, I'm not sure I want to call them tests, because we're more looking at this as sort of a scenario. Given that, you know, you got message number one, message number two, message number three, then this is what should happen. Now one of the things that we said, and I want to go back to our original code requirements over here, said, well, then we need to actually give the customer a discount. So this discount policy ultimately needs to decide what message it's going to be sending out. So I just want to add that final bit over here, and Roy will forgive me for not doing everything test-driven, but we'll add that command over here that says, let's call it give discount, or sorry, discount order, and we have, that's actually not a discount, right? It's talking about processing an order that has a discount. So we have a process order message that can have a public, let's call it double, discount percent, okay? 
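Pulling the code being dictated here together, the sketch below is reconstructed from the narration using the NServiceBus saga API names of that era (Saga&lt;T&gt;, IContainSagaData, IAmStartedByMessages&lt;T&gt;, IHandleTimeouts&lt;T&gt;, RequestTimeout, Bus.Send), so treat the exact signatures and namespaces as approximate rather than authoritative; the correlation mapping that tells NServiceBus to find the saga by CustomerId, and the message marker interfaces, are omitted for brevity.

    using System;
    using NServiceBus;
    using NServiceBus.Saga;

    public class OrderAccepted
    {
        public Guid CustomerId { get; set; }
        public Guid OrderId { get; set; }
        public double Total { get; set; }
    }

    public class ProcessOrder
    {
        public double DiscountPercent { get; set; }
    }

    public class DiscountPolicyData : IContainSagaData
    {
        public Guid Id { get; set; }                  // required by the interface
        public string Originator { get; set; }
        public string OriginalMessageId { get; set; }

        public double SevenDayRunningTotal { get; set; }
    }

    public class DiscountPolicy : Saga<DiscountPolicyData>,
                                  IAmStartedByMessages<OrderAccepted>,
                                  IHandleTimeouts<OrderAccepted>
    {
        public void Handle(OrderAccepted message)
        {
            Data.SevenDayRunningTotal += message.Total;

            // "send me back this message in seven days"
            RequestTimeout(TimeSpan.FromDays(7), message);

            // the Bus.Send of a ProcessOrder, with or without a discount,
            // gets added next, driven out by the tests that follow
        }

        public void Timeout(OrderAccepted state)
        {
            // seven days have passed since this order was accepted; the decrement
            // of the running total goes here, again driven out by the tests below
        }
    }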
And that's ultimately what we want to be checking, that when a message is emitted of a process order that it has either a 10% discount or a 0% discount. All right? So let's start writing a test here. I'm not yet sure what to call it, because ultimately this is a scenario that is going to be playing out over time. So one of the things that we created within Service Bus was a nice way to do these types of message-driven, time-driven unit testing, because it wasn't really easy to do with just regular N unit, X unit, J unit, whatever. So we start off by initializing our test. So this is the test.initialize, and hopefully it'll identify that for me. Okay, so we've got, and I know what the problem is. That method, let's call it scenario. Confuses it. All right, so we've got N service bus,.testing.test.initialize. Now I can get rid of the beginning part. Hopefully. Let's try getting rid of that so we get shorter lines. Okay? And now I want to test my discount policy saga. And what I'd like to say is that when I give it the first order accepted message, it will emit a process order message without any discount. Okay? So I'm going to expect, ascend. In other words, this discount policy is going to be getting an order accepted message. I was going to say, you know what, this guy doesn't get a discount, it should just be a regular process order. So I expect a process order message that this message, its discount percent is equal to zero. When this message, when the saga receives the order accepted event. Okay? So I can say the saga is invoked here, so we have saga.handle, new, order accepted. And in here, let's pass in an order total so that we can actually take this to the next step. Alright? So we have a new order total where the total value equals, let's say that that's 100. Okay? So we've written our first test saying if you get an order accepted, you're going to send out a process order message without any discount whatsoever. Let's run that really quick and see that, well, it's not going to pass clearly because we haven't actually sent out a message. And there it tells us over here, expected send invocation, process order not fulfilled. Okay, so we need to do that. Let's go and make the test pass. Okay? So inside this logic over here, say when you get a handle, what we know that you need to do is send out a message. Alright? So we just do a bus.send process order message. And in this case, I'm just going to make the test, you know, this is the TDD style almost. I'm not going to try to be too smart, I'm just going to make it pass saying that the messages discount percent is zero. Okay? So let's run our test again. And this time it should pass. Hopefully fingers crossed. There we go. So far so good. Now we want to start saying, well, if I increase the value of this order, in other words, if this was a $300 order and then I gave you $100 order later, within the same week, then you will get a discount. So again, Roy will forgive me for actually just tacking this on because we don't have lots of time to get this done. So what I'm going to do is I'm going to start chaining the next order. So here I'm going to, and occasionally I like to do this, expect send of that same order, process order message. Only this time the discount percent should be 10%. If the total value of the first order was $300 and now I'm going to give you a new, let's say that it's not a $300 order, this one's going to be a $100 order. Okay? And now we can run this again and see that, all right? Now it fails again. Why? 
It's saying, well, the check evaluated false for that expectation. Clearly, we know we didn't do any type of tests in here to say that, well, you know, if the value, the 7-day running total, was greater than whatever, then we needed to do that. So let's quickly do that over here. We'll have a double discount percent equals zero if data dot 7-day running total is greater than, let's just say, or equal to 250, then the discount percentage is 10. Bus.send process order with the discount percent and now we'll run our test again. And this time, again, fingers crossed, hopefully it'll work. I'm not that good at this, you know, running the code at the same time. It says process order not called. What's that? Well, the first order hasn't timed out yet. Well, the problem that we had here was, all right, so we've put in a value that's 300, right? Total value has increased. We send in another order and our 7-day running total is what? It's very little code. I'm very happy that Andreas and yes, that was a plant. To have this a little bit interactive, right? The problem and I want to be upfront with you guys about this. Writing traditional domain models tends to be easy because we tend to think about them in a very single threaded fashion. No, we do this and then we do this and then we do this. It's very simple, straightforward logic. When we start dealing with time-driven processes, we need to start thinking about, well, what happens if things happen in parallel? So there are other concerns to deal with, okay? So ultimately, we have a little bit of a bug over here. What is it? Anybody see it? Anybody using Unservice Bus today? I'm curious. Yeah, okay, maybe 10% of you. The discount is applied to the first order. Well, ultimately what we're saying here is that when we get an order coming in, we increment the value. So what would be our 7-day running total? We passed in, remember our test over here, the total is 300 and then we pass in a total of 100, right? So ultimately our running total should already be how much? 400. Let's actually check that out, okay? So debug it. So the first time we run through this, we see our 7-day running total is indeed zero. That's great. We request a timeout, we calculate a percentage and we see that the 7-day running total is now 300. For the first one. In other words, the problem that we had here is that we incremented our 7-day running total too soon, right? In other words, the failure here is ultimately a failure saying, well, we expected a discount percentage of zero the first time and that's what we failed on, okay? So when I finish running this, let me go back over here, I expected send invocation and I'll just step through that and we can see the unit test session down here. It's important that we look at the calls that were made and at which point in time there was a failure. Over here all it's saying is calls made, a single order service dot process order message down here, okay? So in other words, it failed the first time we sent a process order, not on the second one. So always you will need to be looking at when doing these types of tests, not just a specific assert, but ultimately which one of these failed. So going back and fixing this, we've done almost everything that we need to do. The difference is that the incrementing of this value has to be done at the end, right? 
Where we say request a timeout from this message, calculate the discount based on the 7-day running total and then after you've calculated the discount, and it's probably better to have this bit done right above this. Now when we go and run our test, we'll see is that the first time that it goes through it says, okay, well we actually had zero, now the test passed and on the second one, well everything was great. Now let's add one, the important twist to this is time. We say, well what happens if time was up between these two orders? We got the first order for $300. We waited 7 days, it timed out and now we got $100 order. So this time, we should not actually get the discount. So this will be a, let's actually create a separate scenario over here. So I'll call this scenario two. It behaves very similarly. However, in addition to this when, what we're going to be doing, we're going to be doing here saying, and the saga times out. Okay? This method over here is doing a fairly intricate thing behind the scenes. Suffice it to say that what it does is it is watching the clock for you. So in your domain model, in your saga, you're saying, wake me up in 7 days and then wake me up in another 7 days and then wake me up in another 7 days, one after the other, what this when saga time out does is it replays time in the correct order for you. In other words, if you had one bit of code that said, well wake me up in 3 days and another bit of code that said, well wake me up in 7 days and another bit of code that ran later and said, wake me up in 5 days, say, well actually I'm going to replay you the timeouts in the order of 3, 5, 7 and not in the order that you open the timeouts. So this makes testing these things quite a bit easier. Now when we say this, okay, when the saga times out, in this case the discount percentage should again be 0, right? Because ultimately it's been 7 days and then when the next order comes in, the 7 day running total should again be 0. So now we can run this test and this is our scenario 2 test and this one we'll see should fail. That says, oh no, sorry, on the second invocation the check evaluated to false. You see that over there? So okay, so what we need to do is ultimately that final bit of our logic saying when time is up, this is when we need to decrement back down that state. So we got the data dot 7 day running total minus equals our state's total. Go back, run all of our tests, make sure that they pass now. Oh, we see that both of the tests pass. So the important thing here is actually two things. Number one, in dealing with our business logic, the logic itself was not extremely complex, right? In other words, when we compare the logic that we would have written in a traditional domain model, say, well, it kind of looked about the same length, right? Go to the repository, look up all of the 7 day history of the orders, calculate the sum total if the value is greater than this, then give this discount, if the value is less than that, then give that discount. So we haven't written a whole lot more logic. Another thing, while traditional domain models are also testable, who's written unit tests for their domain models? Yes? Good. Who's had to change their unit tests for their domain models more than once or twice? Okay. Who, if ever, got to the situation where they were wondering, what's the point of writing unit tests for these domain models if I keep having to rewrite them over and over again? Yeah? Some hands going up, wait, is Uncle Bob in the room? No? 
Okay, Roy, no, no Roy anywhere? All right, yeah, I can see. That's often been a problem that when we have domain models that change very often, a lot of times our unit tests end up having to change with them. Unfortunately, because a lot of the traditional domain model unit tests tend to be very coupled to the internals of our domain model structure. When we're dealing with this type of behavioral entity, the way that we test it, and you saw that over here, is we're talking about very much black box testing. I don't know how you're structured internally. I don't know that you actually have a property called data.7dayrunningtotal, and I don't care. All I know is if I give you these messages on this side, then you should be putting out those messages on that side. And as long as the discount policy requirement is the same, then those tests should remain the same. So we have pretty good testability. We have short code, object oriented, same as before. Main difference is the fact that we have time baked into this, where if the time out method is trying to run at the same time as the handle method, then regular optimistic concurrency blocks out one or the other, or causes one to fail, right? That's what happens when you have a conflict. So in your domain models, who's had an optimistic concurrency violation exception? Yes? Yeah. What do you do about it? Who logs it? Okay? You log, it's a best practice. Hint, whenever Udi says something's a best practice, it's not a best practice. Okay? So logging is great, but the question is, well, what do you do after that? So for all those of you who are running in Service Bus, the answer is simple. Well, we do nothing and Service Bus just retries it for us, and then it'll work the next time. For all those of you who are running traditional domain models without in Service Bus, when you have an optimistic concurrency violation exception, what do you do other than logging in? Who retries? Okay, we got about four or five hands going up. What happens if you crash? Well, we don't retry anymore because we don't remember that we actually got the exception to begin with. Kind of sucks, but hey, we didn't like that order anyway. So in dealing with these types of issues, its consistency, its reliability, it's all those elements that make up an aggregate root, a real DDD consistency boundary. It's making sure that you have the necessary infrastructure in there. So that if these two methods, the handle on the timeout, we're running on the same time, and here it doesn't matter if you're using a relational database or using a document database. Because we have modeled our behavior in such a way that it is a single element that we can have the locking behavior on top of it. So this is what ultimately gives us the consistency back again. As you see, dealing with time, it's not, in terms of programming, it's not lots of code. But for a lot of us, the hard part, and I just want to go back to this picture because it really is hard, these different perspectives of time, modeling this is something that is counterintuitive. It's something that we don't have a lot of practicing. So anytime that you have requirements, anytime you have a business expert that is coming to you saying, look, what I need you to do is to look at a historical query as a part of your business processing logic. 
Now, note that down, but understand that for you to understand the way that you're going to have to program this and the way that you're going to have to test this, you should really draw it out on a timeline. Sometimes it can be relatively simple like in this case, all we had was a single message type, a stream of order accepted events. In other cases, you may have multiple message types that are coming in. The more complex your logic, the more messages will be going in there. So I don't want to say always, but really, young, until you get proficient with looking at the world from a looking forward in time rather than a backward in time perspective, always draw things out on the timeline. And please, you know, write the tests. With sagas, with this types of logic, I'd say write the tests first. Sometimes it's too difficult to keep five different scenarios in your head. That is the challenge over here. So instead of saying like in a regular domain model, okay, well, we'll just implement the logic for each requirement and test it afterwards. And so, with sagas, what I recommend you do is draw a timeline and say, all right, we got message number one, message number two, timeout, message number three. What should be the behavior? Write a test. And I'd say that even more strongly, this shouldn't be test-driven development. Again, forgive me, Bob. This should be test first development. What I want you to do as you're doing this, sit down with your business expert because they're the ones that are going to be able to tell you how time actually relates to your business process. So you sit with your business expert, draw out a timeline, define what is the behavior, write a test for it. Don't make it pass, otherwise the business expert will stand up and leave. Okay? Once you got them, say, all right, let's go through another scenario. What should I do? Write a test. Your business expert should be able to read along this type of thing with you and say, yes, this is an accurate representation of the scenario. Go through scenario after scenario until the business expert says, that's everything. After you have codified all of the requirements as a set of failing tests, then go and make them pass. Okay? So it's test first development rather than test-driven development. You might be able to think up these scenarios yourself if it's a domain that you're very familiar with. Still, I'd say, you know, get a business expert in there. Time is something that especially in the more established domains, people are familiar with at least on the business side. There is a whole lot of experience on the business side when you're talking about mortgages, banking, insurance. These guys understand time very deeply. A lot of the newer domains where business, the business was computerized from day number one, online gambling is a good example. That's where time, sometimes business expert don't know to actually give you that information. Don't assume that the requirements you're going to get are going to be readily expressible like this. And that's the hard part about DDD that's known as ubiquitous language. Time is a big portion of your ubiquitous language. And unless you can work that into the conversation in a way that everybody speaks the same language, implementing this kind of stuff is going to be difficult. So just want to wrap up. We've seen lots of good books in software over the years. We have domain driven design. We've got the patterns of enterprise application architecture. 
People are familiar with the domain model pattern. In some cases, people are more or less familiar with some of the messaging patterns. The area where a lot of problems start are just around the fact that people continue to assume that when they're doing CQRS or NTR type architectures that they can build the same style of domain models as with a multi-single user system. You can't. The consistency won't hold up. In the best case, you'll lose data. In the worst case, you'll lose data and the data that you'll keep will be inconsistent. The problem is, it won't show up in testing. Who has tests for their system? Yes? Who actually as a part of their system testing runs tests in parallel to make sure that the data at the end is consistent? They've got one out of maybe 150 people. Okay? It's hard to write those tests. It's very difficult to express the way that things are. So A, start writing those tests, start running those tests, but B, more importantly, start designing your system up front with the idea that you're going to have to deal with this. So again, add an implementation perspective and Service Bus has been designed from the ground up to give you this type of consistency. But it's not specifically about in Service Bus. You could take everything that I just showed you. It's plain old CLR objects and put them on another infrastructure. You could run it on mass transit. You can run it on Rhino Service Bus. You could run it on Azure Service Bus. But always check back down to that issue of transactional integrity, consistency, this kind of stuff. Don't assume that the infrastructure is going to handle it for you until you've really gone down into the bits and bytes. So I hope I've given you a new perspective on the whole command query argument. It's not about dividing up commands and queries. It's about your domain model. It's about modeling your behavior. It's about finding out where the appropriate transactional seems are. If you get that, then you'll have a high performance reliable and consistent system. Thank you very much.
As developers build larger and more complex systems supporting many users collaborating on growing data-sets in parallel, many are turning to patterns like Command/Query Responsibility Segregation (CQRS). Unfortunately, the baggage of building N-Tier style business logic continues to weigh on their modeling efforts, often resulting in domain models that don’t handle consistency correctly in the face of race conditions. Join Udi for a new perspective on CQRS using a new twist on the saga pattern.
10.5446/50955 (DOI)
Let me introduce myself: my name is Alon Fliss, I am the founder of CodeValue and the chief architect of the company. We provide consulting and training services, but the most important information on this page is my blog URL, so you can go there and there is a post with all the information that you need in order to download the slides and the demos, and also my email so you can send me questions or anything that you want to ask or would like to know. So what we are going to see today: I will start with an introduction to concurrent programming in .NET, in the .NET Framework, .NET 4.0 and 4.5, then we will talk about the Task Parallel Library or TPL. How many of you use TPL? Quite a few. Then I will introduce the dataflow network and then we will talk about the building blocks of the network, how you build the network, what are the characteristics of all of these blocks, we will talk about the concurrency control, how you can change the behavior of the network, and we will get into some advanced topics, and I will also show you several demos and code and also a quite cool demo using the Kinect as the source for information. In 2005, Herb Sutter said that the free lunch is over, and when he said free lunch, he meant the fact that we used to write software, we used to write programs, and whenever there was a new CPU, our program executed much faster, so we just didn't care; we wrote programs and we knew that in a year or two, we'd get better performance. But in the last couple of years, this is not the situation: if we write a single-threaded program, we'll get much the same performance on new CPUs and sometimes even less performance because of power considerations and other considerations. The CPU provides a very good abstraction; we don't care most of the time about the CPU, we know that there is something there that will take our code and execute the code. But this abstraction is not good enough today, because today we get CPUs with many more cores, much more CPU in the machine, and the abstraction of execution as a single thread is not enough, so we need a new abstraction, and this is exactly what TPL in .NET 4.0 provides us: an abstraction to build a program that can utilize the CPU or many CPUs. The concept of multi-CPU is not so new; if you think about server applications, we have decades of applications that we had to write for the server side that have to utilize more than one CPU, but doing it manually is very hard, and it's even harder to take the same application and go to a machine that has more CPUs and utilize all of the new CPUs. We need something that can decouple the number of CPUs in the machine from the code that we need to execute. So to get back our free lunch, we need a better abstraction, and as I said, TPL, the Task Parallel Library, provides this abstraction for the .NET developer, and what we get in .NET 4.0 from TPL is several high-level abstractions that know how to break our serial code to be executed on many CPUs concurrently. So Parallel LINQ, maybe it's the highest abstraction that we get: we use the same LINQ query language that we are used to, but we tell it to run AsParallel and it knows how to break the query to execute it on different CPUs, and we'll see how the system does it later on.
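As a small illustration of that same-query-just-run-it-in-parallel idea (the range and the predicate here are made up, not taken from the talk):

    using System;
    using System.Linq;

    class PlinqExample
    {
        static void Main()
        {
            // the same LINQ query we are used to writing; AsParallel lets the
            // runtime break it up and execute the pieces on different CPU cores
            var evenSquares = Enumerable.Range(1, 1000000)
                                        .AsParallel()
                                        .Where(n => n % 2 == 0)
                                        .Select(n => (long)n * n)
                                        .ToArray();

            Console.WriteLine(evenSquares.Length);   // 500000
        }
    }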
We also have the Parallel class, with Parallel.For, Parallel.ForEach and Parallel.Invoke, that can again take a range, break it into smaller ranges and execute it concurrently on different CPU cores. The idea is that if we go to a machine that has more CPUs, we can benefit from these CPUs because we can break the workload into many more tasks. So we have this decoupling mechanism that decouples the number of CPUs from our execution, our code execution. The basic building block of the TPL is the Task class, or task instance. It's very easy to create tasks: if you are in .NET 4.0 you can use TaskFactory.StartNew to create a new task to be executed. If you are in .NET 4.5 you can use Task.Run, providing the delegate or lambda expression, and it will be queued to be executed by one of the threads that run on one of the CPU cores. You can get the future result of a task, you can wait until the task is over, you can use the new C# 5.0 async and await keywords to await a task in .NET 4.5. You can ask the status of the task: is it cancelled, is it completed, is it faulted and so on. The mechanism that executes the task is the task scheduler; we have a default task scheduler that comes from the system, from the .NET Framework. The default task scheduler uses an algorithm that is a work-stealing algorithm, and I'll show you an example of how the work-stealing algorithm works. You can create your own task scheduler if you want to have a different behavior of executing tasks, for example you want to provide some environment variable, you want to execute the task in the UI thread, you want to have a different security identity and so on. So you can provide your own task scheduler. So how does the default task scheduler work? Whenever your program creates a new task, and it doesn't matter if it does it using Parallel.For or just Task.Run, it creates a new task in an arbitrary thread, and this task goes to a global queue. So all the tasks go to a global queue, and now inside the scheduler there are different threads, each of which has its own local queue. So when a thread is free, which means that there is a CPU core free, waiting to do something, the thread takes a task from the global queue to be executed. And whenever this task creates a new task, the new task goes to a local queue. This gives us better performance because we use the same CPU cache and we don't need to synchronize between the local queues, so we save these locks. So we execute those tasks in the local queue: we enqueue them and then we can take them to be executed. Now when a worker thread finishes executing a task and there are no more tasks in its local queue, this means that we have an empty CPU core, a CPU core that waits to do something. In this case it goes to the back of another thread's local queue and steals a task to be executed. And now if this task creates a new task, it goes to its local queue. So we spread the load between all the worker threads, and since there is a correlation between the number of threads and the number of CPUs, we don't get into a situation where we have oversubscription, where there are more threads running than CPUs. So we don't get into context switches that we don't need to do. And if we go to a machine that has more CPU cores, then there will be more threads that will execute the same load. To see that, let's go and see an example of getting back our free lunch. So in this example, what I do is I calculate prime numbers, and I start by using only one CPU.
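The demo code itself is not shown in the recording, so the sketch below is only a guess at its shape: count primes for five seconds at each concurrency level, using Parallel.For with a MaxDegreeOfParallelism limit so that more and more cores join in.

    using System;
    using System.Diagnostics;
    using System.Threading;
    using System.Threading.Tasks;

    class PrimeDemo
    {
        static bool IsPrime(int n)
        {
            if (n < 2) return false;
            for (int i = 2; i * i <= n; i++)
                if (n % i == 0) return false;
            return true;
        }

        static void Main()
        {
            for (int concurrency = 1; concurrency <= Environment.ProcessorCount; concurrency++)
            {
                long primesFound = 0;
                int next = 2;
                var options = new ParallelOptions { MaxDegreeOfParallelism = concurrency };
                var watch = Stopwatch.StartNew();

                // for five seconds, churn through candidate numbers on 'concurrency' cores
                while (watch.Elapsed < TimeSpan.FromSeconds(5))
                {
                    int from = next;
                    next += 10000;
                    Parallel.For(from, next, options, n =>
                    {
                        if (IsPrime(n)) Interlocked.Increment(ref primesFound);
                    });
                }

                Console.WriteLine("{0} core(s): reached {1}, found {2} primes in 5 seconds",
                                  concurrency, next, primesFound);
            }
        }
    }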
I have a loop over the concurrency level, and for five seconds I run with one CPU, then with two CPUs and more and more. And I see that I get a better result and I also see that I utilize more CPU power from the machine. So let's run it and let's look at the task scheduler. So we see that we get a better or higher prime number, a higher result. And we can see that the CPU load in the machine gets higher and higher until we get to eight. This machine has two CPUs, well, one CPU, four cores, hyper-threaded, so it's like eight execution engines inside the CPU. So we can see that we get better results and we also got to 100% CPU for the last five seconds. And if I go to a machine that has 64 CPUs, I'll get an even better result. So this is getting back our free lunch. We just moved to a newer machine, newer CPU, more cores, better result. So this is the basics of concurrent execution in .NET, and now we can move on and talk about the new member in the family, which is the Task Parallel Library dataflow network, or in short, TDF. It's the TPL Dataflow network, or TDF, so I'm going to use TDF a lot. And the whole idea of TDF is a new way to create tasks that will be executed within the same task scheduler that we saw with Parallel.For and Parallel LINQ. So it's a way to create new tasks. TDF is based on a model which is called the actor model, and this came from Wikipedia. And the actor model is somewhat like the object-oriented model that says that everything is an object; in the actor model everything is an actor. An actor is like an object, but it's active. It reacts to messages: it gets messages, does something and then sends messages. And it works concurrently with other actors. So this is the actor model. Microsoft first implemented the actor model in Visual Studio 2010 for C++, for the native solution. In the native solution they called it the VC++ asynchronous agents library. And the same team re-implemented it in .NET and they made it somewhat simpler than the C++ implementation and very strong. So we use TDF to build a dataflow network, and the goal is to achieve high-performance, high-throughput and low-latency execution. Okay, and we'll see how it is built soon. So let's dive into the building blocks of the dataflow network. So we have three different types of blocks. We have a source block, we have a target block at the end, and we have a propagator block. All the blocks implement, all of them implement IDataflowBlock, which is an interface that we'll get to see in the next slide. We'll get to see the interface, the methods and property of this interface. Then source blocks implement ISourceBlock, and some of them also implement IReceivableSourceBlock, which lets non-block code receive messages from a block. So most of them implement this interface as well. Then we have the propagator block that implements the IPropagatorBlock interface, which is just an extension: it just extends ISourceBlock and ITargetBlock. And then we have ITargetBlock, which represents a target block. I know that this is small, this is why I get big captions. And so we have the IDataflowBlock interface. The interface has two methods, Complete and Fault. Complete tells the block that that's it: you finish execution, you will not receive more messages, and if somebody wants to give you more messages, you should decline those messages. Fault tells the block that it is in the faulted state.
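As an aside, here is a small runnable sketch of Complete, the faulted state and the Completion task on a concrete block (the ActionBlock that the demo uses next); the negative-number rule is just an invented way to trigger an unhandled exception.

    using System;
    using System.Threading.Tasks.Dataflow;

    class CompletionDemo
    {
        static void Main()
        {
            var block = new ActionBlock<int>(n =>
            {
                if (n < 0) throw new InvalidOperationException("negative!"); // unhandled, so the block faults
                Console.WriteLine(n);
            });

            block.Post(1);
            block.Post(-1);                 // this message makes the block fault
            block.Complete();               // either way, no more input

            try
            {
                block.Completion.Wait();    // Completion is the block's resulting Task
            }
            catch (AggregateException)
            {
                Console.WriteLine("Block ended in status: {0}", block.Completion.Status); // Faulted
            }
        }
    }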
Sometimes it just happens: if there is an unhandled exception, the block will get into the faulted state. But anyway, Complete and Fault move the block into the completion state. And then we also have a property, Completion, which is the resulting task of the block. So you can take the Completion and do a .Wait, just wait for completion, or use await, the new keyword. Or you can ask the status of the task from the Completion state. So all the blocks implement IDataflowBlock. Now we have the ISourceBlock. And the ISourceBlock also has several methods. So the first one is ConsumeMessage. It's a little bit confusing, but ConsumeMessage means that the target block can call the source block and ask it to offer a message to it, to consume a message from the source block. So this is like the pull model, like LINQ, because the target block goes to the source block and asks to consume a message from it. If the block has the message, it can provide the message to the target block. LinkTo is the way to build the network, the way to connect a target block to a source block. So this is very simple. And there are two methods, ReleaseReservation and ReserveMessage, which provide a way to get the network to behave in a transactional way. Because sometimes a target block wants to take messages from different blocks. If it just takes the messages and waits for other messages to come, another target block cannot consume the messages that were consumed. So we can get into a deadlock condition. ReserveMessage: a target block can call ReserveMessage to reserve a message, and only if it can reserve messages from all the source blocks that it needs, then it can consume the messages; otherwise it can release the messages. So in this way, we solve one of the main problems of a dataflow network, which is deadlock because of consuming messages from different source blocks and competing on these messages. The target block is very simple, it has only one method, which is OfferMessage, and here the source block calls the target block and offers it a message. And in this case, it's the push model. It's like Reactive Extensions, if you know about Rx. So actually there is no pull or push model, there is a negotiation model between source block and target block. And we will see the negotiation, the protocol, soon. The IPropagatorBlock is just a block that has a source and a target, and it resides in the middle of the network. So let's see how a block executes messages. So let's go to Visual Studio. We'll start with a very simple block, which is the ActionBlock. The ActionBlock is a block that has a buffer, so it can receive many messages. And then whenever there is a new message in the block, one or more, there is a new task that executes those messages. And the execution is just a delegate or lambda expression that you provide for the block. The message type in this case is just an integer, but it can be any type that you want to provide as a message. In this case, I just provide an integer. And what it does whenever there is a new message is just write the integer, the message, and also the current ID of the task that executes the message. Next, running in a loop from 1 to 9, I post these messages. The 9 messages will be queued in the action block's queue and will start to be executed. Then I sleep for one second, post the 10th message, tell the block to complete and wait for completion. Okay? Let's run it.
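What that little demo looks like in code, reconstructed from the narration (the exact output format is a guess):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    class ActionBlockDemo
    {
        static void Main()
        {
            // one block, one delegate; by default a single task drains the input queue
            var block = new ActionBlock<int>(message =>
                Console.WriteLine("message {0} handled by task {1}", message, Task.CurrentId));

            for (int i = 1; i <= 9; i++)
                block.Post(i);          // nine messages queued up front

            Thread.Sleep(1000);         // the draining task finds an empty queue and goes away
            block.Post(10);             // a new task is spun up for the tenth message

            block.Complete();           // no more input
            block.Completion.Wait();    // wait for the block to finish
        }
    }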
So we get 9 messages executed by task number 1, because the task that executes a message goes to the buffer and sees that there are more messages. It takes those messages and executes all of them. Then we have a sleep for one second, so the task finds that there are no more messages. It just goes away. And then when there is the post of the 10th message, a new task is queued to execute the new message. So blocks usually act in a serial way. You don't have to care about synchronization because there is only one task that executes all the messages, which makes things much simpler. You get the concurrency from running different blocks concurrently. Okay? But if you want to execute the messages in more than one task, what you can do is provide information, ExecutionDataflowBlockOptions, and here you can, for example, provide the max degree of parallelism, and let's put it to be 4. And now what I told this block is to run concurrently with 4 tasks. And when I run it, I see that I got task IDs 1, 2, 1, 3, 4, because until it spawned another task, the first task could handle three messages. I can change it a little bit. I can tell it that each execution takes more time. So what I'll do is just change the lambda expression and add a thread sleep. So each message execution will take another 10 milliseconds, and now I can see that I get 1, 4, 3, 4, 1, 2, 3, 4 and so on. So different tasks execute the same message queue. Okay? Let's continue on. So now I want to talk about the handshaking protocol between the source block and target block. So let's go to the beginning. I have two blocks, a buffer block and an action block, and I told the blocks to be bounded with a limited capacity. It's bounded to four messages in the input buffer of the buffer block and two in the action block. So the block cannot consume more than two messages. And now the handshaking protocol starts. It calls OfferMessage and gets as a result Accepted. It offers another message and gets as a result Accepted. When it offers another message, it gets as a result Postponed. Postponed means that the target block wants to execute the message, but there is no room, it can't do it right now. So it postpones; it tells the source block, in the future I may come and ask to get the message, or you may offer me the message again. Now the action block frees its queue, and now it can call back and ask to consume the message that was postponed. Okay? Another way to use this protocol is to use Post. Post is a synchronous non-blocking call. It means that you can pass a message, and if the target block cannot consume the message, it's just declined, you will get back a false result, and then you can decide whether you want to try to send again, or do something else. So I offer the message and I get the declined. There is no room for the message. Another situation is when I execute all the messages in the target block, and then I call Complete. Now the block cannot get more messages at all. So if I try to offer a message to the block, let's do it again, this is the one before, so again, the block should be completed, and now I offer a message and it is declined for good. The block cannot receive more messages. So this is the handshaking protocol between the blocks. The whole idea is how I move the ownership of the message from one block to another block. Okay? And most of the time your messages will be .NET types. Most of them will be reference types, so it's just moving references, or the responsibility for messages, from one block to another.
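And a small sketch of the bounded-capacity behavior just described, using Post's boolean result (the capacity and timings are arbitrary):

    using System;
    using System.Threading;
    using System.Threading.Tasks.Dataflow;

    class BoundedCapacityDemo
    {
        static void Main()
        {
            var slow = new ActionBlock<int>(n => Thread.Sleep(1000),
                new ExecutionDataflowBlockOptions { BoundedCapacity = 2 });

            // Post is the synchronous, non-blocking way in: it returns false
            // when the target has no room, so the caller can retry or do something else
            for (int i = 1; i <= 5; i++)
                Console.WriteLine("Post({0}) -> {1}", i, slow.Post(i));
            // the first couple are accepted, the rest are declined because the small buffer is full

            slow.Complete();                                    // from now on every offer is declined
            Console.WriteLine("Post(6) -> {0}", slow.Post(6));  // False: the block has completed

            slow.Completion.Wait();
        }
    }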
So most of the time there is no copy. There's just passing references between blocks. Okay? So it's pretty fast. How do you build a network? You just use the LinkTo method. The source block provides the LinkTo method, which gets the target block and the DataflowLinkOptions. And there is also another static class, which is the DataflowBlock class, that has many extension methods. And three of them are overloads for using LinkTo in a better way, a way that you can provide more information to the link. So what are the options for the link? You can tell the link to be appended or prepended to the source, which means that you can connect more than one link to a block, more than one target. And if a block has more than one target and now the block has a message, it will first offer the message to the first block and then to the next and then to the next in a round-robin fashion. So you can decide which block will be the first and which block will be the last just by providing true or false in the Append option. Okay? There are some blocks that copy the message. For example, the broadcast block will just copy the message. You provide a cloning method and it will copy the message. In this case, there is no round robin and you don't care which one is connected first. The MaxMessages option is a way to provide the number of messages that can be passed through the link, and after this number of messages is consumed, the link will be broken and the target block will be disconnected from the source block. So for example, you decide that this block only needs to consume one message: you provide MaxMessages equal to one, and after this message is passed via the link, the link just goes down and the target block is disconnected from the source block. So it's a way to change the network while the network is running. PropagateCompletion is a very important feature, a very important option. It allows a source block to propagate the completed or faulted state to the target blocks. It's a really good way to tear down the network. You create a network, you pass PropagateCompletion equal to true, and then when you finish, you just tell your source block that's it, complete, and it will tell the target blocks that everything is completed. Okay, very useful. Another feature, which you can use in one of the overload methods of LinkTo, gives you a way to provide a predicate, and it will filter the messages that go through the link. The mechanism behind it is that it just creates a new block, which is a filter block. The filter block is like, think about it like an action block that executes the predicate, and if the predicate returns false, that's it, the message is just declined. So in this case, you can spread the load to different blocks with different predicates. This one will consume this kind of message; that one will consume other kinds of messages, just by providing a predicate. Okay? So let's look at a simple but more complex example than the single block example that we had, and I also want to show you an open source project that we built that can show you the network as a debugger visualizer in Visual Studio. So let's see both of them. So... In this case, we have a broadcast block, and broadcast has to clone the message, so the cloning function is very simple, just the identity lambda expression. Then we have a transform block that transforms from int to int.
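To make the network that the demo walks through easier to follow, here is a rough reconstruction in code; the batch size, the number of posted values and the printing logic are guesses, not the speaker's exact demo.

    using System;
    using System.Threading.Tasks.Dataflow;

    class NetworkDemo
    {
        static void Main()
        {
            var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

            var broadcast = new BroadcastBlock<int>(x => x);        // cloning function: identity
            var positive  = new TransformBlock<int, int>(x => x);   // pass the value through
            var negative  = new TransformBlock<int, int>(x => -x);  // invert the value
            var join      = new JoinBlock<int, int>();              // pairs the two streams into tuples
            var batch     = new BatchBlock<Tuple<int, int>>(5);     // groups tuples into arrays of 5
            var print     = new ActionBlock<Tuple<int, int>[]>(tuples =>
            {
                foreach (var t in tuples)
                    Console.WriteLine("{0} / {1}", t.Item1, t.Item2);
            });

            broadcast.LinkTo(positive, linkOptions);
            broadcast.LinkTo(negative, linkOptions);
            positive.LinkTo(join.Target1, linkOptions);
            negative.LinkTo(join.Target2, linkOptions);
            join.LinkTo(batch, linkOptions);
            batch.LinkTo(print, linkOptions);

            for (int i = 1; i <= 10; i++)
                broadcast.Post(i);

            broadcast.Complete();        // completion flows down the links to the ActionBlock
            print.Completion.Wait();
        }
    }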
So let's look at a simple but more complex example than the single block example that we had, and I also want to show you an open source project that we built that can show you the network as a debugger visualizer in Visual Studio. So let's see both of them. So... In this case, we have a broadcast block, and broadcast has to clone the message, so the cloning function is very simple, just the identity lambda expression. Then we have a transform block that transforms from int to int. Transform is like Select, if you think about it, like LINQ's Select, because you get something and you can transform it to something else, either another value or a different type. So in this case, it's from int to int. We have one that transforms to positive, which does nothing, and another one that transforms to negative, so it just inverts the value. Then we have a join block. A join block takes two inputs, sometimes even three, depending on which kind of join block you have, and it makes a tuple from the inputs. So you get a tuple of all the inputs. So we have a join block and then we have a batch block. A batch block knows to take the input and put it in a group; it creates an array of input messages. So in this case, we have an array of tuples. And then we have an action block that gets this array, iterates the array and writes the result. We build the network by calling on each block the LinkTo method, and then we iterate and post messages to the first block, which transfers these messages on to the other blocks. Let's run it. Okay. So we got into the breakpoint and let's go to our first block, which is a broadcast block, and here we have a debugger visualizer that we built that shows the network. So we have a broadcast block with all the information of the block, two transform blocks and then a join block and then a batch block and then the last action block. Well, it's too tiny, but you can still see it. It's an open source project; for the visualizer you can go to CodePlex or go to my last post from today, there's a link there, and you can download it and play with it. It gives you a very good visualization of the network, and also the information about each block, how many messages it has in the input and in the output, what is the state of the block and so on. Okay? Okay. I started by telling you that the whole idea of TPL is to find a way to create many tasks and to let the different CPU cores execute the different tasks. So I want to show you an animation that shows how it works, how different blocks execute on different CPUs. In this case, we're going to have three different blocks and two CPUs. Okay? We start with two tasks for the different blocks and let's see the animation. So two CPUs work, CPU one finished, so the task goes away, so we can give CPU one to the first block. Now we can give the CPU to the second block, now we can give the other CPU core to the last block. Okay? I will show it again. Because things happen in parallel. Whenever there is something to do, the task works, but whenever there are no more messages, the task just goes away and the CPU can go to another block that has something to do. So we schedule the CPUs to different blocks, we use tasks, and if we had more blocks we could schedule more CPUs to more blocks. Okay? Good. So actually now you understand what a dataflow network is, how it's built with LinkTo, and how it behaves with the protocol that moves messages from one block to another.
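For reference, the demo network just walked through (broadcast, two transforms, a join, a batch, and a final action block) could be wired up roughly like this; this is a reconstruction with made-up values, assuming the standard System.Threading.Tasks.Dataflow types, not the speaker's actual demo code:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class NetworkSketch
{
    static async Task Main()
    {
        var linkOpts = new DataflowLinkOptions { PropagateCompletion = true };

        var broadcast = new BroadcastBlock<int>(i => i);        // cloning function: identity
        var positive  = new TransformBlock<int, int>(i => i);   // passes the value through
        var negative  = new TransformBlock<int, int>(i => -i);  // inverts the value
        var join      = new JoinBlock<int, int>();              // pairs the two streams into tuples
        var batch     = new BatchBlock<Tuple<int, int>>(5);     // groups five tuples into an array
        var print     = new ActionBlock<Tuple<int, int>[]>(arr =>
        {
            foreach (var t in arr) Console.WriteLine($"({t.Item1}, {t.Item2})");
        });

        broadcast.LinkTo(positive, linkOpts);
        broadcast.LinkTo(negative, linkOpts);
        positive.LinkTo(join.Target1, linkOpts);
        negative.LinkTo(join.Target2, linkOpts);
        join.LinkTo(batch, linkOpts);
        batch.LinkTo(print, linkOpts);

        for (int i = 1; i <= 10; i++) broadcast.Post(i);
        broadcast.Complete();           // completion propagates down the whole network
        await print.Completion;
    }
}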
Now it's time to get into the building blocks that compose the network. So we have three different categories. We have buffering blocks, executing blocks and grouping blocks. For all of them, you can change the way they work by providing some options. So there is the base class of dataflow block options, and you can provide the capacity limit and a cancellation token if you want to cancel the execution of your tasks. You can provide the task scheduler, and then for executing blocks you can provide all the things that a buffering block has, but you can also provide the max degree of parallelism that we saw, how many tasks will execute the block concurrently, and there is another option which is the single producer constrained option, which means that there is only one producer that creates messages, so the implementation doesn't have to use a lock when it reads the messages. It means that you will execute a little bit faster because you will not use this lock. For the grouping blocks, again you have all the things that come from the dataflow block options, this is the base class for the options, and you also have two other options. You can tell the max number of groups that will be created by the block, and you can also tell the greedy behavior of the block. It's a grouping block, it needs to get more than one message, and greedy means that the block does not wait. So if I am greedy, when there is a message in one of the sources I will just take it, but then I can get into a deadlock condition. If I am non-greedy, I will wait until I can get all the messages from the sources and then I will grab all of them, and it uses the reserve and release messages underneath to do it. So this table summarizes all the blocks, but this provides a better explanation for each of the blocks. So the buffering blocks: the first one is a very simple buffer block, it's just a buffer. The second one is a broadcast block; it has a buffer but it knows to copy the message to different targets. And the last one is the write-once block, which receives only one message, only the first message, and copies it to the targets. If you know about Lazy<T> in .NET or the singleton, this is like lazy: it gets the parameter once and then it can copy it any time that some target needs the message. Execution blocks: the action block is a very simple block, it has a queue and a task that executes the block. The transform block is like Select in LINQ, you get messages and you change them to another type or another value, and the transform many block gets a message and can produce many results and transfer them on. Grouping blocks: we have the batch block, which takes the input and creates arrays of inputs, you give it the length of the array and it provides batching. The join block gets two or three sources and creates tuples, so you can use these tuples later on, and the batched join block creates tuples of arrays. So these are the building blocks. The DataflowBlock class is the static class that implements many extension methods. I'm not sure whether you're familiar with extension methods, but probably you are. And the whole idea of extending interfaces is to provide a way to do what in C++ we call multiple inheritance, because when you extend an interface you provide in one place the implementation for all the types that implement this interface. So the DataflowBlock class has these extension methods that actually extend all the other blocks. Okay? So we have three overloads for LinkTo in this class, and we can use the dataflow network as observable or as observer to connect dataflow to Rx, to the Reactive Extensions. We have a Choose, which means that we can have more than one block in the input and we will choose one of the blocks; it's another helper that comes from the DataflowBlock class. Also we have a null target. The null target provides us a block that receives messages and does nothing with them. Okay? And we have all the ways to send messages and receive messages.
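A quick sketch of the option classes summarized above, assuming the System.Threading.Tasks.Dataflow package; the property names are the real ones, the values are arbitrary illustrations:

using System;
using System.Threading;
using System.Threading.Tasks.Dataflow;

class OptionsSketch
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        // Execution options: bounded input queue, cancellation, four concurrent tasks.
        var worker = new ActionBlock<string>(s => Console.WriteLine(s),
            new ExecutionDataflowBlockOptions
            {
                BoundedCapacity        = 100,
                CancellationToken      = cts.Token,
                MaxDegreeOfParallelism = 4
            });

        // SingleProducerConstrained: we promise only one producer posts messages,
        // so the block can skip some internal locking.
        var singleProducer = new ActionBlock<string>(s => Console.WriteLine(s),
            new ExecutionDataflowBlockOptions { SingleProducerConstrained = true });

        // Grouping options: a non-greedy join only takes messages once every input
        // has one available (using reserve/release underneath), which avoids certain
        // deadlocks; MaxNumberOfGroups caps how many tuples the block will produce.
        var join = new JoinBlock<int, int>(
            new GroupingDataflowBlockOptions { Greedy = false, MaxNumberOfGroups = 10 });

        worker.Post("hello");
        worker.Complete();
        worker.Completion.Wait();
    }
}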
So how do you send and receive? If you want to send and receive in a synchronous way, in a blocking synchronous way, which is not popular today, today the asynchronous way is how you do things, but if you still want to do it, you can just call send async and wait on it, or if you want a blocking receive you can just call Receive, it will wait and you will get a message. If you want to do it synchronously but in a non-blocking manner, you can just post a message, and if the post fails you will just get false. You don't have to do the while; it's a busy while loop. But this is how you can post a message without waiting. And you can try receive; try receive will give you the message in the out parameter and then you'll get true, otherwise you'll get false. And if you want to use the modern asynchronous way of doing things, then you can use the block's send async, and you can add the await keyword if you want to continue execution just after the message is sent. Or you can receive async, and then you can continue execution after you have the result, with the await keyword. So it's time to use the Kinect and to show a sample of a more complex dataflow network. So if you are not familiar with the Kinect, the Kinect is a machine, it's a device that has many inputs. It has a color camera, it has a depth camera, it projects an infrared pattern and then it reads this pattern and it gets the depth like a radar. And it also has an audio array, but I'm not going to use audio here in the sample, and it also knows to track two human bodies and to provide the skeleton information. So I took the Kinect SDK and I decided that this is a good source of messages to consume. It's 30 times per second, different sensors, a lot of load; let's do it in a dataflow network. So I connected the depth camera to a broadcast block, the color camera to a broadcast block, the skeleton source to a broadcast block. I also used the file system as a source for background images, so I have a background picture store that provides me a background image. And again I connected it to a broadcast block, so if I want to consume it from more than one target, I can do it. Then the skeleton source I connected to another block which is a gesture recognition block, and the gesture recognizer can send messages to two targets. One is the background picture store; it sends a message to move to the next or to the previous picture when I do a gesture like this. And also another block which knows to take the picture and decide whether to change it to be a green picture, and then we have just a green screen, or whether to provide the source picture. And again the gesture recognizer tells it when to show green and when not. So I take the background into a join transform block, I also take the depth information and the color information into the join transform, and here I do all the magic, I'll show you the magic, and then I have an image that was generated by the network 30 times per second and I show it in a very simple WPF application. Okay, let's run it and then I might show you some code. Okay. So I get the effect of a green screen, even though I have many things behind me, because the depth camera knows where I am and the skeleton knows how to follow me. You see that I get a very cool effect, and I can use gestures to change the background, and I can use the Superman gesture to move to a green screen, and if I show you the load, you can see that the load is spread among all the CPUs, 30 frames per second, and this is quite cool. Okay.
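Going back to the send and receive variants mentioned just before the Kinect demo, here they are in one small sketch; this assumes the standard DataflowBlock extension methods and is only an illustration:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class SendReceiveSketch
{
    static async Task Main()
    {
        var buffer = new BufferBlock<int>();

        // Synchronous, non-blocking send: Post returns false if the block declines.
        bool accepted = buffer.Post(1);

        // Asynchronous send: completes when the block accepts (or declines) the message.
        bool sent = await buffer.SendAsync(2);

        // Synchronous, blocking receive: waits until a message is available.
        int first = buffer.Receive();

        // Synchronous, non-blocking receive: a null filter means "take whatever is there".
        if (buffer.TryReceive(null, out int second))
            Console.WriteLine($"got {first} and {second}, post={accepted}, send={sent}");

        // Asynchronous receive: await the next message without blocking a thread.
        buffer.Post(3);
        int third = await buffer.ReceiveAsync();
        Console.WriteLine($"got {third}");
    }
}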
And I built the network using the predefined blocks. I have here all the blocks, but I have a background image manager block and I use other blocks like... Let's look at... It's an IPropagatorBlock that gets a command and provides a bitmap source to its targets, and to implement it, I didn't implement all the interface methods myself; what I did is just delegate those interface methods, all those interface methods are just delegated to other blocks, to a broadcast block for the result and to an action block for the input and so on. So I use the building blocks to build my own blocks. You can download the sample and you can look at the code; for the Kinect for Xbox or for PC you can download the Kinect SDK and play with it. Okay. So, concurrency control. Once you have the network you can change the way the network behaves. The whole idea is that we have many tasks and we want to execute the tasks, but sometimes we need to change the behavior. For example, we can change the task scheduler, because the default task scheduler is not a fair task scheduler; it executes the latest tasks before earlier tasks, because it provides better throughput but not fairness. So I might want to change the task scheduler. Or I can decide to execute the messages in an execution block in a concurrent way, given the max degree of parallelism, so you can change how many tasks will execute a block. I can specify the number of messages per task if I want to have, again, better fairness, because if the block has messages the task will never go away, it will consume all those messages all the time, and then there will be no free CPU to execute other blocks. So if I see that I am in this situation I can say, it doesn't matter, you execute only 10 messages and then the task has to leave, and then there will be another task to execute this block later on. So this is the max messages per task parameter. You can cancel queued tasks using a cancellation token, the default cancellation token source that you know from TPL, so you can provide a cancellation token. You can remove the unnecessary lock if there is only a single producer, and you can prevent deadlock by not using a greedy algorithm. You also have to handle exceptions. If you have an unhandled exception, the block will become faulted, or you can explicitly call Fault: I find out that the situation is no good, I don't find the Kinect sensor, I can't work, so let's put the network in a faulted state, let's call Fault. So you can call Fault, or your code can throw an unhandled exception, and then the block will become faulted; for example, from the action you just throw an exception and the block will just become faulted. Or the block can receive the fault from another block if you ask that block to propagate its completion, okay, or if you just don't implement the interface according to the rules, all this handshaking, all this negotiation; if you don't obey the rules of the protocol, again you can become faulted. A block that becomes faulted stops handling messages, it clears the queue, it faults the completion task and it does not want to execute any message anymore. Faulted source blocks also stop offering messages; a block propagates the faulted state if we told it to do so with the propagate completion option, and it unlinks the targets from the source. So this is the behavior of the network in an exception state.
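A small sketch of the fault-handling and concurrency-control knobs just described; the exception text and values are invented for illustration, but the option names, the Fault call on IDataflowBlock, and the faulted Completion task are the standard TPL Dataflow behavior:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class FaultSketch
{
    static async Task Main()
    {
        // Calling cts.Cancel() would stop the block and cancel its Completion task.
        var cts = new CancellationTokenSource();

        var block = new ActionBlock<int>(i =>
            {
                if (i == 3) throw new InvalidOperationException("bad message");
                Console.WriteLine(i);
            },
            new ExecutionDataflowBlockOptions
            {
                CancellationToken  = cts.Token,
                MaxMessagesPerTask = 10   // for fairness: the task yields after 10 messages
            });

        for (int i = 0; i < 5; i++) block.Post(i);

        try
        {
            await block.Completion;   // the unhandled exception faults the block
        }
        catch (Exception ex)
        {
            Console.WriteLine($"block faulted: {ex.GetBaseException().Message}");
        }

        // A block can also be faulted explicitly via the IDataflowBlock interface.
        var other = new BufferBlock<int>();
        ((IDataflowBlock)other).Fault(new Exception("no Kinect sensor found"));
    }
}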
You can implement a custom block; you just have to implement those interfaces and you have to implement the protocol, the handshaking protocol. There are three different ways to do it. One way is to use the Encapsulate method of the DataflowBlock class. Encapsulate takes two blocks, the source block and the target block, it combines them together and creates a new block. This is a very simple mechanism. Or you can create a block the same way that I did with the Kinect example: I just created a block that implements all the interfaces, but then I delegate all the calls to other blocks, an action block to do the work, a broadcast block to send the result to other blocks, a buffer block to get the information in the input. Or you can do it the hard way, but then you have to handle buffering and message ownership, all this protocol and synchronization and completion state and non-greediness, reserve and release, and so on, and it is not so easy. One of the things about a dataflow network is that it's very easy to write code, because you don't care about synchronization, you usually run only with one task, you just implement your logic and you build a network. Okay? But if you need to create a custom block, then you get all the difficulties. So usually you don't want to go down this path, but sometimes, for performance, for a very specific block, you can build your custom block, and actually there is some documentation on the net that you can find that tells you how to do it, and some examples of it. So this is dataflow networks. We saw what .NET TPL is, we saw the mechanism underneath .NET TPL, this task scheduler that knows how to schedule tasks and execute tasks among different CPUs, we saw how we can get back our free lunch, we saw what TDF is, the TPL Dataflow library, we saw how we can design and implement and connect different blocks, we saw the nine default building or primitive blocks that come with the current dataflow library, and we saw that we can customize existing blocks, build new blocks from existing blocks, or even implement our own block. If you have any questions, please go and ask.
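As an illustration of the first approach, DataflowBlock.Encapsulate, here is a sketch of a custom propagator block that sums pairs of inputs; the pairing logic is invented for the example, and only the Encapsulate call itself is the point:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class EncapsulateSketch
{
    // A custom propagator built from existing blocks: it sums every pair of inputs.
    static IPropagatorBlock<int, int> CreatePairSummer()
    {
        var batch = new BatchBlock<int>(2);                                      // target side
        var sum   = new TransformBlock<int[], int>(pair => pair[0] + pair[1]);   // source side

        batch.LinkTo(sum, new DataflowLinkOptions { PropagateCompletion = true });

        // Encapsulate stitches the target side and the source side into one block.
        return DataflowBlock.Encapsulate(batch, sum);
    }

    static async Task Main()
    {
        var pairSummer = CreatePairSummer();
        var printer    = new ActionBlock<int>(Console.WriteLine);
        pairSummer.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 1; i <= 6; i++) pairSummer.Post(i);   // prints 3, 7, 11
        pairSummer.Complete();
        await printer.Completion;
    }
}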
Yes? You want to time them? So the question is, if I have several calculations in the network and I want to time them. I think that you need to provide another mechanism, so each block can wait or can synchronize with this mechanism to get the timing. Okay, that is one way. Another way, which we used in one of our projects, is that we needed to take information from the stock exchange and then we spread it, we broadcast it, to many thousands of different algorithms, different permutations of the same algorithm actually, and at the end we needed to join them back, and sometimes in the network we found out that we needed to filter some of the messages, but then it's very hard to join them back. So what we did, we just passed null messages that say nothing, but they say that there is something that might be here, or there will be something that needs to be linked in this case, and this gives you something like what you are asking for; it gives you the timing, everything is in sync, because all of the network has the same number of messages. So this might help you. Next question, yes. [The next audience question is partly inaudible in the recording.] It's not so easy. Actually, if you have enough work then you might consume all the CPUs that you have. Okay, but yes, there are times when you have to think about queuing and not throw too many messages into the network. It's always the case in pipelines and networks, and it's not so easy; it's an abstraction, but you still need to think and you need to see that the load is right. The thing is that you can watch it, you can see it, especially with this debugger visualizer that we created. [The remainder of the question-and-answer session is unintelligible in the recording.]
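Picking up on that last answer about not flooding the network, one common pattern is to give the slow block a bounded capacity and have the producer await SendAsync, so the producer is paused whenever the queue is full; a minimal sketch, again assuming the System.Threading.Tasks.Dataflow package:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class BackpressureSketch
{
    static async Task Main()
    {
        // A slow consumer with a small bounded input queue.
        var slow = new ActionBlock<int>(i =>
            {
                Thread.Sleep(50);                  // pretend each message is expensive
                Console.WriteLine($"processed {i}");
            },
            new ExecutionDataflowBlockOptions { BoundedCapacity = 8 });

        // The producer awaits SendAsync, so it waits whenever the queue is full
        // instead of flooding the network with messages.
        for (int i = 0; i < 100; i++)
            await slow.SendAsync(i);

        slow.Complete();
        await slow.Completion;
    }
}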
Task Parallel Library (TPL) Dataflow is a new addition to the .NET parallel stack. It is based on agents and message-passing blocks. An agent is based on the Actor Model and is one of the building blocks of a concurrent dataflow network. The idea behind the TPL Dataflow library is that you build a network from agents and message blocks. Messages are sent concurrently from agents and message blocks to other agents, utilizing the network you've built to solve a specific problem.
10.5446/50957 (DOI)
Alright, it is 10.20 so we'll go ahead and get started. Good morning. Welcome to the second topic of the last second topic track or time slot of the day. My name is Barry Hawkins. I am an Agile and Lean coach from the United States based out of Atlanta, Georgia. And what I'm going to talk to you about in our time together is what I've titled Agile, Lean and the Space Between. I'm in my eighth or ninth year of practicing and applying Agile and then Lean on top of Agile. One thing I've noticed in the past few years is to an increasing degree folks are viewing and communicating. Agile and Lean as these two separate and often opposed entities or camps. If it were a Venn diagram, the circles would be like this and they're just very begrudgingly touching and admitting any related effort. When I would hear folks talk and speak at conferences and write and they would say, well, in Agile we do this. I would think to myself, well, when I've used a Lean approach I did that as well. And when Lean folks would say, well, in Lean you get the following benefits. I would think to myself I've had that from day one every time I've done natural implementation. So I thought it would be good to come up with a talk that addressed how they, although they have distinct origins, there's actually a great deal of shared, a degree of commonality between the two and in fact if anything they complement one another. So let's get into that. So Agile. The Agile umbrella. Of course Agile doesn't mean any one thing. Increasingly I think we're all asking ourselves what does it mean at all. But in 2001 when the term was coined and codified, it was representing an umbrella of methodologies that shared certain values and the Agile manifesto, which is basically a four point value statement, served to sort of reinforce that hey, for XP and Scrum and Crystal and DSDM and adaptive software development for all of these things, what we're saying is that we care about things like the individuals and their interactions with one another being more important than any processes or tools we've adhered to. We care more about did we actually deliver something that people are using that's accomplishing our goal in the first place more so than have we documented it and produced beautiful enough artifacts. You know, have we, are we actually collaborating one another rather than using a point in time agreement to beat one another up with. Are we responding to change as we're learning or are we actually adapting our approach or are we saying well no, this is what we decided six months ago when we were way more ignorant and so we're sticking with that. Agile was saying let's throw off those old things and move to these new and part of the, in my opinion, part of the effectiveness of Agile is that these methodologies, these approaches were developed by practitioners of software development themselves as opposed to some management guru sitting in an office with lots of consulting books and text books dreaming up theories. And in the course of these practitioners doing what they did they actually stumbled upon some principles that are actually part of the more sound and healthier parts of management theory. One of the things that Agile forfeited or lacked somewhat was because these were practitioners of programming they weren't too incredibly concerned or immersed in those touchy feely people things like culture. 
So when you, when you look at the documentation for Scrum and XP and those sorts of things they'll, they sort of just take, you know, cross functional participation and, you know, a healthy culture and teams made of motivated and self directed people, they sort of take those as a given. And so these are the things that need to happen and these are the artifacts that you need to produce and these are the technical disciplines that you need to pursue because that's what works. Of course when that movement gained popularity and began its groundswell, out came all the questions from the people who didn't have those cultures in place already. And that's why so many of the original Agile folks are as grumpy as they are. They're about 20 years into answering some really annoying questions that they never thought they'd have to address. Now Lean, so Lean, what we call Lean, funny thing about Lean, so Lean software development, because it came after Agile as far as the software development world is concerned, a number of folks have considered it the new and shiny thing. It's, it's got a, it's, in fact it's one less syllable so it's even smaller. But the truth is Lean actually has a long and storied history that stretches back to around the time of World War II and just after. With Lean, because of its history and because of its roots and what happened with the work, the foundational work of Shewhart and Deming that then fed into the work of Taiichi Ohno and the people at Toyota and other industrial companies in Japan, there was way more attention to the workplace and the environment and the mindset around a spirit of continuous improvement. And so when those elements got ported over to a software development context, primarily back in 2003, Mary and Tom Poppendieck were really sort of pioneering, leading the charge. I'm sure some of you have seen that book, but leading the charge of taking those concepts which have historically been applied to manufacturing contexts and saying, well here's how they translate to a software development context. And initially when they arrived, we viewed them more as a supplement and a complement to Agile, saying, hey, you know how there are certain things that work but we've just never known why? Well it looks like it's because of these things, because of these principles, like trusting people to be smart enough to do the work you hired them to do. Apparently that's why self-organizing teams sort of work when you get out of people's way. But then I've noticed in the past few years, particularly as there's sort of a neo-lean movement going on in some of those camps, particularly where Kanban is a central tool being used, I've noticed this tendency to differentiate and to view Agile and Scrum as these dated, outmoded approaches to doing work that we've really moved beyond. And if you have skinny jeans and a better haircut, you probably should be using lean anyway, because it's far cooler. But the truth of the matter is there's a great deal of overlap; in my experience that Venn diagram is more like this, like almost completely covering one another. And so that's what I'd like to talk to you about and it's what I refer to as that space between. So commonalities between the two. For one thing, they share some pretty fundamental core values. They're trying to accomplish the same thing.
And if you can get past the arguments and the semantic battles about types of boards to use and terminology and that sort of thing, you'll see that there's actually a great deal of similarity. For one thing, cadence. Both the lean approaches and the agile approaches are saying, hey, you really want to get, you really want to have your work structured in such a way that you're delivering work at a constant and repeatable pace and in a holistic pace. In other words, I'm iterating through the process of generating and refining requirements and understanding the intent and the conditions of success. And then I'm delivering on those all the while collaborating with the stakeholder or the representative of the stakeholder about what I'm trying to do. And then at the same time, I have members of my team who are validating, hey, these changes in this volatility that I'm introducing to our existing system are not violating any of the existing conditions of success and agreements that we have in place. And that most effectively happens when you have a regular cadence. And so the agile processes tend to do this via iterations. The lean approaches like Kanban do this through flow and the regular measurement of flow, but not limiting things to time boxes, but more so looking at functions or features and their flow through the team. So cadence. They both are trying to accomplish that, but they do take nuanced and slightly different approaches. And frankly, each one has its strengths given the type of product or the type of backlog that you're serving. And there are times when I'll choose more of a lean Kanban approach and there are times when I'll choose more of an agile scrum approach. Another key value that they share is working in small batches and working in small batches goes all the way back to some of the early work and the foundational work that fed into what became lean, the Toyota production system first and then lean. When you work in small batches, you're minimizing your risk because you're taking, you're saying, okay, for a given change set, I'm minimizing the amount of change that I'm introducing into the system. And then I'm validating that before I incorporate it into the system at large. When people have worked broken into smaller batches, they tend to be better able to estimate it in the first place. They tend to be able to do a better job of testing it and exercising the assumptions and the conditions of success. And when you come to the end of a batch, you're also better able to evaluate how it went. Not only did we do what we set out to do, but how did it go as we were doing it? What complications did we run into? Were there delays or bottlenecks that particularly stick out that we should try and address? And most often that's addressed through looking at things in retrospectives. But the larger the batch, the harder it is to do that. When you have six-month release cycles and it's actually a set of 8 to 15 features that you developed and then you just stack them all up and then the QA folks are going to look at it and regress it. So much has happened since each one of those change sets was introduced. You slept, there were holidays, you might have gotten drunk a few times, all sorts of things have happened that make it more difficult to remember. So those first three change sets, how did those go? And what all did we put in there? I don't know, let's look at the change locks. Oh well, almost everybody sucks at version control commit messages. Oh, that's unfortunate. 
Oh well, I hope the QA people do a good job. Me too. Another value that they both share, cross-functional collaboration. To me, cross-functional collaboration is one of the most exciting things for me to introduce to a group when I actually drive home the point that, hey, what we're doing in this approach to work is we are working in small teams that are centered around a meaningful area of work within our company. And we're including in our team, the team is not just the developers, it's not just the developers in QA. It's everyone it takes to have an idea go from concept to customer. Everyone who has a hand in making that happen has a voice within this team. They're helping us to elaborate requirements, flesh out assumptions that we normally blow right past. They're helping us as we execute to have a robust set of conditions of success defined. They're spotting potential problems before we actually introduce them to the end user. The cross-functional, they're also, also because you have that, you have all of these sets of eyes looking at potential opportunities for reducing waste, wasted effort, wasted motion. And there's, when you have that many different disciplines, when you have all of the involved disciplines looking at a problem and looking at the process, there's this synoptic perspective that just, it can't be beat. No amount of rigor and brute force regression will ever catch the things that you can proactively catch and prevent an address with cross-functional collaboration. So those are some of the shared values. Now, shared dependencies, when people, including me, talk about Scrum and Lean and people say, well, you're really a, because I use a lot of Scrum, well, you're really a Scrum guy and you just use Scrum. No, no, actually not. I've never just used Scrum. I've never just used Lean Convon, because if you think about it, neither one of those is a complete system to producing software. They share quite a few dependencies, but two of the key ones. One is requirements management. Neither one of them does a fantastic job of requirements management on their own. Both of them rely on the elements that have become sort of the de facto standards. I mean, user stories is definitely the prevalent force. I mean, within user stories and specifications. I mean, tons of people are using user stories. I still primarily use user stories, but if you're using behavior-driven development or specification by example, then you have the same intent as user stories, but a little more explicitly defined so that they feed into a good solid test, good solid practice of confirming conditions of success by executable tests. But neither one of those things is actually identified as, well, that's an agile thing. That's a Lean thing. It's something that developed on its own out of necessity. And the reason it's such a good fit for these things is, remember, we work in small batches. One of the shared values of both of these approaches is working in small batches. Well, user stories lend themselves to small batch work because with user stories, we're creating these small, these thin vertically sliced chunks of functionality that represent the work from top to bottom, everything it takes to go from concept to customer to implement this slice of functionality. And we're working in those small batches and incrementally layering on and building out the overall system that we're trying to produce. It's sort of like, I tell folks, it's like baking a cake sideways. 
You're introducing layers like this and then ending up with your result. So requirements management is something that they both share a common dependency there and how big of a part of your process is requirements management. Typically pretty big. Technical practices. It's funny, not as many people are talking about XP anymore. It's almost a, did they play, is Downton Abbey over here? You guys watch Downton Abbey? No? Yes? No? Yeah? I almost feel like technical practices have become like the Lady Edith of agile software development. It's the really effective and productive middle sister that nobody wants to talk about and she sort of gets, that's fine, go drive a tractor or do something, be the post office lady inside the little hospital. But we actually owe a lot to XP and if anything, I think how little XP is being talked about explicitly is an indicator of its success. Because what, which of the approaches would you say, well in this one you don't use, you actually don't use unit testing. You don't refactor in this approach. We don't use continuous integration or you don't ever pair in any of these approaches. You don't say that. Why is that? It's because the technical practices that we associate with both lean and agile processes are this set that's become the de facto standard. And if you look at it, those were born and refined and honed largely within the XP community of practice. So in my view, I don't think it's so much that XP is now not cool. It's that XP's technical disciplines have become the way. It's how you approach effective technical practice within these iterative processes. Now they share, you know, agile and lean share values. They share some common dependencies. I think they also share some common failures. And let's talk about those because it's good to talk about your failures. First, an unhealthy focus on events and artifacts. Ever been part of a Kanban board versus a task board argument or a flame war? Amazing the amount of drama we stir up over those. And I've gotten sucked into it at times. Because when you've had a tool, when you've suffered under other approaches, like if you've suffered under like absolute chaos or an absolutely suffocating heavyweight phased process, when you come into something like Kanban or Scrum and suddenly you have these artifacts that are providing this visibility and insight into how it works flowing through your team, and you start to identify these large swaths of waste and eliminate them, you develop an emotional attachment to these things. This is my tool. It's my trusty sidearm that I carried into the fray and it helped me get through and live. And now I'm here and I have both my legs and arms and I'm so happy and I love it. You know, this is my task board. There are many like it, but this one is mine. One of my friends, Bruce Echoll, is always quick to remind me of the second noble truth of Buddhism, which is attachment is the source of all suffering. When you get too attached to a given thing, you can defend it in unhealthy measures and you can reject similar and even other beneficial things because you are so very attached to this one idea or this one thing. And in both Agile and Lean, I think we've gotten super focused on, like in Agile, we're super focused on our task boards and our product backlogs and our burn down charts and our velocity numbers and the range of integers that we allow when we're estimating story points. You know, in Lean, we've gotten super obsessed with Kanban boards and what is a Kanban board? 
It's half the time I'm expecting the Kanban police to wheel into the war room and say, whoo, whoo, sorry, could you pull over? That is not quite an effective Kanban board. We're going to have to downgrade you to just being a regular cardboard. Oh, crap, what does that mean? Does my insurance go up? We've gotten way too focused on the things that are measuring results and ironically, if you look at what's happened with the hyper focus on measurement, we're doing a lot of the same really dysfunctional things, you know, the sort of management by results things in Agile and Lean that we made fun of the waterfall people for. No, we don't measure lines of code. We just obsess over velocity or our flow rate. Is your flow rate below 70%? Test coverage, you know, if you want to bring in technical practices. What's your test coverage at? Is it 100%? Oh, you suck. Under that, you just suck. You should probably pack up and go home. No, no. We definitely, both sides of the equation are guilty of that focus. Now, there's another key one that really hits home to me, which is the neglect of the primacy of culture. I tell folks, when I'm coaching new clients and we're talking, you know, and I'm going through the initial orientation, I have like this two, two and a half hour session and I typically, I'll arrange this session in such a way. I always insist that the attendees not be all the same team. I don't want the same team at the same time. I want you to put executives and developers and QA people and DBAs and business intelligence folks and business analysts together. I want them, I want different people in the room together when I'm talking about these things because invariably when I get to the underlying cultural elements that are necessary for a successful or healthy Agile or Lean implementation, that is the most uncomfortable time in the entire material because I start talking about motivated self-directed people and I start talking about a failure-tolerant environment. And invariably, I'll have sessions where they just stop, they just stop me and say, question, question. So if we can't have any of that, can we still do this? Because I can tell you like half the stuff that you're talking about culturally, we don't have that here and we're not going to. And I always tell them, yes, you can still do it. Just realize that there's going to be a ceiling or a limit on how far you're going to be able to take things. Because I don't care how hard you try. I don't care how good your task boards or your con bon boards or your burn down charts are or your velocity. If you have an environment where management does not trust people to be able to do the job they were hired to do, or if you have an environment where people who fail are punished, then you're not really going to be able to do that much with this. Because part of the whole point with both of these disciplines is we're taking people and saying, hey, I know we've treated you like idiots for decades, but what we're going to say now is we actually think you know how to do your job. So we're going to talk about goals and priorities, but then we're going to leave it to you to structure the work. So do that. We're going to trust you to structure your own work, but commensurate with that trust that we've conferred to you. What we ask in return is a degree of accountability and transparency that you're not used to. Because frankly, we've allowed you to hide a lot because, well, we're asking you to lie in the first place about how productive you are. 
But now we want you to be super accountable. Well, if you've got a culture where somebody comes to a stand-up and says, hey, so I'm still working on that DAO for the new order path, I ran into some problems yesterday with my link query. It doesn't work like I thought it would. And the next thing, you know, that person's being pulled aside or dinged or that comment that they made actually shows up in a quarterly review. Guess what? You just destroyed every, you know, when that happens, when you have that kind of culture, it undermines and destroys and erodes the very trust and effectiveness you're trying to build. And I think too often, agile coaches like myself and consultants face a bit of a conflict of interest where you have to make money and people are paying you to show up. But you come into places where they say, well, we don't have that kind of culture and we're not even going to do it. So at that point, you're faced with kind of an ethical dilemma. Do you just say, well, I don't think we should even do this? Or do you say, well, let's go ahead and try it. It should be fine. It'll be fine because you have to pay your bills. And I think many of us have caved to that pressure and we've sort of minimized the necessity of having a healthy culture. And that leads me to the last and perhaps the most critical of the shared things, which is a shared challenge. If you talk to any agile coach or consultant who can be honest with you, who's courageous enough to be honest with you, they'll tell you that a startling number of the companies that we work with, after we do that work and we establish those foundations and get those teams going. The number of times we come back three, six, nine months later and it's all but gone is depressing. It's actually why we drink alone in our hotel rooms a lot at night when we travel. And that's because, you know, very often that's because we're brought in by a subsection of the organization to make this process, you know, happen and take root. But the management structure that's in charge overall has not really embraced this. It hasn't really embraced this idea about working this way. So they might think that this is cool for a while, but the minute it starts to infringe upon their approach to doing business or it might threaten their budgetary amount, which is basically their power base, you will see they will be your biggest champion one minute and as soon as it threatens any of that, they will shut you down. They will yank the plug out of the wall unreservedly. And that's really unfortunate. So there's a shared challenge and I think that one of the unfortunate things about some of this bickering and infighting between Agile and Lena's, we should actually be banding together because we have a shared heritage that allows us to address this challenge or begin to address this challenge and we almost never talk about it. And the ability to address that challenge and that shared heritage comes through the work of a man named Edwards Deming. And Edwards Deming is a statistician and consultant who did an amazing amount of work leading up to and during World War II, he's an American, and during wartime, he had great success in introducing his principles and approach to quality and to product quality and to management in the production of wartime munitions, and he had incredible productivity gains, trained thousands and thousands, the estimate is like 31,000 workers in the states at that time at various plants who had incredible rises in productivity. 
And it involved a different way of doing things and when the war ended and the folks who had been off fighting came back and the United States was flush with demand and the whole cheap U.S. products, all of the work that had been built up was largely dismissed like, nah, it's too much trouble, it's the whole thinking and empowering people, we don't need to do that, we just got orders to fill. And the next thing you know, Edwards Deming couldn't find work in the states but guess who needed help? The Japanese because they just lost. And their whole economy was in the crapper, they had a negative net worth, so they had a clear and present crisis that needed to be addressed. Well, they had worked with Deming some during the time that the Allied Forces occupied Japan and he had actually caught the eye of an engineering society in Japan and they invited him to come and speak to them. And it was actually Deming's foundational work, once you look at them, you'll see that everything we talk about in Lean today comes from, his work was the foundation and the philosophy and the mindset that was handed to these Japanese industries, which they then took and developed into a set of practices and approaches largely to manufacturing that led to the dramatic turnaround in productivity for Japan and ultimately the absolute turnaround of their economy. And so Deming, there's so much about Deming, we don't have time to go into it, but I think he's one of the most under acknowledged influences in Lean software development and agile sort of by association that we have. There's a ton of things that he's known for, but one of the key things is called Deming's 14 points. He has a book called Out of the Crisis, which was published in 1982. Ironically, so Deming is a national hero in Japan for his work that began around like 1950, late 40s, 1950. Deming was unheard of in the States until 1980, 1980, 1982. And you know why? Because one of our television networks decided to do a documentary and the title was something like, if Japan can, why can't we? And it was addressing how Japan was doing so wonderfully in the automotive industry and we were losing our shirts. I think it was between 1979 and 1982. Ford Motor Company lost $3 billion and that was still a lot of money back then too, even more so really. And so this documentary was published and it aired and the day after that aired, Deming's phone rang off the hook. Deming was born in 1900. So at the point that the United States finally woke up and decided to pay attention to this man, he was 80 years old. I was a management and organizational behavior student in college and we studied Deming. In fact, Deming died the year I graduated from college, my undergrad degree. It was really amazing because he really didn't become known in a widespread fashion until the 90s. So you've literally got this grandfather who's this late discovered management guru in the States. It's kind of unfortunate. But in his work, I think there's a lot to offer. So let's look at the 14 points, believe it or not, and I'm going to cover 14 points in the time that remains. So our shared challenge and an approach to addressing that shared challenge. One of the first things that Deming says is management has to create a consistency of purpose. When Ford first brought him in, they're like, hey, saw the documentary? Awesome. Good job. So we want you to come in and work some magic here. And what we have is a quality problem. 
He listened to them and then responded and said, no, what you have is a management problem. In fact, 85% of your problems are based on the way you manage things, not on your quality process. And these 14 points started with him saying, it needs to be clear to everyone in the company why we're here. And we're not here just to make money. We're here to make compelling and excellent products. And we're here to develop the people who work with us. We're here to develop them professionally because in developing them professionally, we actually provide the structure and the mechanism for us to continue to make amazing products. And until you make that clear from the janitor to the executive floor of management, nothing that we're going to do is going to ultimately succeed because there will be all these competing interests and misunderstandings about why we're here. Accounting will think we're here to drive costs down. And so part of what they do to drive costs down is things like not pay our vendors on time. Look, we're saving money. Yes, you are. And you're completely annoying our vendors. In fact, we got rejected by three and now we can only buy the crappy iron ore because the really good companies are sick of us because guess what? We don't pay our bills. The purpose and the holistic nature of what we're trying to accomplish as a company has to be clear. Adopt the new philosophy. It's funny. Some of the things that Deming says look obvious and if you read the actual prose around these points, he's pointing out, he's saying, I'm laying this out and now I'm reiterating the fact that you're probably not listening to me. And you are going to have to. But what he's saying here is, look, the world has changed. While we were sleeping, while we were asleep at the wheel, things got pretty dire and now we're sucking wind. Now we're seriously behind and it's going to take deliberate and dedicated action on the part of every one of us to make this happen. And if we're not seeing it at the leadership level, at the management level, I can assure you we will not see it in any great degree at the levels below that. Seize dependence on mass inspection. So in this context, he was talking about, he was initially talking about manufacturing, but in a subsequent book he talks about how this actually relates not just to industry, but to education and government as well. But mass inspection, what he's saying here is basically producing massive batches of things and then manually going through and going, yep, looks good. Yep, looks good. What sucks about that approach is, you've waited until you've gotten all this sunk cost. You've got this big batch that you're doing. You can't go, ooh, these all sucked. Let's unmake them. No, you can't because they're made. It's what we call sunk costs. You won't get that back. What he's saying is, when you decide to produce everything and then have somebody go in and look at it one by one, you've forfeited all opportunities to correct yourself at the time these errors were introduced. And so you see that in lean manufacturing with the way that they construct their manufacturing lines and the way that they have controls for being able to stop the line. If you've ever heard of that, as soon as any employee on the manufacturing line spots a defect, they're able to stop the line and say, whoa, let's look at this. Let's fix this. Why did this just happen? 
In software, our analog is embracing a culture of testing, unit integration and functional testing, and also working in small batches so that QA is able to regress things in smaller change sets. When you build all that up and then look at it, first of all, the brute force attempt to verify that much stuff is so error prone. But second of all, you've got all that sunk cost. Don't choose things on price alone. He was primarily talking about raw materials in that context, but I'm seeing this behavior still espoused and championed today under the banner of globalization. Well, we have our QA being done in Sri Lanka because it's more cost effective. What do you mean cost effective? It's cheap. It's really cheap, is it? Cool. How do you communicate with one another? Well, we have a lot of Word documents and they're 12 hours apart from us, so we never actually see each other unless on video conferences, someone works late one night. Okay, guess what? You're going to fail. The money that you save in whatever the bill rate is, you're going to forfeit in the ineffectiveness of the works not even getting done and the communication overhead that you're suffering by virtue of the fact that you've basically taken your spleen out of your body and shipped it to England and said, I'll periodically ship you some bodily fluids to process and then you get those back to me and I think we'll be okay. Improve product and service constantly. You hear people talk about continuous improvement. That sort of grew out of this statement, but what he's saying here is we don't have this destination of, okay, we've got everything worked out. Now let's lock it down. This sort of approach and this sort of culture realizes that there actually is no optimum, there is no optimum nirvana or perfection that you arrive at. Systems by their very nature and when I'm talking about a system, I'm not talking about a machine. I'm talking about the composition of tools, process and living beings that all work together to produce whatever it is we make. That system, because it involves people especially, is imperfect and so imperfections and flaws and waste while we will be eliminating it and seeking ways to identify and remove it at all times, we're also, because we're imperfect and flawed, we're also on an ongoing basis inadvertently introducing imperfections and waste into the system. And so it's an ever, it's an ongoing cycle of improvement. Institute training, meaning hey, when it comes to skills and what your folks have to know to do in order to successfully perform their job, stop hiring people and just dumping them in and saying, figure it out. People don't just figure it out. People try and guess and sometimes they have to just pretend because there's a ton of domain knowledge and nuance to what it is you do and frankly, if you can't come up with training, what that tells me is you actually don't know that much about your process yourself. You're still immersed in those classical management methods of, well, I've prescribed a method of action and all these people are cogs in my little machine and I just plopped them in and I've made their work sufficiently stupid and humiliating that I don't actually have to train them that much. Bravo. Good job. How's that working out? No, you have to have training. You're training because you've identified what constitutes excellence within your organization and within the practices and the crafts that are involved in you creating what it is you make. So training has to be there. 
You have to care about training. Adopt and institute leadership. Oh, leadership. If there's anything that suffers from operator overload, this certainly does. When Deming's talking about leadership, he has so much to say about this, but he says, leadership is not supervision. It's not just you walking around and looking over people's shoulder. As a leader, as an effective leader in management, you are the steward of the vision of why we show up here every day. You need to be a key instrument in having everyone understand, remember the constancy of purpose, the first of the 14 points? One key responsibility of leadership is you are the steward of that vision. You are the storyteller. You're the bard, if you will, who's passing on to each member of your team. This is why we work this way. This is what we do. These are the things we care about. That sort of leadership is what we need, but if you look at what we have currently, that's very much not the case. Yeah, I probably shouldn't go off on that tangent at this moment. Drive out fear. One of Deming's key assertions is that people can't do their best work if they don't feel secure. In Agilent and Lean, this is one reason we talk about a failure-tolerant environment. I told some folks yesterday, in America particularly, you'll hear people say, failure is not an option. Failure is not an option in your company, then you can be assured that it won't happen. You will not fail. Now, the reason you won't fail is because we're going to hide stuff, because whoever gets caught failing gets punished. As a result, what I need to do is I need to pad the crap out of estimates in case I fail to estimate something properly. If I cause defects, I need to do a really good job of hiding those. As is inevitable, if the things that are being hidden explode, I need to be able to find a new job fast so that I can get out of here before the axe falls. That's what happens when you allow fear to remain and to be built into the way that you're managing people. Deming's saying, that's got to go. If you want these hyperproductive, ultra-collaborative people, then they need to be not punished for taking risks. If anything, they need to be rewarded. It needs to be okay to say, hey, periodically, if not somewhat frequently, because I'm pushing the envelope and I'm trying new things, some of the new things I'm trying won't work out. That's not necessarily a bad thing, because learning is occurring. I'm learning what doesn't work. But that's all too rare. Breakdown barriers between departments. Deming's emphasis on this is what leads us to make such a big deal about cross-functional teams and cross-functional participation. Because it's when you have all of those folks involved, that's when you can start to spot things proactively. That's when you can address potential concerns. You're in the best position to address potential concerns when you have all of the people involved and you have that breadth of perspective looking at the problem and looking at the process. Decades of our work in the corporate world have celebrated, if not pursued, continuing to entrench and separate those barriers. Phased development was one of the greatest. If you look at the little boxes in those phase development diagrams, it's departments. This department is going to do everything they're going to do and then they're going to throw it down to you. You have to operate on all their assumptions and then you're going to add in your own assumptions and throw it down to these guys. 
We're going to keep doing that until we spit a few million dollars out the back and hope we did a good thing. I love this one. Deming has a – if you read his books and you look at some of the diagrams he includes and some of the dialogue that he writes into his examples, he's got – he's pretty dry sense of humor, but eliminates slogans, exhortations and targets. The thing that he usually really slams on are those motivational posters and motto is like, do good work. Do your best job. And posting that everywhere. Do your best – did I need to be told to do my best job? Oh, really? God, because you know what? I totally came in here thinking I'm going to suck today. Wednesdays are always suck day for me and I usually come in and break as much crap as I can. No. And when you put that stuff all over the place, what you're telling people is, I think you're an idiot. You're so stupid I need to put in large posters, large demoralizing posters on the wall to remind you to do your job. No. That has – it's so moronic. It has the diametrically opposite effect of what it's intended to do. That's why – I don't know if you guys get them over here, but thedyspare.com in response to these successories line of motivational prints, where all the hands are in there and teamwork and collaboration and the snowball and we all work together and all that crap, they basically take every one of those types of images and then make an anti-pattern of it. Like there's a ship that's sinking in the water and it says that it may be that your life is just to serve as a warning to others. They kind of – they counteract those posters. It's a response. It's a natural and frankly healthy response to, hey, I'm tired of being patronized and talked down to as if I needed to be reminded to that I should do good work. Eliminate numerical quotas and goals. Again, this is one that – numerical quotas to staff and production workers and goals to management. I hate if you've ever had to do this. Sometimes in management, the overall regime will impose upon you. We had the following goal. You have to have – there are a certain number of people who can't – you can't have more – oh, God, what was the one? You can't have more than 25% of your people get excellent on their performance reviews because then you're not being critical enough. What the hell does that mean? So if everybody's doing a good job, I can only award one fourth of them. How fun is that to tell your group, hey, everyone, so good job this year. I'm glad we all pulled together and helped each other. But it's review time and I can only let four of you be acknowledged for your excellent contributions. So it's going to be a little bit like Hunger Games, so I hope you're okay with that. And let's have a great next year. What? No. Stop doing that crap. That's that whole management by quotas and management by results stuff. It's so undermines what we're trying to do. Remove barriers that rob pride in workmanship. One of my favorite things, one of the reasons that I keep – that I have kept doing what I do for years is because I have – year after year – I'm not saying it happens with everybody who's on teams that I work with, but I will have someone or someone's from a team come up to me and say, you know what's so cool about working with you is I never knew what my work did. I always just wrote this code and I checked it in and it went somewhere and QA handled it, but I never knew what the crap it did or why people even cared about what it was that I wrote. 
It wasn't until I got in this team with product managers and QA and business analysts and business intelligence folks that I actually started to understand what my contribution to the whole actually meant. I now actually care about what I do. I can actually point out and say that right there, that part of what we do as a company is because of me. That's what I do. When you break down work and you compartmentalize it into these stupid barriers, these stupid little buckets that allow you to dumb work down so that you can prescribe everything to everyone, you actually tell someone, guess what? I want you to show up here every week and bust your butt, but I am not going to give you any idea of how that contributes to what we're doing. I'm not going to give you any sense of fulfillment in a job well done. Deming is saying anything that gets in the way of that, you've got to destroy it. For example, like the insistence, no, you can't use tests. You can't do tests. We're in a hurry this time so you can't test what you're doing. Great. So what you're telling me is I need to bust my butt and work a bunch of hours every day, but I can't validate any of the assertions in all of this stuff that I'm coding in an absolute mad rush. Fantastic. I feel so good about what I'm doing. What you're doing there is you're creating a system that's enforcing cognitive dissonance inside those motivated people every day. They want to do a good job, but you're telling them I am creating a structure for you to work within that no matter how good you are and no matter how disciplined you are, I have created a framework that will ensure that you fail at that and that you can't apply your skills and your passions. Wrong. I can't do that. Encourage education and self-improvement. I can remember one assignment that I had where I had a department and they had a training budget. The training budget was they allocated a certain amount per team member per year. I saw that and I was like, okay, good. I started telling folks, let's pick out conferences throughout the year. There are conferences that you all care about. We obviously can't afford to send everyone, so I want you guys to divvy up delegation. Two of you will go to this conference, two of you will go and you can come back and we'll have brown bag luncheons and give experience reports so that you can share that knowledge with the rest of us who couldn't be sent there. I started sending people out to these conferences and then I got pulled aside by a C-level exec who's like, hey, I'm seeing all these expense reports come in. People going to conferences and stuff. I'm like, yeah, yeah. What's your question? Well, we really don't have that money to spend. I'm like, what do you mean? It's in the budget. We put that in the budget, but typically halfway through the year we're out of money overall anyway and so we just tell them, well, the budget's used up. Sorry, we can't send you. But we have that allocated so people feel like we're caring. And he wasn't kidding. You know, I was like waiting for like, ah, you know, but he was dead serious. Like he never flinched. And he wondered why we had turnover. You shouldn't just be planning for people to attend conferences. You should be encouraging people to attend conferences. Don't just send your people to come. Encourage them to submit talks. 
Maybe they won't get selected, but they go through the process of what it means to actually come up with a topic or a presentation and submit it and go through that feedback process and, hey, maybe theirs doesn't get selected the first year. That certainly happened to me a bunch of times. But then maybe in a subsequent year, one of their ideas does get accepted. Guess what? The presentation skills and the training skills that they develop as a process of participating in that conference can be used to train people within your organization. You're training and mentoring and improving people just by virtue of letting them go to things they want to do. You've got, you know, companies have got to embrace that. And we in the Agilent Lean community, I mean, we're in a position to emphasize and underscore that. The last of the 14 points, I told you Deming has that sense of humor. His 14th point is this. Take action to accomplish the transformation. He says, management and authority is going to have a problem with every one of the last 13 points. But the way that we're going to overcome their problems is by ensuring that we develop a plan of action to start doing this now and make sure that it happens. That's what makes me so bombed, really, when I see the lean and con-bond, or not lean and con-, the scrum and con-bond fights. I mean, even some of the leaders in those movements have, you know, turned on the flame jets on their blog at each other. But, you know, within the industry, I mean, if you look at the traditional phased approaches, there's no hope of this ever, any of this ever happening. If you look at the way that the work is structured and if you look at the way that the management is structured, there's nothing to serve as a catalyst for making this sort of thing happen, barring like them inducing a very large crisis. But if you've been around long enough, you've seen that we humans have developed a large capacity for being able to ignore crisis. So those of us in the Agile and Lean communities that are working, you know, that are embracing at least part of these principles, we're the ones in the best position to be creating the companies and environments that allow this sort of thing to take place and to serve as the examples and the proofs of concept to management at large to say, look, this is not just something for us. This is the way we need to work as a healthy company as a whole. So it's my hope that rather than begin to diverge as a fragmented community of practice with factionalism and infighting, it's my hope that we will converge and start to look at, hey, what are the ideas and the principles that led to us coming up with this in the first place? And what else, you know, what more is there? What do we need to be doing in addition to these technical practices and these technical processes? How do we need to be developing our people and structuring and creating and fostering a culture that makes it so that this stuff is not this one exceptional little group that just happened to come about through a set of weird circumstances, but instead have a company where there's an entire ecosystem of teams that are taking these creative and proactive approaches to structuring their work so that people can have joy in what they do for their job. You spend far too much time at work for it to suck that much. Think about it. So that's pretty much all I have. I publish with all of these talks. I have a companion document where these topics in this outline is written out in narrative form. 
It also contains some bibliographical citations and points for further reading and exploration if you're interested, and it's at that URL. And my Twitter handle is Barry Hawkins. Feel free to hit me up on Twitter, or if you have feedback either way, there's the tag to use: "that guy sucked". You just put that right there and it'll show up to me. But thank you for your time, and be sure to use the feedback mechanism on the way out. Enjoy the rest of the conference.
For years, one could assume that any development group trying to improve its approach to software was applying some form of Agile Software Development methodology. In the last couple of years, Lean Software Development has seen a surge in popularity. Some groups adopting Lean perceive it as an alternative to Agile processes like Scrum, and express a sense of elitism and superiority, having "moved beyond Agile". While Agile Software Development and Lean Software Development originate from different sources, they are actually closely related and share a great deal of overlap. In this talk attendees will discover where each came from, and just how much of Agile and Lean are shared in the space between.
10.5446/50958 (DOI)
Trying out different ideas you climb several hills to see how high they go and Find out which one goes the highest Some people like to express this as having a design funnel where you pour in lots of ideas at the top Lots of different possible ways of doing it Now understand a lot of people hear this and they go we don't have time for that because that sounds like it takes a long time No, this is a process the entire process. I'm talking about Consumes a few days at most this isn't a process spread out over weeks This is something that could be accomplished and the clients that I work with in less than a week It's never taken more than a week It takes as long to go through the understanding phase as it does through the storyboarding phase So the design funnel is one way of looking at it pouring lots of ideas You'll look and you'll reject some of them and then hopefully the best ideas will survive and come out of the bottom of the funnel In fact in the real world. It's a bit more complex than that They're typically our phases you start with a few ideas and then some of them branch out into more ideas So you just throw a lot of ideas in the in the funnel But then you begin to evaluate and you realize well some of these really don't work very well But this looks like it's going in the right direction That's fine, but this one might have a couple of possibilities and now let's expand some more and then of course We'll cut off some of those and continue to expand some of the ones that still look good So in reality our funnel tends to do that over time you pour stuff in Screen down then you expand again screen down, but again this this doesn't take a long time This is a process that happens typically in days This process goes much much better if you are doing collaboration I Have a wonderful team back in Nashville. 
They are they are exceptional people and in many respects There are many parts of the design and development process for which they are better than I am I'd like to think there are a few things that that I do better than they but in general they have extraordinary strengths and I find that if I have the opportunity to work with them On a project and I always suggest this to our clients then as a group We will generate 10 times as many ideas as one of us operating alone There's only three times Three times as many people, but we get 10 times as many ideas because what happens during collaboration is that a person Typically comes up with an idea that's not very good And it's immediately recognized that it's not a good idea And it's immediately recognized that it's not very good But it sparks a related idea that is good in another person and That's where we get the multiplication of ideas as people combine their ideas and we have a bigger pool of experiences and and The things that people have been exposed to to draw upon Now one of the most important aspects of this to understand especially when we're talking about the funnel and and collaborating to get a lot of ideas One of the difficulties that people in the development world tend to have And it comes out of this one prototype thing that they don't get to produce as many ideas as they'd like So they tend to get emotionally attached to their ideas You had people on your team that did that they come up with an idea, but other people didn't like it very much But boy, they were just You must not do that and the best cure for not getting attached to an idea is to have a lot of them That way you can if one of them turns out to be bad what you still have lots of good ones over here In fact, I find that of the ideas that I take I produce for my clients Typically about 30% of them are good and applicable and For whatever reason the other 70% don't make it through into the final product They look at it. Maybe they like it. They think it's infeasible. Maybe they just don't like it Maybe they like it, but they don't think it applies to them My batting average so to speak and the American baseball, you know in American baseball 30% is a good batting average Well, that's the way you need to look at idea generation If 30% of your ideas are good, you are a star in this space If 20% or 10% are good, you are contributing a lot of value to the process So don't get overly attached to your ideas and the easy way to do that is to dump lots and lots of them into the funnel and Especially early on when you're doing some collaboration with people To stress this to people who don't understand when I'm working with clients I often I often start off the explanation of an idea I have with you know I don't know if this is really a good idea for this situation, but what if we did X? Just throw it out there with the presumption that maybe it's going to be rejected and That way people don't feel like you're trying to force it down their throat. They'll evaluate it and give you an honest Feedback on whether it's good or not. So come up with lots and lots of ideas now coming up with ideas is of course in the category of what some people would call creativity and there is unfortunately Some severe misunderstandings about creativity in the world of of In the world of software development, where's my notes on this there it is We our educational system at least this is true in the United States. 
I do not I cannot tell you whether or not it's true here But when we talk about creativity in English in America, we have a pretty big problem The word creativity means two different things And there isn't a careful distinction between them so you can tell me if this is true in your school system as well Creativity is usually talked about in the age of educational system as something that artists do Songwriters and painters and sculptors and novelists It is not discussed in terms of people solving business problems Now certainly creativity is a very important thing Now certainly creativity can can apply to any area of life But our educational system tends to think that creativity is the province of those You know artists sort of folks the flaky guys that just come up with these black-brained ideas Is that the impression that you come out of school with? That's the way it is in America too. I don't know why that is that is true First of all, I want you to understand that that I I go out and talk to teams all the time About the design process and I get them involved and in my own personal experience among the developers I work with 80 to 90 percent of them have tremendous reservoirs of latent creativity ready to be used and And what they need is number one to realize that they can and number two to be kind of led and encouraged into it and The results can be amazing. I just finished a project For a very large company a company that produces software that has 9,000 users and they have struggled now for eight years on the most recent version of their product and Never really matched the software to what the users need to do and it was the usual sort of case where I mean they had agile And they had all that stuff But there was the disconnect between what the developers were doing and they weren't given the freedom to innovate and they weren't given They were kind of told what to do They were told what the goals were in a fair amount of detail and they accomplished them as efficiently as they could But there was no room in their process for creative thinking or design and Realizing that some new managers brought me in to spend about six weeks working with them at the end of that process the two the two main leaders of the team were spewing ideas out like you would not believe and After eight years they had they had a system that nobody liked they spent a year and a half doing a Wpf XAML version that nobody liked They would take it to the users and and and the users were going no no just leave it the way it is we After six weeks of producing new prototypes and rethinking their entire process They took the new version to the users and the first three offices all said could we be the pilot projects for this? And That's these are just these aren't trained designers These are just Framework level developers. They're advanced developers, but they don't really have it. They didn't really have a design background but in I vividly recall the last three days that I sat in a room and worked with them to develop ideas for the second round of prototypes, you know how we had already gone through one thing in the funnel and now we were bringing it back out again for another phase and The Trent the difference in that time from the very first time I worked with them was beyond belief in terms of how they embraced the process and generated ideas, so I tell you that 90% of the people in this room can do it And if you think you can't you're selling yourself short So that's the most important point about creativity. 
The second point. I want to make about it is this Think back for a moment over the times that you've done something in your code You've created some innovative idea That you came up with yourself a new way to apply something How many of you can think of something like that you've done in the last three or four years hands, please? Do the rest of you just grind out code turn the crank and that's all you do Because I would normally expect to see three quarters or 80% of people do that Okay, let's see those hands again because I want to address you people. Okay, hold those hands up. I want to ask you Are those the things that you look back on for the last few years as The times that you were most satisfied happiest most proud of in your job. Yes, is that true? Okay, now imagine Imagine that you could multiply the number of such experiences by a factor of 10 or 20 or 50? Well, I tell you you can do it. I've seen people just like you do it Creativity is not just for white brain flaky designer types It's for everybody and there are advantages you have over those people that they will never be able to emulate You will you understand your business's processes The things that make it go better than any designer is ever likely to Designers work extremely well in the consumer space because they are consumers in that space they tend to turn out extraordinary results Apple's designers would be a classic example But designers seldom have the patience or the background to understand the business domain with the completeness needed to do good design there You have to do that. You already have that up here so when it comes to creativity, I tell you that you can do it and If you believe that you can't well, I think you're just holding yourself back See if there was any more points. I have to make about that. I have an entire half an hour rant about this Okay, I can preach about this for half an hour, but that's that's probably That's probably Enough for this particular section. Yeah Yeah, I want to stress that design creativity is a skill Look, how many of you do snow skiing? It's popular here, right? Do you remember the first time you went? I? Do I vividly remember it. I was 32 years old first time I skied I Went on the slope it was 30. It was just above freezing two or three degrees Celsius But it was good skiing weather But because it was warm and I didn't bring gloves I Didn't know I was stupid and I went out to try of course. It doesn't matter what temperature the air is The snow is always freezing So I would fall and my hands were cold. I fell 17 takes straight times getting off the lift it had a very short Turn thing and I didn't know what I was doing and I fell going down the hills I Was about halfway down the hill I've been doing this about two hours And I was ready to give up my hands were cold and And some six-year-old kid went by me just screaming Down there and I just gritted my teeth and got up and did it and I didn't fall again the rest of the day There was just the turning point I got over that now I will never be a great skier, but I do it well enough to go out on blues or not too bad But do you have the the green blue black thing here? Okay, I ski blues almost all the time blacks every once in a while, you know the noobs on the greens, right? 
You don't want to be you don't want to be there So I will never be great at it, but I can do it and I can enjoy it and I can be good enough to Say that I am a skier That's the attitude you should go into this with you do not have to go into it with the attitude that you will be an Olympic level designer you just need to get down the slope You just need to be able to get enough speed to do it reasonably well Keep on your feet. Don't blunder too much. Don't hurt yourself That's the that's the level you need to be at in the design world Believe me just being at that level will put you above other developers Head and shoulders Above what they can do in the creative space So don't think you need to be a pro at it you just need to be okay and Everybody almost everybody can learn it at that level just like almost everybody can learn to put on a pair of skis and ski to the bottom of the slope All right, that's that's enough preaching I think So in the create in the creativity space one of the things I like to stress is the need for Rituals there is a great book on if you're interested in the philosophy of creativity as Integrating as part of your life There is a woman who is a dance choreographer in New York her name is twyla tharp Pharp It's an interesting an interesting last name you put her name in Amazon. You'll see her book Twyla tharp talks about rituals in doing creativity that is I Find that it helps me a lot To put myself in the right frame of mind to know that now I'm putting aside my other work and I am doing I'm doing creative thinking. I'm doing storyboarding I'm doing sketching my preferred tools are here But this is just me I Like index cards blank unruly index cards of this size When I get out my stack of index cards and My wooden pencils See in every day you in every day work at my desk all I use are pens for everything I do When I get my blank cards and my pencils That's telling the part of my brain to light up that Does that kind of work that does that creative thinking I? Even go to a different part of the house. I get away from my monitors. I Go to anywhere. There's a flat surface because I don't need anything else when you're doing design This is all you need and you can do it anywhere as long as somebody is a pestering you You know the English word pestering So She talks about coming up with your own rituals if you find yourself Tending to do a certain thing to get in the right frame of mind you should encourage that I find that that's extremely helpful and And in making me more productive during the design process We'll talk more about the index cards a little bit later So let's get into the storyboarding in some detail here. What is storyboarding in general? Well the the discipline of story of storyboarding goes back to movies. Have you seen storyboards for movies? Where the filmmaker says okay in this scene here's a quick snapshot of what that scene looks like and a description And here's the next one and he does it linearly through a comic book is a commercial version basically of a storyboard Well in the development space a storyboard contains crude sketches of the different screens That your application will have or if you insist views and What the relationships are between them? 
It's not really about the detailed layout of the screens You typically don't put all the elements that the screens will contain just enough so that you can Easily see yes, this is the screen that looks up customers or this is the screen where we Validate an order or something like that just enough to see what the screen is supposed to do It's more about the interaction than the layout you want the layout to be minimalist But you want to be very explicit about the relationships between the screens. Here's a typical Level of a storyboard might yours might be more complex than this But this would be a fairly typical example because you don't necessarily want a storyboard the entire gigantic application out at once You are typically focusing in on a part of the application for storyboarding You may have a early if your system is complex enough You may wish to do storyboarding at the level of how do we manage all of the pieces? We don't even know what all the pieces are yet, but what is our overall navigation? Experience like and do storyboarding of that and then when you get to individual pieces do storyboarding of them as the first stage In designing them so notice that these are pretty rough Just basic ideas about what you're doing and what screens you can go well how you can go from screen to screen That's the typical level of storyboarding now. There are several types of storyboards And it's important that you do try to get the right type for that fits the particular Problem that you're trying to solve now remember you're doing more than one of these when you approach the problem do one Go yeah, that looks like that might capture the essence put it aside Do some do another one try to come up with a different way of looking at that problem Put that one aside try to get to three if you can or four You're pouring ideas in the funnel You're filling the funnel up. Don't worry about whether they're any good or not at this phase just generate a lot of ideas Now there are as I said several types of storyboard the movie storyboards are simple sequential storyboards There are software situations where you would use such as such a storyboard a wizard would be an example ah stepping users through Booking a flight on a travel site Would be in what would be a problem space where you would likely use a Sequential storyboard because there are certain steps that they must go through There might be one or two options. So usually I find that even when I do a linear storyboard There's maybe one or two places where I go. Well in this case, there's this one extra thing that they have to do But if I go very far beyond that then you get into branching storyboards That depending upon what they choose at early stages, they might do different things or there might be loops This this is an example of a branching storyboard here Where there are different activities different kinds of relationships different paths that the user might take to accomplish something And if that gets complicated enough, then we go all the way to a state transition diagram that I mean you guys do state transition diagrams now for for coding. Don't you well this that you take that same Pattern thought pattern and impose it on the UI and you have a state transition diagram for the user What view are they on right now? Why are they there? 
What other views could they go to and under what conditions could that could they go to those views and you storyboard that out as a state transition diagram and Then there is the narrative storyboard I like to use these by the way just because they have a human field to them for all kinds of software But they're especially important for software where the software has to deal with something in the real world Think of a situation where? We I worked on a system where? The user had to scan checks it was for medical offices and things About ten years ago when we first went to electronic images for checks before that They had to take the paper checks to the bank and deposit them and then they changed the rule so they could just send the images So the software that allowed them to scan the checks and hook it up to the patient and all of that had the scanner Beside the computer therefore the process of storyboarding that app needed to capture that External environment that is a part of the application flow? How many of you think you have software systems where that's the case? There's something outside the computer that people are that's that's involved in the the process that people are doing Yeah, that's 20 25% of you in that case these kinds of storyboards Are very very helpful as I said we call them narrative storyboards. That's the last hype down there now notice These are crude man. These are these are matchstick drawings here You're not going for artistic merit in these things This is to just enough to get the idea across both to your for you to develop the idea yourself To gain a complete understanding of it and for you to communicate the idea to other people so they can evaluate it refine it and improve it So it's not to it's nobody has to see these outside your group for the most part So they don't have to be pretty now if if you're in a situation and sometimes you are Where you have to go to a higher level of person to present the ideas Now you might want to invest a little bit more work in it, but you still don't have to be an expert artist You may tell you what you do The things that go in that are in the environment that are involved take photographs of them and then put some tracing paper over them and sketch the outlines on tracing paper no artistic talent required Then take that piece of paper and scan it and now you have a nice looking sketch of that object But you do want it still to be handwritten To look handwritten. There's a reason for that and we're going to talk about it in a few moments Okay, so is this clear now questions you guys you guys okay with what I'm telling you Four different kinds of storyboards use different ones at different times depending on the kind of application that you're tending to do Now what what are the techniques? What are the what's the medium that you use? Well, you can do storyboards on index cards as I said this is my favorite way to do it and Typically what I'm doing see some people like smaller screens I like and there are of course different sizes of these you can go to to the smaller size if you wish for the for the individual screens But just this seems big enough to me to sketch out on a screen But not so big that I'm going to go into too much detail. So this feels right for me But you might like something smaller or you might not like index cards at all But that's completely up to you. You should try some things and see what I like about the index card thing and I carry a supply of these everywhere I go do some sketching on it. 
They're cheap if I you know if I get like the halfway through the thing and realize now This is no work. Then I just throw that one away and don't worry about it. I Pull out another one and do the next one It doesn't bother me to Dispose of one and that's the attitude you want to have That everything you do Is potentially disposable That means you don't want to invest too much in it You don't want to put a lot of effort into making the artistic merit high That's too much investment for something that you may get halfway through and throw away So index cards are one of the the mediums that that I like to use and as I as I mentioned earlier I far far prefer pencils now I don't like telling anybody what to do and I like being dogmatic about anything I'm suggesting that you try pencils out and see if you like them instead of pens And I'm not the only one in the design space that feels this way I put away pencils when I finished high school and didn't use them again for 30 years All I used was pens Until I began doing a lot of sketching and design there's something about pencils is just better So I encourage you to get yourself a supply of pencils and a sharpener And try them out for design. I encourage you to do that Probably the next most common way of doing it It what's what's wrong here you off the wrong thing there is storyboarding all over the world This is especially common when you are doing it collaboratively as part of a group You can do storyboarding on index cards as part of a group too Gathered around a conference table and and there's something tangible about the index cards where people can move them in relationship to one another By the way, what I'm doing that one of the other tools I like to have is sticky notes because the sticky notes I can use for relationships to kind of stick them together a little bit So But if you if you're doing it as part of a group some people do like to do it on storyboards My partner prefers this so usually when the three of us are getting together We do our storyboarding on whiteboards We're just marking the thing out again at a very very crude level And then when we get done we take make sure we take a pretty good resolution picture of what you have Pretty good camera there a cell phone camera if you've got a good cell phone camera you might get a picture good enough But if you've got a better camera, it's even better. I always keep a real real camera handy. I'm very fond of my travel camera You're in there aren't you? Or did I leave it on the desk? I may have lifted on the desk because I took a couple of pictures for this this presentation with it Some people do like to storyboard with nothing but sticky notes they get some large surface poster board or something And they draw their their screens on the sticky notes And and you know they have lots of sizes of them So there's a size about like this That some people really like to do their screens on almost the size of an index card But it will stick to where you put it And then you can move them around and you can draw on the poster board the relationships between the sticky notes I've seen people who like to cut the sticky notes into shapes To indicate things we talked about getting away from the rectangular thinking Some in my last session. Well, if you want a curved shape in your storyboard, you can just take out a Probably you know you haven't done that since what second grade? It's okay. It's sorry. You can still do it. You know how I'm pretty sure that they do teach that here, don't they? 
Cut or cut pieces of paper with scissors So you know how to do that they taught you that school So you can do that with sticky notes and then stick those shapes all over the place The fact that it's easy to move them around but they'll stay where you put them is good And what what the the times that I've done this? I like to have two different colors one for the screens and one for the comments So if I'm trying to put a notation of Not sure if this is whatever then I'll write that on a different color Sticky note that helps people interpret them more easily because That's that that that's the gestalt principles we talked about Yesterday The color helps people group naturally. So storyboarding with sticky notes is another another good technique When you go back, I would I would suggest that you try an exercise You got something coming up you can do this by yourself so you're not embarrassed Nobody has to see okay, you can be like it's if you see the movie space balls where Rick Moranis is doing the little thing with the With the action figures and somebody walks in on him and he's very embarrassed because He completely improvised that scene. I don't know if you know that Mel Brooks told him, you know, we just want you to be doing something that you'd be embarrassed and he just he came up with the dialogue of seducing the princess with the action figures and all of that And then the guy walks in well, just make sure nobody walks in on you or Cover it up or something if you want the first time you do it You can try it alone and not be embarrassed. Just do an exercise pick some task flow That you are going to work on pick one of those storyboarding types that you think matches it Pick one of the different kinds of mediums that I just told you about and do at least two designs Put the produce the first one put it aside and produce at least one more Just try it How many of you will go back and do this for me that you will try this? Come on I'll feel much better if I see more hands. You'll really you'll do this for me. Okay. Go back and try to see you've promised me now I won't know But you will know That you are a dishonest person if you go back and go Yes, I just said that to make Billy feel better Okay, so go back and do an exercise now There are tools for storyboarding and this is where developers like to go As soon as somebody says, you know, we need to do more design. We need to do more storyboarding First thing they want to do. Well, let's get a tool for that so I can run it on my screen Look man There's a whole world out there. You don't have to do that There you don't have to look at the entire world through that little monitor that sits in front of you so These are okay, especially for the later stages of storyboards Also, when you have a geographically distributed group The tools are an excellent way to do storyboarding in that case I think the tool that people use the most is called balsamic. I bet we've got some balsamic users in here. How many? Yeah balsamic is not a bad tool at all and I don't wish to dissuade you from using it But I would like to suggest that you try the more tactile Doing it with real paper and pencil at the earlier phases of storyboarding before you immediately jump into this thing But balsamic is pretty good At as I mentioned, especially when you have distributed Teams that you need to do storyboarding with because it is a web application You go to balsamic.com and you can start using it right away. I think it's still free, isn't it? That's not much. 
Is it not free anymore? Not free anymore. It used to be back when they first started um My partner is a huge fan of balsamic. He really likes it and uh So so usually when we reach a certain point in the storyboarding if he's going to go forward He will then jump into balsamic The other tool that some of you may have access to is sketch flow, which is a part of the expression blend product or at least some versions of it Uh, and it's it's an interesting storyboarding alternative in that it's much more functional when you actually do a a mock-up in In sketch flow, it's more operational. You can click things and it goes from screen to screen very nicely It's got a lot of that that thing right there will scroll up and down when you run it. It actually is a list box It's it's the same. It's actually the same list box That would be in a real application. It's just got a template on it to make it look handwritten That's all they do is make make it look handwritten. So I don't suggest going there at first I don't want you spending too much time figuring out figuring out the technologies To make this stuff work, but at the later phases of storyboarding this can work pretty well Now notice what both of them have in common is they have that crude Handwritten kind of look to them. They don't look too finished That's very very important. It's not an accident that they both look that way the reason is That when you put handwritten sketches in front of somebody the reason you want to be fairly crude the reason these tools look that way Is you want your users to understand that your ideas are disposable That if they're not good, they can be thrown away You want to make sure that their their mental understanding is that there is not much investment in this They haven't spent a huge amount of time making This thing for me because the more time they believe that you spent doing it The less likely they are to tell you i'm sorry. This is trash get rid of it and start over Most of you have experienced that situation where you produced a finished screen In your application and you took it to the users and what did they think? What did they think Well, if presuming that they liked it, okay. Yeah, many cases. They do think it sucks But if if you produce the screen that matches their understanding, but there's no logic behind it whatsoever When they see the screen, what is their mental understanding of where you are in the project? That's your about done You may be only three percent finished, but they Psychologically they think you're much further along you kill that with this When you when you show it to them in a in what obviously looks like a hand-drawn way They no longer believe that you are very far along mentally They now know that you're in a different place. So you don't run into that problem Also, you don't run into another problem That's very very common. Uh, I find when you do the single prototype Let's finish and show the user of the screens, etc. Have you ever been in the situation where you demonstrate your single prototype? and now the users are Arguing over colors and this thing should be moved three pixels over and these two should be flipped and they get into fights over this Because one user says once who has experienced that? Okay Would you like to avoid that forever? Because if you take people multiple ideas in which it's clear that there's not huge amounts of investment They cannot and will not argue over trivial things anymore. They are now discussing Which approach matches their job? 
See if you take them a single prototype and you ask for help, they're going to give you some help You've asked them for it. They're going to say something, but you haven't given them any choices Except for very limited ones. So that's the ones that they take If you give them a bigger range of choices a bigger space to choose from They are more likely to add more value by choosing something that will work well for them. They can recognize. Yes, this approach actually that feels good In terms of the way I do my job But this one just doesn't They're adding value at a higher level. They stop arguing about minutia. It's um, I can tell you the evaluation experience Is far more pleasant with multiple ideas done at this stage than the single prototype with the screens that look finished and the users argue over trivia So that's a that's a big side benefit right there So, uh, yeah the the storyboarding tools Can be used. I don't as I said Don't do them at the first part of the process and I will be honest with you. I don't use them at all My partner as I said likes balsamic. I tried sketch flow. It kind of imposes certain kinds of interaction patterns Kind of a website navigation pattern that is suitable for some applications and isn't suitable for others So I like to be more flexible about interaction patterns, but for the kind of software you you do these might be a pretty good fit Okay, but I find that they're more about layout than interaction actually and that's I guess that's why I don't worry about them because I'm so so concerned about I'm much more concerned about Layout or interaction than I am layout Then we talked about this a little bit before that that when you first start this You stare at that blank sheet of paper and don't know what to do So you need to force yourself using constraints to develop and try alternatives One of the primary constraint that I really strongly suggest you use is that when you do Any single if you have explored and I Goal and come up with one idea never ever be satisfied with that one idea no matter how good it seems It may be really good, but take the time to put it aside and constrain yourself to develop a new one that isn't like that one You might be surprised at how often you come up with something even more difficult than that You might be surprised at how often you come up with something even better um, and then there are lots of other constraints that you can use I talked about one in the uh session Yesterday where you take just pick a shape at random and say I'm going to create a design around that shape uh, and then I also talked about my my deck of cards And I I need to get I'm going to get this on my website So you can download the word document and put it on business card stock and print out your own little deck of cards The deck and this is just a game I play when I teach week-long classes People are doing part of the process I will ring a bell at some point and now everybody has to pick a card out of their deck and do whatever the card tells them Just to force them out of whatever rut that they've dug themselves into Do something different So that's that's my card deck with various kinds of ideas to challenge them Telling people to increase white space by 50 percent that's That's a really good one. 
I find among developers because Everybody wants to crowd everything on those screens forcing people to rethink that is is often really good and then That's maybe my favorite right there You've just acquired users who are six years old What would you do to your design to help them? It's surprising how often that ideas about helping six year olds turns out to help adults too And then Um, where is it? The one there that is an invitation to you to put yourself in another person's mind and look at your design From the the perspective of another person Now I told you 80 to 90 percent of you are able To do this sort of thing the 10 or 20 percent who can't I can tell you what the primary characteristic that unifies them is They lack the ability to look at the world through another person's eyes You know people like that, don't you? They think that whatever The way they look at the world everybody ought to look at it that way That is the antithesis the opposite Of the way a designer must think a designer must get inside the head of the people that he is designing for Look at the world through their eyes So that's the constraint of forcing you to say evaluate your design through another person's eyes and see what you can see And then once you've finished going through storyboarding and generating lots of ideas and going through your funnel You do some storyboards evaluate those pick the best Maybe expand the idea space a little bit more at the end You'll end up with a small number of things that you wish to push into the next phase of design Three is fairly typical three approaches that we find That we want to push forward from this stage Now we want to find out something about how is this actually going to look in the real world? And before we begin coding we go through an illustration phase We find this is much much faster than going through a coding phase first We use a tool like photoshop or correll draw. I've used correll draw for a long time. Adobe illustrator. There's also a product from um Who makes autocad autodesk? They have a product called sketchbook pro That is not bad And all of those products can be used to generate things that are going to look a whole lot like the final screen With the you don't have to work too hard on the colors We usually make them fairly neutral But you do get a pretty good idea of the overall design and and we go through one cycle of that of illustration Of pretty quick cycle before we get into coding. We find it saves us a fair amount of time in doing coding later You don't want to over invest here just enough to get the visual field since the storyboarding is not doing anything for visual field Visual field. It's primarily about interaction to a limited extent about layout. Now. We're more interested in layout And visual field So this is where you invest some time worrying about a little bit about detail layout of some of the screens But don't go too far. Don't get too much psychological investment there because you'll run into that problem where people think it's too far along To change now the last piece of advice. 
I want to give you here And this in some respects might be the most important at least I've learned this in the process of working with many teams The most natural Tendency you will have is to be timid when you start To do little things in design to make small changes in your application If you do that Your risk of failure is much much higher You want to be bold You want to do something really different It's a famous quote about that nobody really knows where it came from. It's often attributed to Gerta But boldness has genius power and magic in it What I have discovered and there's a design principle that I found after I discovered this called most advanced yet acceptable design Push the envelope innovate When you just fool around the edges of a product Now the inertia that people have for the old product is very difficult to overcome They see that it's a minor change But they hate change because they assume if you make a change you're going to screw something up They assume that because let's be honest That's not uncommon now is it So They they will have a natural inertia a resistance to change You also have to realize that for people who are already using the system Even if it's difficult to use They've already learned it. Do you do you know the English term sunk cost? Money that's already been spent. You can't get it back. It's already invested They have a sunk cost in learning the existing system Therefore they lot they don't want change In fact, if the system is really hard to use That's kind of a barrier to new users, isn't it? It makes them more valuable As much as I try to get inside the mind of my users I recognize that the desires of the user are not always the same as the desires of the business So there is a natural inertia on their part for old existing systems Comfort if you will And you have to overcome that It's hard to overcome that with small changes In my experience doing something dramatic and different Is a much easier way to overcome it But your innovation must match what they do It must solve problems for them It must be clear when they look at it That it's better than what they've got And you'll only get there by being bold By trying to stretch your design into some really new directions Alright, so we have time for maybe one or two questions I think I have about two minutes left Let's get over here and step out of the line so I can see the audience better Questions? You guys are just brain dead, aren't you? You're zombies at this point You have no neurotransmitters left at all So why am I even asking? If a couple of you do, then you may come up afterwards I hope this was helpful and I hope NDC has been wonderful And I'll be back in Oslo next year to see you again
Other creative disciplines, such as film, have long used storyboarding during the design process to explore the space of possibilities for their users, and to guide the production of compelling user experience. With user interaction patterns becoming more complex as UI technology improves, storyboarding has now become an excellent tool for user interface design, even in typical business systems. This session will introduce basic storyboarding processes, sketching and illustration, and alternatives for team-based creative storyboarding. No artistic talent necessary - even if you can’t draw anything more than a matchstick drawing, you too can add storyboarding to your toolbox for designing compelling and productive user experiences.
10.5446/50959 (DOI)
Yeah, I guess it's time to start. Ah, good morning. Morning. It's actually not morning, but it kind of feels like it. Hi. We are going to do some Clojure today. So my first question to you is going to be, how many of you already know Clojure, at least a bit of it? Okay. The rest of you are really being adventurous today. I'm not going to try and teach you Clojure. I'm just going to write some code from scratch. I'm going to make a useful application in Clojure, a web application, and we'll see how that goes. And the rules are like this. I'm going to screw up. I'm going to type mismatched parentheses or just stupid typos, or get a function name wrong or something like that. And you are going to help me. If you see me do something wrong, you shout, okay? This is half the fun of it. It's interactive. That's just a nice way of saying, if I screw up, it's your fault. I should, it's a picture. Good for you. I should introduce myself. My name is Bodil. That's my Twitter handle, in case you want to follow me. And I just started working for the Norwegian stock market as a JavaScript developer. That's really been my thing. I'm a web developer. But I really love Clojure. And combining the two is particularly sweet. I'm going to show you how to do this. This is going to be our stack for today. I'm using the Noir web framework, which is a very high-level and very lightweight framework, kind of like Sinatra or Rails. And I'm going to be using MongoDB on the back end, because it's so easy and it works great. And we're going to try and see if we can do some ClojureScript at the end. I'm going to start by writing the web app in Clojure, completely server-side, you know, a web 1.0 application. And I'm going to then try and extend it to be a more Ajax-y ClojureScript thing. If there's time, I never know. Right. So the application I'm going to write. Actually, I don't know how many of you know who this guy is, H.P. Lovecraft. A couple. That might work on a few of you at least. He was an American horror writer from the early part of the 20th century, whose theme was these kinds of elder gods lurking in the shadows, huge horrifying monsters transcending space-time, which just lie waiting for the stars to become right and reclaim their rule over this planet. And the most famous of these, you might actually have heard of this one, is Cthulhu, who sleeps in this sunken city just waiting for the stars to become right, so he can rise up and consume us all. And also, there's Nyarlathotep, the black pharaoh. There's Yog-Sothoth, the render of the veil, and of course Larry Ellison. Oh, that one worked at least. I was afraid, this being a Microsoft conference, you wouldn't know who this guy is, but cosmic horror apparently is universal. That's great. Sorry about that. How's the puppy's in the sand? Okay, I think we're ready to go. And, you know, I've seen so many people today pulling out the Visual Studio here, and the hipsters doing the live coding in their editor of choice, and I think, yeah, who does that? You should write your own IDE, and that's what I've done today, because that was a lot more exciting than writing slides. And, yeah, basically here it is. My little in-browser IDE, I call it Catnip. It's basically, if you know Clojure, you will know the build system called Leiningen, and this is just a Leiningen plugin, and it's not very advanced yet, but it's great for live coding, so I'm going to see if it wants to behave today. Let's just start writing. I have cheated a little bit.
For one thing, I do not want to bore you with writing CSS today, so I've got the styling ready. This is, I just put my static files in a folder called resources slash public in my projects. Some fonts, some pictures, jQuery, and the CSS, and I've written a little Clojure file which just helps me connect to MongoDB, which is also very boring to watch me write. And that's it. Let's start at the project file. This is project.clj. You can see the name at the top. Can you actually all see the font? All right. In the back there, that's great. Right. So this is kind of the make file for Clojure projects, and I also pre-written this because this is just a lot of boilerplates. We have a bit of boilerplate for Clojure script for automatically compiling it. We have some dependencies, Clojure itself, MongoDB driver, and some handy Clojure script libraries that we're going to look at later. And here's the just, this tells the build system, which is the main class. Clojure actually runs on the JVM, which might be alien to a lot of you, but it is very oriented towards object-oriented programming. Anyway, that has just got started. First of all, we need a web server, a running web server. So this is interesting. I'm actually, I'm not a Windows user normally, but in solidarity with you guys, I'm trying to do the presentation in Windows today. So these backslashes are going to surprise me. I'm not even sure if they're going to work. Let's see. We're going to create a server file like this. We go with a namespace, package declaration kind of. And we are going to have to require a few things here. Yes, I am consulting my notes. First of all, we are going to load that helper file that I talked about. Like this. This imports it into my scope. And we get noir.server. We call it server. And that should be enough to get us up and running with at least HTTP. First, we define that main method. Just boilerplate, really. Inside that, we connect to our MongoDB, just so we have that done. Like so, we have to MongoDB, localhost. That should be enough, right? Sorry, I think I'm going to just make sure. Like so. And then, start our server. And that's really as easy as that. I'm going to add the port number, and that should be enough. Save this. I'm hoping this compiles. It does. Then I'm just going to run the main method. This should start the server, and it looks like that works. Let's open that. Now we see noir responding with a 404 page, which is kind of expected because we haven't defined anything. But we see that we are now up and running. Notice, by the way, the workflow cycle here. I'm just writing code. I'm saving it, catnip it, automatically, compiling it, and inserting it into the runtime. In fact, my editor is right now running in the same process as the application I'm writing. So this is all completely seamless. I just write new code, and Clodra knows how to replace it automatically. It's just a hot-swapping kind of. Okay. Now we should probably make some web pages. I'm going to create a file containing my views. I got some point of play for that, but I'm not going to want it to type out. Just various imports and requirements. Yeah, nothing that really needs commenting. So I'm just going to leap right in and define a page. It should be as easy as that in Noir. I call the deaf page function. I give the path of the page I want to create. And does that take parameters? I think it does. And at this point, we are just going to basically return our HTML. 
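To make that setup a little more concrete, here is a minimal sketch of what the server entry point described above might look like, assuming Noir's 1.x API and the Monger MongoDB driver. The namespace names, database name, and port are my own placeholders rather than exactly what is typed in the talk.

```clojure
(ns cthulhu.server
  (:require [noir.server :as server]
            [monger.core :as mg])
  (:gen-class))

(defn -main [& args]
  ;; connect to a local MongoDB and pick a database -- roughly what the
  ;; pre-written helper file in the talk takes care of
  (mg/connect!)
  (mg/set-db! (mg/get-db "cthulhu"))
  ;; load the view namespaces and start Noir
  ;; (the exact view-loading helper differs slightly between Noir versions)
  (server/load-views "src/cthulhu/views/")
  (server/start 8080))
```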
Unfortunately, this being Clodra, we have macros to take care of writing the HTML for us. We have the DSL in which we can write our HTML directly. I invoke that by going HTML5. That's a function which creates an HTML5 web page with proper doc types and everything. Just put a hashtag in there with a title. Did I mention the application we are going to write? To-do apps are really boring. How about if someone is interested in these great old ones that I mentioned earlier? Let's create a great old ones spotter to do this. Your input, the huge monsters you want to make sure you have seen, and you check them off as you see them if they don't eat you. We are going to call that my little Cthulhu. That's the title for our web page. We are going to include the CSS. There is a function for that. I call it screen.css. I should probably do this. Now I want to, very quickly, with a header, my little Cthulhu. Let's see if that works. I'm going to compile it. Look at that. That is our web page. Styling ready to go. Magically. We should probably start with an input box for this. Let's just read quickly. Is there any unbalanced parentheses? Let's say that we have an input box that posts to a page called slash new for adding great old ones. It takes, let's see. There we go. Text field. Let's call that goo for great old one. There we go. Now we should actually make that do something. We just create that new page. It takes a parameter now because we are passing in goo as an HTTP attribute. We are going to pick that out. Go. There we go. At this point, we start talking to MongoDB. That is just ridiculous, simple in Clojure. There is a function called insert in our driver which takes the database we are inserting into. Then we just create a map or a hash map kind of structure which takes the key goo and we input the name that we got. Let's have a flag to check it off, just a boolean. That is actually it. When I created a record, there is no schema to worry about or ORMs or anything. Finally, let's redirect back to the main page because this is a post and this is how you are supposed to do it. Let's see what happens. Actually, I am not sure if this database is empty. If we input Cthulhu, it seems to be working. We should probably do a database query. Actually, I am going to style this a bit for you because this looks kind of boring. My little Cthulhu. I thought maybe at this point we should have a list and do a database query. I am just going to do that directly in the template here. The connection just melts down, allowing me to read and pay attention to how that goes. This is called find flaps. This contains my interests from last time I ran through this as well. So we should probably make a template for rendering this. Let's do that. We have a function called def partial, which kind of creates this is a macro, which creates a function which will turn its inputs into HTML. And the first let's name it, we're going to call it guntring. And it will get one of these maps as its only parameter. So Clojure has a very nice feature called destructuring, in which we can just pick elements out of the map and put them in variables. Let's see. We do it like this. We can see already what we need to pick out. There's an underscore ID property, which is just MongoDB's ID. There's a name. Let's just put them in as we go. ID, goo contains the name of the great old one, and done, which is the flag. And that's it. Now these three are defined in our function scope. So let's do a list item. 
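A rough sketch of the two pages described above, continuing the same hypothetical project: it assumes Hiccup 1.0's `hiccup.page`/`hiccup.form` helpers and Monger's collection API, and the "goos" collection name and the submit button are my additions.

```clojure
(ns cthulhu.views.main
  (:require [monger.collection :as mc]
            [noir.response :as resp])
  (:use [noir.core :only [defpage]]
        [hiccup.page :only [html5 include-css]]
        [hiccup.form :only [form-to text-field submit-button]]))

(defpage "/" []
  (html5
    [:head
     [:title "My Little Cthulhu"]
     (include-css "/css/screen.css")]
    [:body
     [:h1 "My Little Cthulhu"]
     ;; input box that posts the new great old one to /new
     (form-to [:post "/new"]
       (text-field "goo")
       (submit-button "Spot it"))]))

(defpage [:post "/new"] {:keys [goo]}
  ;; one document per great old one -- no schema, no ORM
  (mc/insert "goos" {:goo goo :done false})
  (resp/redirect "/"))
```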
And quite simply, just put a span in there with the name of the goo. And instead of just outputting our query directly, let us do some fancy functional programming and take that function that we created up there and map that onto the list of database objects. Map, goo entry. Oh. We have an unmatched, yeah, we need. Unbalanced parentheses. Didn't I tell you to warn me if I did that? List of compilers. Oh, look at that. Still doesn't work. See you, Al. And out this one, there. There we go. Don't render this. We're going to do it twice, but yeah, let's pretend it. It's perfect. So now we need to be able to check these off as we go. So let us extend this template with a very simple, well, first let's do a class. Put a class on the list item. Class. And if done, it's done. We're going to make it the class done, otherwise open. Quite simply. Of course, they're all open now. Well, let's do one thing at a time. Oh, and I promised you I would put a checkbox here. Just going to do that as a form so we can post it when we click on it. And at this point, we are going to make use of that ID parameter. Just send it to done slash ID. And the actual input. Input.check. That will be the class name. Time should be submit, obviously. Value, we're going to do a trick right here. Because I'm really shit at drawing icons. I don't know about you, but I can't do that. Unfortunately, there's something called UTF-8. So I'm going to steal some icons from way out into the character sets for this. Once again, if done, it should be checked. So I've got a shortcut for that. This should be a checkbox that is checked. Otherwise, just the box without the check. There you go. Now, we want to be able to click this and make a checked. See how that goes. Yeah. Remember that page done slash ID. We haven't defined it. Now we're going to do that. And how do we pick the ID out of the URL at this point? I think this is pretty much what was synodered us. Just code an ID. And it's going to become a parameter that we can just destructure. Like so. Problem is, I mentioned this runs on the JVM. And our MongoDB driver falls back on Java's MongoDB driver, which is just a lot of stupid things. Like, for instance, the ID is no longer a string. It is an object containing this string. And we actually have to pick that out manually. So let's spin a closure and do that. Actually, we are getting a string in here. We have to create this object type out of that string, which is fortunately very simple. At this point, we're calling out into the Java environment. Like so. And this is real simple. At this point, we can just go, oh wait, let's do the query. We need to pull the document we're changing out of the database, obviously. Find map by ID from GUI and ID. Quite simply. And save that back. This would just write it back without making changes, but we should make a change. In particular, we want to change the done key to its inverse, like so. Then as usual, we'd write back to the root. See if that works. Yeah. Nice. So we are 20 minutes in. I am just going to, I want to get rid of that extra cthulhu, so I'm going to implement the delete button as well. That is fortunately very, very quick. And as I screw this up, just copy this one, make delete, send all this fancy stuff, just being x, oops. Like this. There we go. Change that to delete, probably. Yeah. Now we just delete this one, too. Delete. We don't need to fetch the document out of the database this time, because we are just monger.remove by ID, ID, and database name. That's it. Unbalanced. 
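Pulling that walkthrough together, the list-item partial and the toggle handler might look roughly like this, continuing the namespace from the previous sketch (`defpartial`, like `defpage`, comes from `noir.core`). The class names, glyphs, and routes follow the talk; the Monger calls are from its collection API.

```clojure
(defpartial goo-entry [{:keys [_id goo done]}]
  [:li {:class (if done "done" "open")}
   ;; clicking the checkbox posts to /done/<id> to toggle the flag
   (form-to [:post (str "/done/" _id)]
     [:input.check {:type "submit"
                    :value (if done "\u2611" "\u2610")}])
   [:span goo]
   ;; the delete button posts to /delete/<id>
   (form-to [:post (str "/delete/" _id)]
     [:input.delete {:type "submit" :value "\u00d7"}])])

;; inside the "/" page, render one entry per document:
;;   [:ul (map goo-entry (mc/find-maps "goos"))]

(defpage [:post "/done/:id"] {:keys [id]}
  ;; the Java driver keys documents by ObjectId, not by string,
  ;; so wrap the id from the URL before querying
  (let [oid (org.bson.types.ObjectId. id)
        doc (mc/find-map-by-id "goos" oid)]
    ;; flip the done flag and write the document back
    (mc/save "goos" (update-in doc [:done] not))
    (resp/redirect "/")))
```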
There we go. See if that cthulhu will disappear now. That works. I'm going to toggle, of course, and then that is our application on the server side. Now let's try and do something a little more interesting. Actually, last night, just before the party started, I discovered a very nice little binding library for Clojus Script. I just had to include it in this talk. So I ran home and implemented it. So this is going to be the first time I'm going to try and do this on stage. And I haven't practiced at all. This might blow up in my face. Let's just see. I wish my IDE managed to compile Clojus Script as well. I haven't gotten that far. So we're just going to fall back on the boilerplate that you saw in the project file, which if I invoke lining it like this, it should just start up and watch for changes and just compile into JavaScript. Yeah. It's probably going to start. So first of all, then I think we should try and just, instead of rendering this to-do list on the server, we should be fancy and do Ajax instead. So let's actually just get rid of this template. Get rid of the whole thing, except for just the URL tag. Empty. OK. Now, let us create a Clojus Script file. CLJS. Got some boilerplates for the namespace. And just to check that this works, that this compiles into JavaScript and runs it, let us do the classic JavaScript alert box. Like so. This gets more fiddly because it's actually having to compile it outside of the IDE. So I'm probably going to have to reload. It compiled successfully, apparently. Let's see. It doesn't seem to work. Now, of course, there we go. Of course, we have to include that JavaScript in the web page to make it run, which is very simple. Include JS. Client. JS. Still doesn't work. Let's take a look at the console. JQuery is not defined. Well, that makes sense. Perhaps we should include JS. JQuery, not JS. Oh, that is CLJScript, ladies and gentlemen. Really youthful CLJScript. Right. So instead of fooling around with alerts, let's do some Ajax. Now, I'm using a CLJScript library at this point called fetch, which is kind of like SoccerTio, only not quite as advanced. But it's very useful. It abstracts away all the technical bits and just lets you write, basically, CLJScript functions on the server that you can call like CLJScript functions on a client. I'm going to show you. Actually, let's return to the view and define one. We do that by the remote, like this. How about a function that just returns the whole list of great-none ones? What did I just do? Right. There we are. Which should be as simple as monger slash find maps like earlier. I just returned that. It would be that simple. If not for that idiotic object ID thing. So I created me some boilerplate for that. There it is. This is completely unreadable. This is the kind of code that scares people away from Clota forever. So pretend it's not there. What it does is instead of monger.findmaps, it calls that and maps it through a little per-like function that takes the ID and replaces it with a string. So we don't have to worry about that bit. So all we need to do here is call getGooList. This is now going to be returned to the client when we call it on the client. Client's still doing JLs. Go away. Now we should probably just go ahead and call that. We have a macro on the client that got remote, which just takes. Did I call it Goolist? Yeah. Sorry. This takes some code that we want to run on the server. So we're at it in parentheses. The joys on lisp. And we put the result in the variable called newGoose. 
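For the delete route and the Ajax step that starts here, the server side might look something like the sketch below. The delete handler mirrors the toggle one; the `defremote` part assumes ibdknox's fetch library (its macro and namespace names are from memory and have moved between versions), and the id-to-string mapping stands in for the "unreadable boilerplate" mentioned in the talk.

```clojure
(defpage [:post "/delete/:id"] {:keys [id]}
  (mc/remove-by-id "goos" (org.bson.types.ObjectId. id))
  (resp/redirect "/"))

;; a remote the ClojureScript client can call like an ordinary function
;; (defremote comes from the fetch library)
(defremote get-goo-list []
  ;; MongoDB's ObjectId doesn't survive the trip to the client,
  ;; so replace it with its string form before returning
  (map #(update-in % [:_id] str)
       (mc/find-maps "goos")))
```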
And let's log that. Just to see if it works. Keeps doing that. It's compiled. So this time it should work better. In fact, if I bring a, something's happened. Yeah, this is a ClotrGIS script object containing a list of something unreadable. It's probably what we want. It's just not very readable. In fact, let's make it readable string. Now reload, and we should get. This looks legit. All right, go away. Now, data binding. This is the cool stuff. This is kind of like backbone and ember and things like that only for ClotrGIS scripts. Let's return to our HTML code. And in the URL, we just define a template for the data binding. We return to our list item. But instead of putting that in a partial, we define a binding. In this case, bind a sequence to follow.client.goose. This will call the function goose in our client code when it wants to render. And once again, the span to which we bind cthulhu.client.title. So this calls a function called title, which just returns the title of the database object in question and puts that into the text of the tag quite simply. So this does nothing now. Except now we see we have an empty list item. Return to the client. Now we are going to keep track of the list of database items by creating an atom. We're just going to list in this case. So instead of just logging this, we're going to put the result into this atom. And then hopefully the binding framework is just automatically going to update that for us. This should work. Let us hope. It should break completely, in fact, because we need to define those. Remember title takes argument item. And we return the name after that. We have to make sure cladr script exports this to JavaScript. That's just like this. Still doesn't work. Because it doesn't. Interesting. Yeah, how come that doesn't work? Didn't I forget the method? OK, so we want to title this. It should work. Demo effect. Are you thinking? Yeah, I saw that. I got that while developing, too. I'm not sure what that's about. But it actually doesn't need it. Sorry? Yeah, that's the J. I know, there it is. But that's not it. This is interesting. Let's fall back to console.log. Like that. So at least I know if the code is running. Because, yeah, it is compiling. It's not running. It's just weird. And of course, the output here is completely unreadable, as you can tell. So this doesn't tell me much. OK, strange thing is it doesn't seem to run the code at all. There it is. Not outputting anything. OK. Let's see. I might have a typo, of course. Sorry? Oh, didn't I do that? Makes sense, yeah? Of course, no, no, no, no, no, no. I shouldn't have to do that. No. That isn't it. So this is the problem with Clojure script. It's so new, and the tool chain is horrible. And you never quite know what's happening at any one time. I wonder if it's even compiling, in fact. Save it. It does compile. And it totally seems to be running. Right. Just checking my notes. Because, of course, it might be that we're just not receiving anything. Like so. Run that. We are so receiving something. Reset is just the function that replaces the value of an atom with something new. So that chump is set. So we're just going to run that. So we're just going to run that with something new. So that chump is set. See if we are actually putting something in there. Yeah, we are. Oh my god, I am such an idiot. I just realized what's wrong. So you should have told me this. We wouldn't have this problem. We're going to run this in a new framework. That needs to be actually initialized and run. 
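On the client, the atom-plus-remote wiring described above could be sketched like this. The `fetch.macros/remote` call shape and the exact hooks the binding framework expects are assumptions from memory, and the function names are placeholders for whatever gets typed in the talk.

```clojure
(ns cthulhu.client
  (:require-macros [fetch.macros :as fm]))

;; the client keeps its copy of the list in an atom; the binding
;; framework re-renders whenever the atom's value changes
(def goose (atom []))

(defn ^:export title [item]
  (:goo item))

(defn ^:export refresh []
  ;; call the server-side remote and stash the result in the atom
  (fm/remote (get-goo-list) [new-goos]
    (reset! goose new-goos)))
```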
And we'll do that just by calling the function bind in the binding frameworks. Namespace. Yeah. You're actually applauding when I fuck up. I like that. OK. I mean it wouldn't be fun otherwise. That's what I find. So, of course, afterwards I'm absolutely going to claim that I put this bug in on purpose. Okay, carry on. We are going to, I'm not going to bore you with a complete re-implementation, but I did want it to be able to check, right? For instance, I'm going to keep the input box as it is as opposed to request. But let's do the check. Let's reintroduce that checkbox by this time. I'm going to not do it as a form, but just as a span..check. And then let's use one of the fancy features of this binding framework. First of all, cthulhu.client.check. This will be a function that just returns which kind of checkmark we want to put in there. Because that depends on the database object. And then we add a click handler, and that is as easy as this. So this line is going to be long. But there is no longer closure syntax. This is just a string. Let's make that a function called toggle. So the idea is that this span that should now be visible, shouldn't be, isn't rendering because I haven't implemented the functions. But the span is going to have an event listener on it now, which catches clicks and calls toggle. So let's implement toggle and what's the other one, check. Check is very simple. Check. And instead of gu, we want done and we want an if statement for that. If done, that magical checkbox. Otherwise, just the empty box. That looks legit. Shouldn't actually render now. Doesn't, it's probably missing. Yeah, of course, I can't bind the event listener. I have to implement that. And that one gets a little tricky. Right. And this one gets the item as well. But this has to return a function, which is called when you click. That doesn't take any parameters, but we still have the item in the closure. So we should be fine. So quite simply. Like I did earlier. Flip the done flag. And then we have to save this back to the database. I'm going to create a function save gu on the server. Just going to write this out first. We just call it with the item. That's going to return our new goose. Just like this one. I would just do the same thing. Like so. Something's starting to happen. I wonder why it's not all rendering. Probably, yeah, it's working just fine. Let's go back to the view and let us create save gu. Let's call that a doc. And save that back. And we are going to have a problem again with that cursor object ID thing. I'm going to add one more function. I'm going to call it edify, which turns the string ID into an object. And it takes a course and documents. And I just basically switch the ID with that object with the ID wrapped into it. That way, yeah. And then I identify this doc and that should be it. And of course, I promised the client that I would return the modified list. And now for the moment of truth. This is all rendered client-side. Now, you believe me, right? Good. Let's see. Click that. That's not right. Interesting. Anyone got an idea? Yeah, that means that's when you prefix a Java class name with a dot, it means create a new object, like call the constructor basically. So that is correct. I don't think we have a problem with this one. No, we don't. Interesting. Oh, look at that. That is when, I mean, get gulist. This is the function that both of these remotes calls. And yet at this point, it returns just garbage. How strange is that? Why do you want to do that? 
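The toggle behaviour and its server-side counterpart might then look roughly like this, under the same assumptions as the previous sketches. `objectify-id` is my stand-in name for the little helper the talk invents on the fly, and the ballot-box glyphs are ordinary Unicode escapes.

```clojure
;; client side
(defn ^:export check [item]
  ;; pick a checked or unchecked ballot-box glyph based on the done flag
  (if (:done item) "\u2611" "\u2610"))

(defn ^:export toggle [item]
  ;; returns the click handler; `item` is captured in the closure
  (fn []
    (fm/remote (save-goo (update-in item [:done] not)) [new-goos]
      (reset! goose new-goos))))
```

```clojure
;; server side
(defn objectify-id [doc]
  ;; the client sends _id back as a string; wrap it again before saving
  (update-in doc [:_id] #(org.bson.types.ObjectId. %)))

(defremote save-goo [doc]
  (mc/save "goos" (objectify-id doc))
  (get-goo-list))
```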
I wonder if we are just rendering it wrong. Let's see. Toggle calls remote-savvy-item. Gets new gaze. That looks right to me. We should log.log.js.concern. I'm noticing that one of the things with Clojus script that isn't great is how you call out into JavaScript. Instead of console.log, it's like this. It's messy. Right. Okay. So now when I click, just make sure. Now when I click, it should log what it gets from the server. Togtype. What is that I don't even... It has an error. Yeah, that's fetch doing its magic. Wait a minute. It shouldn't be throwing an error because that shone it. Yeah, you're right. That's an internal server error. So theoretically, if we check the network... Go away. Yeah, well, it won't... It's a no-frame. But that means something broke. So this doctype is actually probably just the start of the error message that we're getting. Very good. So I'm guessing that if we just skip the saving, just command that out for a bit, it's going to work but it's not going to actually check off anything. Yeah, I'm clicking now in case it can't tell. So this is the line that produces error. So I'm starting to wonder if Elify went wrong after all. Let's see. Let's take a look at the other save. No syntax error. Let's compare with my notes. Looks completely correct. Oh, my God. See that colon? Of course. Lisp is to be completely free of syntax and then Clojure comes and introduces these things. To confuse me, I think they're doing it on purpose. See. Works. That is client-server communication in Clojure and Clojure scripts. Practically in the same language. And I created a to-do app from scratch in 45 minutes with errors, with me on stage saying stupid things. I think that's kind of cool. Well, I'm not going to keep you anymore, in fact, because that's what I got. I'm just going to post a URL for you in case some of you want to take a look at this. I've already pushed this code to my GitHub. So this is the URL. My little Cofuli. So you can check that out if you like. It's even got the signs for the pretty puppies and the realasen. Right, thank you. And unless you're all completely bored out of your mind by now, I'll take some questions. Looks like you are all asleep. Well, at least I tried. If you do think of some questions, you can come with me later. I'll be around. Thank you for coming.
Of all the strange new languages gaining popularity today, Clojure, with its roots in the alien world of Lisp, may well be the strangest. It is also, its proponents insist, by far the most powerful. They'll show you weird and incomprehensible proofs of this—macros, lazy lists, monads, what have you—that may well send academics into orgies of rapturous debate, but the question always remains: "does this have any real world application at all, or are you all just geeking out on us?" Let's find out! In this presentation, you'll learn how to build a simple web app using ClojureScript on the client side and the Noir web framework on the server. You'll see how Clojure can help you tie your client code and your server code together, giving beautiful interoperability and code reuse. We might even have a go at a macro or two outside of the lab.
10.5446/50963 (DOI)
I am losing my voice, but I will do my best to get through this next hour for you. I'm a program manager on the ASP.NET team at Microsoft, but today I'm going to be talking about a project that me and another guy on the ASP.NET team, David Fowler, started in our own time last year called SignalR. SignalR is a persistent connection abstraction for .NET. So how many people have heard of Socket.io on Node? Okay, a few people, almost everybody, which is good. So SignalR is kind of like Socket.io for .NET, but we also have elements of Now.js, which is another library for Node that builds on top of Socket.io, in SignalR as well. But again, it's all for .NET. And so if you're using Node already for real-time stuff, that's fantastic. Socket.io is a fantastic library. I'm not here to tell you that we're better than Node or anything. I'll leave that to tomorrow in the cage match with Rob Conery. But if you're a .NET developer and you want to do real-time development, then SignalR, I would hope, is something that you want to have a look at. So without further ado, I don't have any slides. I really don't like using slides these days because most of the talks I do are full of code. So let's just write some code and we'll learn how this thing works. So I'm going to start a new empty web application. And then I'm going to NuGet in what I need. So I'm going to say Manage NuGet Packages. And then from a local feed, I'm going to install SignalR.Sample. So we have this working sample package for SignalR, if my machine cooperates. So here we go. That's something you can pull into your project. It pulls in all the required dependencies, obviously, that are needed to get SignalR itself working. And it pulls in some working code. So we can see here we have the beginnings of a stock ticker. Now, this is not a real stock ticker that connects to some service in the cloud. It's just a background thread that gets kicked off and fakes out a stock market. But it's a good indication. It's a good sort of learning tool to see how something might work. So let me go ahead and run that up. We can see that I've got that running over in one browser. I'm going to launch another instance and put that over here, because real-time demos are always that much more impressive when we have two browsers. So I hit open market, and you'll see that the changes are synchronized on both sides, because we have two browsers connected to the same SignalR endpoint on the server, receiving information from the server and updating the UI as appropriate. So obviously in this case the client is JavaScript. We do support multiple clients and we'll talk about that later on. And the server, of course, is .NET. And I can go and hit close market here and you see it synchronizes on both sides. So while the majority of this is the server pushing to the client, when I click a button, obviously the client is pushing a message to the server, and then that's resulting in the UI changing in both of the connected clients. So that's great, but let's go ahead and build something from scratch that's perhaps a little bit simpler, so that we can get an idea of what this looks like. 
It is literally mimics a socket. So you open a connection, then you can send messages on it and you receive messages on it. You get an event when someone connects and you get an event when someone disconnects. And that's about it. So if all you care about is sort of the raw socket level stuff, you can program at that API and good luck to you. But most people like to use what we call hubs. We have this higher-level API called hubs, which lets me do reverse sort of procedure invocation from the server to the client. And vice versa, obviously the client can call the server. So I'm going to go ahead and create a hub. I'm going to derive from hub. I'm going to decorate this with an attribute that lets me change the hub name on the client. So I'm going to call it move shape on the client, because I like this word hub appearing there. I'm going to say public void, move shape. I'm going to take in an x and a y from my client. And whenever that happens, I'm going to go and tell the world that someone has moved a shape. And the way I do that is by accessing this client's member. And if I sort of zoom in here and have a look at clients, what you'll note is clients is dynamic. So clients has a property on hub, and clients represents all the people who are currently connected to this hub via signaler. And I can now go ahead and invoke a method in my client from my server-side code. Now that client might be JavaScript, it might be a.NET client, it might be iOS, it might be Monotouch. But in this case, it's going to be JavaScript. So this is dynamic, because obviously this isn't.NET code that's going to be running here. This is going to be something that's sent over the wire and invoked on the client. So I'm going to say shape moved, and I'm going to pass in the current connection ID. So that's going to be context.connectionID, so that they know who did it. And then the x and the y that was passed through. So that's my server code done. Let's add some client-side code. So I'm going to have JavaScript file, move shape. Now I need to pull in another library here, so I'm going to go back to newget, go to my Dropbox feed. I'm going to look for jQueryUI, install that. Wait a moment, pause for dramatic effect, there we go. I'm going to drag in jQuery, drag in jQueryUI, and I'm going to start writing some signaler client code. Actually I may as well drag in signaler in here as well. This is just for IntelliSense reasons. So I'm going to go ahead and get my hub. So that's dollar.connection.moveShape. We'll talk a little bit more about how that turned up in just a moment. And I'm going to have some type of UI on the screen that people can move around. So let's just say it's going to be an element called shape. So let's go ahead and find that. Okay, so now that I have my client-side hub, moveShape, I have to add that method that I want to be invoked. So if we go back here, we saw that I'm going to invoke a method called shapeMove. So let's add that method. I'm going to use jQuery's awesome extend method to do that. So for those of you who haven't used extend before, all it does is say take the object here and add anything that's on the second object to the first object. So I'm going to add a method called shapeMove. And that method has to take a number of parameters from the server. So connection ID, x and y. If we go back to here, you can see I'm passing connection ID, x and y. So I have a matching method signature on the client. And then I'm going to check to see that this wasn't invoked by myself. 
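For reference, the hub being described here is roughly the following, using the SignalR 0.5-era server API (the `SignalR.Hubs` namespace and some details moved in later releases):

```csharp
using SignalR.Hubs;   // 0.5-era namespace; later versions use Microsoft.AspNet.SignalR

[HubName("moveShape")]
public class MoveShapeHub : Hub
{
    // Called by clients; rebroadcasts the new position to everyone,
    // along with the connection id of whoever moved the shape.
    public void MoveShape(int x, int y)
    {
        Clients.shapeMoved(Context.ConnectionId, x, y);
    }
}
```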
Because I'm going to be listening to all messages. And when I broadcast something to, when I send something to the server, I'm going to get that same message back again. So I'm just going to do a little check here to make sure that this isn't a message that originated from me. So I'm going to say if dollar.connection.hub.id is not equal to connection ID. So it's not from me that I want to go ahead and move the shape on the screen to this new position. So I'm going to say dollar.shapescss. Left is equal to x and y is equal to top. Oops, get my bracing right. Okay, so now that I have my client side method, I need to go ahead and wire up the client side logic and then call the server whenever I move this shape on the screen. It should be top equals y, I think, very much. And I've already got my first correction from the crowd. Fantastic, price for you, sir. Left, x. So now I have to go ahead and wire up the client. So first I need to go ahead and start the connection. So I'm going to say dollar.connection.hub.start. And then when that's done, I'm going to pass in a callback here that will get invoked once my connection has actually started. I want to go ahead and enable the sort of dragging behavior so this can get moved around. So I'm going to say shape.dragable, which is that jQuery UI function that I brought in. And I say when that gets dragged, go ahead and invoke this function. And when that's dragged, I want to call the server and tell the server, hey, what's the position of the shape? So I'm going to say hub.moveShape. So again, moveShape is this method here on the server. So this is my hub. I'm going to call hub.moveShape. And I have to pass in x and y. So that's just this.offsetLeft, this.offsetTop. Like so. Okay, so that's my client-side code done. Let's add a page to actually show it on. Let's add all these JavaScript packages that I need. So I need that one, I need that one, I need that one. Let's drag those into here. And I need to add an extra magic script reference that we'll look at in just a moment. That's going to pull down some dynamically generated JavaScript to make this sort of signal magic happen. So signalr slash hobs. I'm going to add my div with an ID of shape. And so that you can actually see my shape, let's add a style block. And we'll say shape. Width is 100 pixels. Height is 100 pixels. And background color is some new... Let's have a look. This time blue over here looks pretty good. Okay, so that's my app. Let's run that up, see if I made any mistakes. Okay, so it's showing there, that's a good start. Let's open another browser instance, put that over here. Paste that in there. And if I pick this up and move it, you can see that it moves in both sides. So really, really easy to get going with signalr doing these type of broadcast scenarios. So let's tease apart this demo a little bit now and try and get an understanding of why this even works and what's going on under the covers. Okay, so let's have a look. Let's bring up the network tab here and we'll hit F5. We'll see a few things that happen when zooming down here. So as Remember I said, signalr is an abstraction. It's an abstraction over various ways that we can in HTTP mimic or actually achieve a full persistent connection between the client and the server. So how many people have heard of WebSockets? Everybody. Okay, good. WebSockets is kind of synonymous with this new move to the real-time web. The problem with WebSockets is they don't work. They might work if you have a browser that supports them. 
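The JavaScript client being typed out above amounts to something like this sketch, assuming the 0.5-era jQuery client and jQuery UI; the `#shape` element and its CSS are set up elsewhere on the page.

```javascript
$(function () {
    // proxy generated by the /signalr/hubs endpoint
    var hub = $.connection.moveShape,
        shape = $("#shape");

    // method the server invokes via Clients.shapeMoved(...)
    $.extend(hub, {
        shapeMoved: function (connectionId, x, y) {
            // ignore echoes of our own moves
            if ($.connection.hub.id !== connectionId) {
                shape.css({ left: x, top: y });
            }
        }
    });

    $.connection.hub.start(function () {
        shape.draggable({
            drag: function () {
                hub.moveShape(this.offsetLeft, this.offsetTop);
            }
        });
    });
});
```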
If you're a.NET developer, they'll work in ASP.NET if you have Windows 8 on your server and using.NET 4.5. They'll work if your hosting provider supports WebSockets through their reverse proxy and load balancing infrastructure. And if your client is behind a firewall or a proxy that also won't trample on WebSockets. If all of those things are true, then yes, you'll get a WebSocket connection from your client to your server. Now, we know all those things aren't going to be true for quite some time yet for the vast majority of people. And so libraries like SignalR and Socket.io exist to help us bridge that gap. What do we use when WebSockets isn't available? They also give us a much nicer, higher-level API. Who's actually programmed with WebSockets? Okay, only a couple of people. It's hard. WebSockets is sockets. So you don't get any of the niceties I just showed you. It's open a socket, send a message. So you have to do all your own dispatching. You have to do your own invocation logic. It's basically just sending strings or binary data back and forth. If you've ever done socket programming before with raw sockets, it's basically similar. And the server-side programming, and that's just the client side, the server-side programming is even harder. There's a specific way you have to do WebSocket framing. You have to ensure that messages are sent in order. You have to ensure that you're only ever sending one message at a time on a connection. So there's all these types of buffering and synchronization code that you have to write if you're not using a framework that does that for you. So it's nice to have a high-level framework so that I don't have to worry about that type of stuff. So if I don't have WebSockets, what can I use? So we fall back to a number of different things. So if we come down to this request down here, you can see that there's a request to an endpoint called Negotiate. So the first thing that SignalR will do is make a standard Ajax call to the server saying, hey, I need to negotiate a transport. I need to figure out what can I use for me, client, to talk to you, server, in somewhat of a real-time fashion. Now, we'll start out with WebSockets, and then we fall back from there. Now, we'll only attempt to use WebSockets if the server says it can support WebSockets. If you're running on top of ASP.NET, that means you're on ASP.NET 4.5 on Windows 8 with IIS Express 8 or IIS Express 8. Now, most people aren't there yet, so most of the time you're not going to be using WebSockets just yet. You'll then fall down to something called Server-Send Events or Event Source. Server-Send Events is a technique or actually an API that you can use in the browser in all browsers except IE. That lets you have a persistent connection from the server to the client such that the server can push information down to the client. So if WebSockets isn't available, we'll try and use Event Source. That will generally work in all browsers except for IE. So we skipped Event Source for IE, and then we go down to the next level, which is Forever Frame. Anyone ever tried using Forever Frame? Ever heard of that technique? A couple of hands go up. So Forever Frame is the idea of having an iframe embedded in your document that makes a request, loads of document from the server, and all that document in the iframe does is spit out script tags, and it never finishes. 
So it's just an ever-growing document full of script tags, and the great thing about browsers is that they'll execute script tags as they get them, even before the page is finished loading. So you can have a hidden iframe with a document in it that gets script tags that contain the messages that you want to be sent to the client, and it'll happily keep invoking them for you. So Forever Frame will be used generally in IE because it supports Forever Frame quite well. And then if none of those things are supported, which is unusual, but it can happen depending on the infrastructure between the client and the server, perhaps the router or something doesn't like a persistent HTTP connection, then we fall back to the lowest level, which is long polling. So the idea of long polling, when we all know what polling is, right, polling is, I call the server, have you got data? It replies no. So I say, okay, after some period of time, I call the server again. Do you have data now? No. So I wait a little bit longer, and they say, hey, have you got data now? Yes. Okay, so now I have data, and then I wait a little bit more, and then I poll. Long polling is the idea of saying, let me call the server and say, have you got data? And the server doesn't return until it does. And so that connection stays open until there is data available to sort of fill that request. And then once that request comes back, I process it on the client, and then I send immediately another request saying, hey, have you got any more data? And then that stays open until there is more data. So long polling is advantage over polling in that while there is no data flowing, you have an open connection, but you're not really using any resources other than those taken up by an open connection. You're not constantly hitting the server with a request that doesn't result in any data, which is kind of nice. It also has very low latency compared to polling because you're going to get the data as soon as the data is available, assuming you have a connection open. So we had a negotiate request, and after that we had a connect request. And if we look at the URL for this, zoom out a little bit, we can see that over here, we have signalize slash connect. So now that I've done my negotiation, I'm going to connect to the server with the connect verb. I'm going to say transport equals server-cent events. So the client and the server negotiated server-cent events because WebSockets wasn't available. And then server-cent events, as I said, is this one that Chrome, Firefox, Opera, Safari support. And then once that's open, what we'll see is that thing stays open. So we should see over here, here's that server-cent events message over there. You can see it's been over open for two minutes, been receiving for two minutes. And if I come over and open another browser now, let's do this one over here, so let's push this one back. And if I move this around, we'll see that that bar changes if it doesn't make a lot of me. Let's see if I can make this a little bit easier to see. It might be easier if I just refresh. There we go. So here's my connect down here. As I move this around, see how that gets longer because it's currently receiving information from the server so that load time for that request just keeps getting longer and longer and longer. And that request will just stay open while it's receiving data until either the timeout kicks in. So there's a default timeout of two minutes at which point it'll reconnect or something goes wrong, obviously. 
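To make the long-polling idea concrete, here is a deliberately naive illustration — not SignalR's actual implementation — with a made-up `/poll` endpoint and a hypothetical message handler:

```javascript
// Conceptual long polling. The request stays open until the server has
// data (or times out), and a new request goes out as soon as the
// previous one returns.
function poll() {
    $.ajax({ url: "/poll", timeout: 120000 })
        .done(function (messages) {
            handleMessages(messages);   // hypothetical handler
            poll();                     // immediately ask again
        })
        .fail(function () {
            setTimeout(poll, 5000);     // back off briefly on errors
        });
}
poll();
```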
Okay, so that's sort of the intrinsics of negotiation, negotiated transport, and then get a connection as a result of that. So what else happened in that negotiate request? Because there's other interesting things that happen in here. We zoom into here and have a look at what was returned. So we get a JSON payload back from the server when we negotiate, which gives us a URL, and we get a connection ID. So that's interesting. We know that every connection that connects to Signaler gets a unique connection ID. Now, in this case, it's a GUID. You don't really have to worry about it. We'll have a look a little bit later on when you'd need to worry about mapping this connection to something useful. But this means that you can address people individually using Signaler because you have a connection ID through which you can identify them, which is kind of cool. You can also see that we sent back a flag saying whether we should try WebSockets. So on the server, we determined whether WebSockets was even going to be supported. And if it wasn't, we tell the client, don't even bother trying because it's not going to work. Okay. So that was a very, very simple demo. Let's move on to something a little bit more convoluted. We do support something, we do support more clients than just JavaScript. So let's open up a slightly more complete version of this application. Let me go ahead and find it. And let's open this one here. So this is the same demo that we just saw. It's move shape. It has move shape. It's basically the same thing that I just showed you. It's just a little bit more UI and a little bit more logic in the client. But other than that, it's the same. So if I run it up, you can see here it's pretty much the same as we had before. And now I'm telling you how many clients are currently connected as well. So I've got this number in the top right-hand corner. So how many people are currently connected? So what we also have here, though, is a different sort of client. I have a WPF client. So I'm going to come in here. I'm going to show up this move shape desktop. Come back and open this one. Let me pin this to the right. And now when I move this around, you'll see that I've got my WPF client also receiving this information from the right-hand side. And similarly, I can move from the left to the right. So the code for this looks pretty similar. Let's go back and have a look. So let's have a look at my... I'm not the greatest WPF programmer in the world, but I can sort of model my way through it. So it's just a canvas with a shape inside of it and a text block for the number of clients. And then the code behind for this canvas is... I have an async void method go because SignalR is async from top to bottom. In order to scale in ASP.NET, it needs to be asynchronous. If you're going to keep connections open for a long period of time, they have to be asynchronous because you don't want to be using a thread when that connection is doing nothing because you're just going to run out of threads and then your server won't be able to serve any more connections. And so SignalR is async top to bottom, both on the client and the server. And so you can see I go ahead and it looks kind of similar to the JavaScript code. I knew up a hub connection. I go ahead and create a proxy for that hub connection. Now in JavaScript, that was done for me automatically. I'll go back in a minute and show you how that happened. But when I imported that JavaScript slash SignalR slash hubs, that was a hub proxy being created for me. 
So I have to create it manually over here in.NET world. And then rather than doing method invocation, I'm handling method events. So I'm saying when the hub raises an event called shape moved, go ahead and invoke this delegate using these strongly typed parameters. So connection ID XY. And look very, very, again, very similar to what we had in JavaScript. Although now it looks like WPF. I check if it wasn't myself and then if it wasn't, I go ahead and invoke this delegate back on the UI thread to go ahead and move the shape on the canvas. And now that I'm also tracking connected clients, I get a message whenever that client count changes. So when I connected and I got two clients, I got a message and I go and invoke that as well. Then I go ahead and start the connection. And I await that because it's asynchronous. And then once that's finished, I go ahead and enable the draggable UI. Now this draggable is just my own sort of ghetto extension for WPF that makes something draggable on a canvas. So don't worry too much about that. I've just sort of emulated what jQuery UI does. So I've said it's draggable and then when anyone drags it, go ahead and invoke this lambda and tell the hub to invoke the move shape method and set the left and top based to what this is here. So kind of the same stuff. So there is an interesting thing that was different on the server though. Let's have a look at that. So move shape. So here's my hub, very similar to the one we had before. But I've got two interfaces that I'm implementing this time that I wasn't before. I connected and I disconnect. So signal as a connection abstraction gives you connect and disconnect semantics. And so you can get alerted whenever someone connects to your hub or your connection and alerted when they disconnect as well. And so for a hub, the way to do that is to implement these two interfaces. I connected and I disconnect. And then we will invoke your connect method whenever someone connects to the hub and we'll invoke your disconnect method whenever someone disconnects from the hub. Now we also have a reconnect intrinsic, which you can handle as well if you want to. And that can be useful to ensure that when clients reconnect, they are going to say that they're a member of a period of groups and you can check whether that's actually true. So I'll talk about that a little bit more in a minute. So the interesting thing here is that I'm showing how you can go ahead and track connections. So one of the questions we very often get when talking about SignalR is, how do I get a list of clients? And the answer is you don't, because there is no list of clients. SignalR at its core is built on a PubSub mechanism. In order to enable scale out, which we're going to look at a little bit later, we don't store state. So if we were able to give you a list of clients, it would be very difficult for us to scale that out in a web farm because obviously you can have a thousand clients connected to one server and a thousand clients connected to another. And if your code is running on the second server and says, give me a list of clients, how does it know what clients are connected to the first server? Unless you're sharing some sort of back end state. And so there is no list of clients. What you have is a list of signals, not even a list, just arbitrary keys, arbitrary signals that you can broadcast to at its very, very core. Every client gets its own signal, that's what that connection ID is. Every endpoint gets its own signal. 
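A condensed sketch of that .NET client, using the 0.5-era SignalR.Client API as I remember it (`CreateProxy` was later renamed `CreateHubProxy`, and the `On`/`Invoke` helpers have shuffled around between releases); the URL and console output are placeholders for the WPF canvas logic in the demo:

```csharp
using System;
using System.Threading.Tasks;
using SignalR.Client.Hubs;   // 0.5-era client namespace

class MoveShapeClient
{
    static async Task Run()
    {
        var connection = new HubConnection("http://localhost:8081/");
        var hub = connection.CreateProxy("moveShape");

        // handle the server-invoked method, mirroring the JavaScript client
        hub.On<string, int, int>("shapeMoved", (connectionId, x, y) =>
        {
            if (connectionId != connection.ConnectionId)
            {
                Console.WriteLine("Shape moved to {0},{1}", x, y);
            }
        });

        await connection.Start();

        // call the hub method on the server
        await hub.Invoke("MoveShape", 10, 20);
    }
}
```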
So this move shape hub will have a signal called move shape hub. And whenever you send a message to move shape hub, we literally send a message through the message bar that says, broadcast to move shape hub this payload. And the payload for hubs will include what method it is that you want to invoke and the arguments that you want to pass to that method. And so it's up to you to track connections and who exists on what connection. So in this case, I just have a static concurrent dictionary, because this is asynchronous. I could have multiple threads operating on this hub at one time, so I have to have a concurrent dictionary of connections because it's static. And then whenever connect is called, I go ahead and just add a record to that concurrent dictionary keyed by the connection ID. And then I go ahead and broadcast out to all clients the current counter to that dictionary. So I'm just using this dictionary as a way to track how many people there currently are in my app. Now, obviously, this wouldn't scale out because it's a static in-memory dictionary on this class. But you can imagine in a proper application, this would be stored in a database or in some type of distributed cache so that you could actually do proper how many people are currently connected sort of intrinsics. So similarly on disconnect, I go ahead and try to remove that and I now update the counter appropriately. Now, what isn't shown here, which is interesting to note, is that the relationship from users to connections is one to many. So if you have the concept of authentication in your application, then obviously people will log in using some type of credential and then they have some type of username, so it might be Damian in my case. And then when I use SignalR, I'm logged into your app as Damian, and then I'm going to connect to your app with a connection and that connection has a connection ID. So I need to store somewhere the fact that Damian is currently on connection 1, 2, 3, 4. And so that should be persisted away in your database or persistent mechanism so that when someone says, hey, I want to broadcast a message to Damian, you can look up the connection ID that Damian is currently on and then send a message to that connection ID. Now, I did say it was one to many though. I mean, I can obviously open more than one browser, or I can open a browser and iOS app or a Windows phone app and connect to the same thing with the same user, but I'll have different SignalR connections. So the recommended practice is to ensure that you're storing one user to multiple connections. Now, for a good example of how this works, we have sort of our smoke test app up in the cloud, this chat app called Jabba. So Jabba is sort of an IRC clone written on top of SignalR. It's hosted up on Windows Azure, but that's kind of incidental. And it's the sources available in GitHub, so you can go and see what it's like to build an ASP.NET application of this nature that's real-time using SignalR. And this uses SQL Server, it's underlying persistence mechanism, so you can see how all the user names are persisted, how the user names are mapped to individual connections, how we handle connect and disconnect. You can see as I'm talking here, people are typing and we're getting little speech bubbles that come up when they're typing, so it's more than just messages flowing back and forth. We actually have things like status indications as well. Jabba has some really interesting stuff as well. 
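Put together, the connection-tracking version of the hub described above looks roughly like this, again against the 0.5-era `IConnected`/`IDisconnect` interfaces (later versions replaced them with `OnConnected`/`OnDisconnected` overrides). The `clientCountChanged` client method name is my placeholder for whatever the sample broadcasts, and the static dictionary is, as the talk says, only suitable for a single server.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using SignalR.Hubs;   // 0.5-era interfaces

[HubName("moveShape")]
public class MoveShapeHub : Hub, IConnected, IDisconnect
{
    // SignalR keeps no client list itself, so the app tracks connections.
    // A real app would persist user-to-connection mappings in a database
    // or distributed cache instead of a static in-memory dictionary.
    private static readonly ConcurrentDictionary<string, bool> _connections =
        new ConcurrentDictionary<string, bool>();

    public Task Connect()
    {
        _connections.TryAdd(Context.ConnectionId, true);
        return Clients.clientCountChanged(_connections.Count);
    }

    public Task Reconnect(IEnumerable<string> groups)
    {
        _connections.TryAdd(Context.ConnectionId, true);
        return Clients.clientCountChanged(_connections.Count);
    }

    public Task Disconnect()
    {
        bool ignored;
        _connections.TryRemove(Context.ConnectionId, out ignored);
        return Clients.clientCountChanged(_connections.Count);
    }

    // MoveShape(int x, int y) from the earlier sketch is unchanged.
}
```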
It has content providers, so if I keep scrolling up here, I should be able to find an example where someone has posted a JPEG, like so, and then we have sort of built-in content providers that will then go down and pull down that JPEG and throw it in the room for you, which is kind of cool. So it's a really nice application for learning the type of things you can do with SignalR and what you can do with real-time. So, well worth pointing out. It does get a little bit wacky in general chat sometimes, so I tend to stick into the more focused rooms, so to say. Okay, so let's go back to the JavaScript client for a moment because I talked about this creation of the proxy that I didn't really elaborate on. So if we go back to my index, we can see this SignalR slash hubs request that I put in here as a JavaScript file. Let's have a look at what that actually is. So let's bring up my network stack again. It's at F5. Let's find that one in here. So here it is, SignalR slash hubs. So we can see what we pulled down was a JavaScript file, but this JavaScript file was dynamically generated on the server. And if we scroll down, there's a whole bunch of wire-up stuff which is interesting, but not what I really care about. What we'll see down here is this part. This call to $.extendSignalR with myHubs. And so you can see MoveShape was my class, my C-sharp class on the server got turned into this JavaScript object that has a method that I can invoke that goes ahead and calls this server call function to actually make the call on the server. So it kind of appears like magic, but really all that we're doing is generating some JavaScript. It's really generating a proxy, just like you would have done back in the old days when you added an Asmx reference or WCF reference using Add Service Reference in Visual Studio. And then it would generate you a JavaScript proxy if you added it to the script manager in webforms. It's kind of similar except we do it at runtime. And we also allow the server to call back into the client. And so we get this really, really nice sort of back-and-forward remote procedure call paradigm that we can do between the client and the server, which is kind of cool. Okay, so that's how that works. Let's have a look at some other interesting things. So WebSockets. So I said that WebSockets wasn't running in this demo and it isn't. So the version of SignalR that's available up in Nuget right now is 0.5.0. The next version, 0.5.1, will come with WebSockets support in the box. So I actually have the source application open right here so we can have a look at what WebSockets support looks like just to prove to you that it does work. So it does require Windows 8 on the server, so I'm running Windows 8 Release Preview here. I'm running Visual Studio 2012. Oops, that's a known problem. Let's go back and close that and try that again. Rebuild. I'm running Visual Studio 2012 RC, which comes with IS Express 8. And so because I'm running IS Express 8 on top of Windows 8, I get WebSockets. So if I F12 in here and I hit F5, what we'll see down here is a different sort of request. So I'm in a browser that supports WebSockets. I'm in Chrome. And we can see the negotiate happened. If we look at the negotiate this time, we can see that Try WebSockets is true. So we're going to get an attempt for WebSockets, which is good. And I get this, Connect Raw. We can see that the URL there says transport equals WebSockets. And then you can see the status from the server was 101 switching protocols. 
So this is how WebSockets works. WebSockets is an upgrade request. A standard request is made to the server. It's a get request. You can see that here, get. And then it says in there, there is a header that says, hey, I want to upgrade this request to a WebSocket request. And then if the server replies with the 101, that means, yes, party on, we now have a WebSocket. We can now do persistent connection. And so we won't see any other, if I go ahead and start broadcasting stuff now, there are no other requests happening because I have a WebSocket. This thing is just there happening underneath the covers now. Now, I do have Chrome Canary installed here. And in theory, ooh, okay, see what that does. Chrome Canary is supposed to give me WebSocket inspection capabilities. Let's have a look at this works. I've actually tried this. I'm going off script here. So let's go network. Let's F5. Here it is. There's my WebSocket. And if I go to here and I go to WebSocket, look at that. There's my WebSocket frame. Sweet. So if I broadcast stuff and I come back to here, there's the WebSocket frame. So you can actually see the messages flowing over the WebSocket. And it even shows you the direction. So this is saying coming from the server, that right to left. And this is the JSON payload that Signaler was sending back from the server to the client. So you may have noticed that Signaler has its own protocol. You connect to a Signaler server with a Signaler client. It's much like Socket.io. This is our JSON protocol. So we wrap whatever message that you send, which is we serialize whatever you send as JSON, you can just send random.net objects. We'll serialize them with JSON. And then we wrap that with sort of a message framing protocol, which itself is JSON as well. And so that's what you'll see flowing over the wire. So that's kind of cool. We've got WebSockets running there, which is nice. So we have some other samples in our sort of sample project that's up in source. That's my daughter, Elizabeth. That was raw connection that I just showed you there. We have an example of just streaming. So in this case, all we have is there's a background thread on the server that is just pumping messages to the client. So rather than any type of interactivity, I just have a dumb client that's just listening passively to messages. A bit similar to the stock ticker sample I showed you originally. We have this mouse tracking one, which is kind of cool. So every person who connects gets a cursor and with a little number on it. And you can see that update in real time, which is kind of nice. We had that up to some ridiculous number and it does get a little bit strange after you've got dozens and dozens or even hundreds of people in there. We have this one called ShapeShare, which is sort of a more advanced version of the MoveShape demo I showed you. So I can put this one, I can actually add a username in here. So I have left, right, and then I can add a rectangle and I can move that round and I can add a picture. And I can move that round and I can sort of resize that. And you can see that it says change by right. And if I move it, it says change by right over here. It says change by left. So the source for that one is quite interesting because it shows mapping usernames to connection IDs. So that one is worth checking out when you go and get latest from GitHub. And then we have a drawing pad. Someone contributed this one. 
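On the wire, the upgrade exchange being described looks like this; the connect URL and connection id are illustrative of SignalR's query string, and the key/accept pair is the standard RFC 6455 example rather than a captured value.

```
GET /signalr/connect?transport=webSockets&connectionId=2f4e... HTTP/1.1
Host: localhost:8081
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
```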
So this is using Canvas to go ahead and do sort of multiple people drawing in the same pad at the same time, which is kind of nice. And we have chat, but chat is basically what turned into Java. So I'm not going to bother showing that. So connection status is interesting. So I talked about the fact that we do connection intrinsics for you. We manage connect and disconnect. We also have the concept of reconnect. So some of these transports aren't truly persistent. So WebSockets is a persistent connection. It's bidirectional, it's open. You can send stuff both ways and it stays open. But all of the other transports aren't actually full duplex. Service and events in Forever Frame will keep a connection open over which the server can send stuff to the client. But the client, when it sends to the server, it just makes an Ajax post. And we sort of hide all that under the layers for you. So what happens when the underlying receive channel goes away? I mean, it's the internet. Sometimes things go bad. So we have built-in retry logic. So if the connection goes down for whatever reason, we'll reconnect and we'll continue trying to reconnect until the server comes back up again. Now, by default, well, until last night, I was just looking at the check-ins and we've actually changed this behavior, we used to have, we do have the concept of a time-out, so you can actually set a value that says, keep the receive channel open for this amount of time and then forcibly close it, forcing the client to reconnect. That can be useful in certain situations, depending on what your load balancing infrastructure is like. But now we actually have the idea of Keep Alive. So by default now, from source anyway, not in 0.50, but in 5.1, we have a Keep Alive packet that will be sent. So while that connection is open, we will ping from server to client an empty payload just to keep the connection open. And that will trick load balances like those used in Azure and App Harbour and those type of things to not terminate the TCP connection. Because what we find is that a lot of hosting companies, their infrastructure for sort of handling connections, if they see a connection that isn't doing anything for a period of time, they'll just abort it. And so you have to be able to keep sending information over that, like a heartbeat ping, to keep that connection open. So we do that for you by default now in 0.5.1, which will be out very shortly. So because we have Keep Alive now, this actually won't drop. This will actually stay open. What you would see is if it reconnected, if it did drop, we'll go ahead and reconnect for you. And we will raise that event to your code so you can handle reconnect if you want to, but most of the time you don't have to worry about it. The thing will just reconnect. And you can see here that as soon as I disconnected on this side, I got the red saying that I left. So we handled that disconnect for you as well. Disconnect is a really, really interesting issue. Detecting when someone disconnects from your web server is actually really, really difficult, because there is a lot of different ways people can disconnect. So the reason that worked so well this time is we recently added support for graceful disconnect. So when we actually listen to the events in the browsers that support them that say the person is navigating away from this page, like the window on Unload, and if it does, we will try and send a packet to the server saying this connection is about to be closed. 
Because if you hit F5, what you're basically doing is killing one connection and then reopening another one. Connections don't last beyond pages, of course, because they only open as long as that JavaScript is running. And so when I navigated away from that page, you can see that I got disconnected on the left-hand side because we sent a packet to the server saying I disconnected. If for some reason that packet wasn't able to reach the server, then we have a background thread that runs on your application on the server that does know about all the connections for that server, because you might be running in a web farm, and will every 20 seconds or so check every single one of those connections and see whether the client is still connected. So IIS does have the concept of is client connected? There's a property on the response object. You can check if the client is still connected. And if that's false, then we look at some threshold, and if you're over the sort of the disconnect threshold, then we'll go ahead and fire the disconnect event, raise that up to your code, and you'll be able to do whatever it is that makes sense in your application for when you use the disconnects. Now, when talking about reconnect, that kind of suggests that there's a period where connections aren't open. And so what happens if messages are sent to the user while your underlying connection isn't actually open? It's a little bit hard to demonstrate, but you can imagine here that, let's see if I can actually demonstrate this. Let's see if I can put a break point. So the server is sending messages, and we can see on the left-hand side the one that isn't going to get broken here. So I'm going to come over to my...where is it? I'm going to come over to...resources, thank you. No, that's not what I wanted to... I'm in Canary, which is what's making this difficult. It looks like it's different. Let's come over to here. Okay, scripts is what I wanted. Let's come over to SignalR. So we can see this one. Let me just shift these around so I can see them a bit better. So the one over the left is going to keep running. Drag...no...oh, this is not looking good. Let's try that again. Where is that? Ah, now I've lost it completely. That one is that one. Let's put that one there. That one is that one. Okay, let's pull that one off. There we go. Put that one over there. Okay, so what we want to do is find the transport logic that's going to handle events. So here's service and events. I know that's what I'm using. I'm actually using WebSockets to process messages. Transport...da, da, da, da, da. Here we go. So I'm going to put a break point here. Okay, so I'm broken. And so over here, you can see the left-hand side is still getting messages. And the right-hand side isn't. Well, you can't scroll, but you can assume that it's not. And now if I hit play and I get rid of that break point, we'll see that I didn't miss any messages. I got all the same number of messages. So you can see 21, 23, 25, 27, 29, 31, 33, 35. It goes up by two every single time. I didn't miss any messages, even though that connection wasn't open at the point that those messages were coming through from the server. And that's because on the server, we buffer messages. So we have an in-memory buffer so that while connections are in their sort of their reconnect phase, where they're trying to reconnect again, we'll store those messages in memory. And then when the connection reconnects, it always says, hey, give me every message since message 47. 
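That "give me every message since message 47" behaviour is just a cursor over a server-side buffer. A toy sketch of the idea, nothing like the real message bus, which is considerably more sophisticated, might look like this:

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;

    public class MessageBuffer
    {
        private readonly ConcurrentDictionary<long, string> _messages =
            new ConcurrentDictionary<long, string>();
        private long _lastId;

        // Store a message and hand back its id, which becomes the client's next cursor.
        public long Add(string payload)
        {
            var id = Interlocked.Increment(ref _lastId);
            _messages[id] = payload;
            return id;
        }

        // On reconnect the client presents its cursor and gets everything after it replayed.
        public IEnumerable<string> GetSince(long cursor)
        {
            return _messages.Where(m => m.Key > cursor)
                            .OrderBy(m => m.Key)
                            .Select(m => m.Value);
        }
    }

Real SignalR also expires old messages (hence the garbage collection discussion coming up in a minute), but the cursor idea is the same: the reconnecting client says, give me everything after message 47.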
Because that's the last one I got. And then if the message is since then, we'll send those messages down and the connection will go in its merry way. So that's how we handle reconnect, which is kind of nice. Okay. Scale out. So one of the questions that most people ask, first questions that most people ask, is usually two. How does it scale? Like, how many connections can you support? And how do you do scale out? Well, I'm glad you asked. Let's have a look at that. So we built SignalR with the ability to scale out in mind from the very, very beginning. So that's why we don't store state, like a list of connections which we enumerate over to do stuff. What we do do is base things on a PubSub mechanism. And so all you need to do is have some type of message backplane across your web farm that you can use so that if you've got one server here, another server here, you've got 100 people connected to this server and 100 people connected to that server, if I broadcast from server one, I want to make sure that all the people on server two get that message as well. You need to have some type of common backplane for that to work. And we currently ship two backplanes that we support. One is Redis. So who's used Redis at all, anyone? Oh, no one has used Redis. Wow. Is everyone here a.NET programmer, I'm guessing? Yes, that's why you've not used Redis. So Redis is an in-memory database that's very popular in sort of the Linux world. But MS Open Tech, the new open source subsidiary of Microsoft, has released as its first open source project a port of Redis for Windows. So we have a Redis provider for SignalR to do scale out and we have an Azure service bus provider. So if you're deploying SignalR in Azure with multiple nodes, you can plug it into Azure service bus as the backplane for the messages between your nodes. So let's use Redis because I'm just running on my local bus, my local machine here. So I'm going to come up to my app start. And you'll note that I already have an app start method set up here ready to go. I just got this commented out. So I've already installed a package called SignalR.Redis. So I just go to newget, install SignalR.Redis, which is going to pull in the bits that I need to use Redis and SignalR into my project. And then it's going to add an extension method hanging off our I dependency resolver type that says useRedis. And so all I have to do is pass in the address to the Redis server, the port, the password if there is one, and the event key I want to use to sort of segregate my messages for this app. And that's it. And so if I build that, I now have to go off and obviously start Redis because my Redis server isn't running. So let's do that. So this is PowerShell. I've literally gone to GitHub. I've cloned the Redis for Windows repository from GitHub and I built it using Visual Studio. OK, it's C++. I just upgraded it from 2010 to 2012. And then I built it. And then I went to the debug folder and I'm running start process Redis.server. So now Redis is running on my box. It's running on port 6379, which is good because that's what I configured it for, 6379. So that's good. And then what I'm going to do is I have another command over here, which is going to kick off, where is it? This one. I'm going to start my web farm. So it's not good doing scale out unless I have more than one web server. So here I have two web servers. I've got IS Express on one side listening on port 8090 and then IS Express on another side listening on port 8091. 
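For reference, that Redis wiring really is a one-liner in app start, something like the sketch below. The exact resolver/host type and method signature moved around in the 0.5-era builds, so treat the names here as an approximate reconstruction from the description rather than the definitive API.

    using SignalR;        // 0.5-era namespace; later builds use Microsoft.AspNet.SignalR
    using SignalR.Redis;  // from the SignalR.Redis NuGet package

    public static class AppStart
    {
        public static void Start()
        {
            // Server, port, password (empty if none), and an event key that scopes
            // this application's messages on the shared Redis backplane.
            GlobalHost.DependencyResolver.UseRedis("127.0.0.1", 6379, "", "MoveShape");
        }
    }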
So let's kind of reorganize all these windows so this makes sense. So here we go. This is sort of a visual idea here. We have one server on the left, one server on the right. Redis in the middle. OK, that's kind of cool. So let's go ahead and connect some clients to this. So let's get my browser running. Let's get one browser connected to... Let's see if I can do the resize shuffle here. Get one browser connected to localhost. I want this to go to 8090 slash move shape. So that you can see over here that IIS spat out some stuff saying that things are happening. So let's open another browser and move that one over here. And we'll connect that one to the other web server. So you can imagine that I was in front of... I have a load balancer, right? And I'm actually just going to the same address. And now when I drag, you can see that it's working. So a little bit lagged this time. And it looks like I killed something. Demo fail. Oh, there we go. It's just lagging. My machine is getting a little bit wacky. You can see it did catch up. So even though it lagged, it did actually eventually catch up. All the messages got through. So now that it's warmed up, it's actually working quite well. So I now have two completely different web servers. So my application is running twice in two separate processes. So there's nothing shared between these processes. And all the cross-process communication is happening through Redis, which is kind of cool. Now, the one thing that isn't working, as you can see, is my connection count is wrong. So can anyone tell me why my connection count is wrong? I didn't store that in Redis. Remember, it was just in a static dictionary. So I've got a connection count of one in each server, which is correct for each server. But obviously, if I was writing this app to be scale-out-friendly, I'd have to put that sort of information in some type of back-end, either in a Redis store or a SQL Server or something like that. But that shows you just how easy it is to get scale-out working with SignalR using something like Redis. Okay, so that's kind of cool. So that's one of the questions answered that people always ask, which is, how do I do scale-out with SignalR? Well, you can. The other question is usually around load and scale. How many connections can you support? Well, I can tell you that, and we take this very, very seriously, because we obviously want SignalR to be as performant as it possibly can be and support as many connections as it can. With the current build on my machine at home, which is a Core i7-920, which is like a first-generation Core i7, it's about three years old, so it's not a new machine by any stretch, but it's quad-core, hyper-threaded, so it's a decent machine, but you could build one today for four or five hundred bucks. I can, with 5,000 clients broadcasting at 10 or 20 messages a second, get about 30,000 sends per second on that infrastructure. So that's pretty good. A three-year-old machine getting about 30,000 sends per second for 5,000 clients. Our bottleneck at the moment isn't CPU. In fact, the CPU doesn't max out during that test. Our bottleneck is contention. We have a fundamental flaw in our core architecture. In 0.6, we're going to be re-implementing the core of SignalR, excuse me, to change how we sort of do our core-level PubSub infrastructure, and we expect that number to drastically improve.
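Coming back to that connection count for a second: if you did want it to survive scale-out, the fix is just to move the counter from a static field into something all the nodes can see. Purely as an illustration of the shape of it (a Redis-backed implementation would use the INCR/DECR commands, or you could use a database row):

    using System.Threading;

    // The per-server static int, hidden behind an abstraction so a farm-wide
    // store can be swapped in later.
    public interface IConnectionCounter
    {
        long Increment();
        long Decrement();
        long Current { get; }
    }

    // In-memory fallback: correct per server, wrong across a web farm.
    public class InMemoryConnectionCounter : IConnectionCounter
    {
        private long _count;

        public long Increment() { return Interlocked.Increment(ref _count); }
        public long Decrement() { return Interlocked.Decrement(ref _count); }
        public long Current { get { return Interlocked.Read(ref _count); } }
    }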
So we've got some very smart people on the ASP.net team, much smarter than me, building custom data structures for doing message management at the moment. So as I said, we have an in-memory message store, and at the moment we have some issues with garbage collection because messages live for a while before they go away. So while it works very, very well, and when you get under really, really heavy load, like crazy heavy load with thousands of messages a second, you can certainly run into issues. We have new message stores, infrastructures, which can do millions and millions and millions of messages a second without breaking a sweat. So over the next four to eight weeks, we're going to integrate that into the Core of SignalR, and in 0.6, you should see the benefits of that. And as I said, we'd certainly expect the throughput numbers to at least double, and if not triple. So I'm certainly shooting for hopefully six-digit sort of messages per second throughput on my setup at home. But that's all good. That's just talk. Let's get a low test up and running here right now so you can sort of see how this works and what tools we use to do our low testing. So we have a performance harness for SignalR that we wrote called Flywheel. We have these crazy names. I don't know how it started. I think it started when David wrote a load generation tool called Crank. And so I thought, great, Crank's connect to Flywheel. So I'll create a harness called Flywheel. And then within Flywheel, we'll have a class called Shaft, because Crank connects to the Shaft, which is on the... Anyway, so we have these silly names. We're running out of engine parts to call stuff, so it's not going to scale very well. So we have this thing called Flywheel. And if I run that up, what you'll see is it's a dashboard, essentially, that spits out real-time information about what's happening on the server, like how many messages are being sent through the message bus at the moment, and graphs sort of the number of sends and the average sends per second. So that's kind of cool. And then from here, I can control how many broadcasts I want to do. So obviously, there's no one connected to this at the moment, because it's just me. I haven't created any client load yet. So setting broadcast rate of 5 per second isn't really going to do an awful lot. But we'll see how we do that in just a moment. So I can set the broadcast size, I can set the broadcast rate, and I can say what I want to do when the Flywheel receives messages. So we also have in Flywheel something called Mini Crank. Let's try that again. Mini Crank. So Mini Crank is a page that connects to Flywheel, connects to Shaft, and then says, I can say go ahead and send a message every 100 milliseconds. So I'm now sending a message 10 times a second. And so what we should see now, if I say, well, on receive, I'm going to echo, we should see our send per second rate go up. You can see it did, because it's echoing back messages as it receives them. So that's kind of fun, but it's only one client, so that's really not that interesting. Mini Crank is really not very useful for doing load, it's just useful for testing that the harness works. What we need to generate load is a real load test generator. And so it's likely enough the IAS team builds one of those, it's called WCAT. Anyone used WCAT before? Okay, like three hands, let me quickly show you WCAT. So WCAT stands for the Web Capacity Analysis Tool. Great name. Let's make the X64. 
Make sure that most people are measuring and running 64-bit machines now. So when you search, make sure you search for X64, because the first page results otherwise won't show the 64-bit download. It's kind of strange. So you want to download WCAT 6.3 and install it. It'll just run on any Windows box. It's a native XE and some scripts that you can use to generate load, HTTP load. Now, so while this isn't a signal or client, I can make it do enough to make it generate load like a signal or client, or at least listen to signal or broadcast, which is all I really care for doing load testing. And so let's go back here and let's look at my WCAT setup. So this is all available as well. So we have a repository up on SignalR. So I didn't actually tell you where it was. Up on SignalR on GitHub. So here's github.com. So I go back up to SignalR. That was way too many SignalRs. There is a, here's Flywheel, which I just showed you, and here's Crank. And in Crank, we have a folder called WCAT, which contains all the settings files, which I'm about to show you, that you can use to generate load against SignalR using WCAT. So here's my WCAT folder. If I look in settings, most of this stuff is just copied from the WCAT samples and then tweaked. So I'm going to emulate 1,000 clients. So I'm going to have one physical client and then 1,000 virtual clients. So 1X year is going to be 1,000 people. So WCAT has the ability to have a controlling WCAT server and then multiple WCAT nodes. And then each WCAT node will be a client. So you can make that client 10 if you have 10 client machines. And then you can say each one of those client machines is going to emulate 1,000 virtual clients. And then that one WCAT server will control all the client machines. It's actually a really, really cool little tool. But I'm just doing this all in one box for the sake of a demo. So then I have a scenario file. So I have this shaft scenario file because I'm going to connect to shaft. And you can come down here and you can see that I'm going to connect to port 29573. And this is the URL I'm going to connect to to make it look like a signal receiving connection. So here's that connect phrase that we saw before when we looked at the network stack in Chrome. And I'm going to use the forever frame transport because it doesn't matter. I mean WCAT doesn't support WebSockets. So I just have to use either service end event or forever frame. So I use forever frame. And then I'm setting the connection ID to whatever the current virtual client index is. So vClient index is just a function provided by WCAT to get the current number of this virtual client. Okay. So I have a batch file that sort of runs all this together. So it runs the WCAT script with all the parameters that you need to make this thing run. WCAT isn't a user friendly tool. Let me just tell you that. It's quite finicky to get running. So it is well documented. It does have a fairly large word document, but it is a learning curve. So it's nice to be able just to get some scripts that work and then run it. All right. So we have, let's find my WCAT. There it is. So we have flywheel running. Let's see if it's still there. There it is. Okay. Let me pull that out. Let me pull that down there. Let's minimize that one. Let's put that up there. Let's come here. So currently I have no clients connected. You can see over here, no clients connected. Clients is zero. So let's start up WCAT. Okay. So WCAT is starting. So we can now see that connected clients are coming in. 
You see it's starting one virtual client every so many microseconds. So I'm going to go up to a thousand clients. So now it's sort of in its listening mode. It's still starting up clients. I'm going to get to a thousand. Okay. So now I'm going to start broadcasting. So let's broadcast at five messages a second. And what we should see is that I get about 5,000 sends per second, because I have a thousand clients and I'm doing five broadcasts a second. So that's 5,000 sends per second going through the system. And indeed, so far, this machine is able to cope with that without any problem. We don't start running into that contention limit that I talked about in the core of our architecture at the moment until you get to about 30,000 sends per second. And this machine wouldn't be able to handle 30,000 sends per second anyway. So you can see the CPU use, you know, is pretty high. It's actually, funnily enough, a lot of it is drawing this graph. If I look at the process, that's crazy, I know. If you look at the CPU, look, Chrome is using 38% of my CPU. So, just a little lesson. Don't run everything on the same box when doing load testing. Don't run your dashboard on the same box as your server, because drawing pretty graphs with Canvas or SVG takes a lot of CPU cycles in the browser. So you can see my web server, IIS, is actually only doing like 40%, 30 to 40%. And so it's actually the browser that's taking up most of the rest of the CPU at this time. So that's going to run for, I think, 90 seconds, is the test duration. So we'll just let that run because I want to show what's going to happen when the test finishes as well. So you can see there's been no errors. WCAT will keep spinning out all the stats every 10 seconds from the last 10 seconds. Now, because these connections never end, nothing comes out. Usually WCAT just makes requests, and then as soon as the request comes back, it makes another request again. It's designed to generate load, right? But I'm using it to simulate persistent connections. So it just makes the connections, and then because I never end them, they just stay open. And WCAT just sits there going, OK, connection's open. I'll just keep it open and just keep spinning out empty stats. WCAT's fantastic because it has virtually no memory overhead. It's a native client. It happily receives everything I send to it from the server and then just throws it away again. It just uses no memory. It's fantastic. Incidentally, I did manage to saturate this once. I'm using a 32-byte payload here. On my infrastructure at home, I have a dedicated gigabit switch, and then I have a separate machine that I use to emulate the clients. And so when I ran that machine with 1,000 clients against my server through my dedicated gigabit switch with a 4K message size, I was able to saturate the gigabit NIC on the client machine. So that's always a really, really good indication of your load test. If you can saturate the NIC, that's good. That's really, really good. It means that your bottleneck isn't CPU or memory or anything else. So the test is finished now. You can see nothing is sending anymore. No one is connected. So if I scroll over, we should see I still have people who are connected. So remember I said there's the idea of a graceful disconnect and a non-graceful disconnect? So WCAT finished, but it doesn't know to send that packet that SignalR understands to say that the connection went away.
So the connections were still alive on the server until such time that our background process came around and realized that they were dead. And you can see that that happened. So at this point here, our background process realized those connections were no longer there and would fire the disconnect event for all of those connections. So even though WCat weren't gracefully disconnect using a signaler, graceful disconnect, we came ahead and cleaned up those processes without any problem. So interesting point in terms of scale and RAM. I've talked about CPU and I've talked about connection. I've really talked about RAM. Where's my IS express process? Do it that way. Okay, so that's probably the one. I'm assuming the one that has all the memory. 100 meg. ASP.net has out of the gate about 40 to 50 kilobytes of managed heap size overhead per connection. I'm not going to lie to you. It's fairly heavy. So once we fix the sort of contention problem that we have in 0.5, which we'll fix in 0.6, your major limiting factor for scale is going to be memory. So we have actually driven signal up on a single box to 100,000 concurrent connections. And I think it used 40 gigabytes of virtual memory or something crazy like that. Because of this sort of limit in ASP.net. Now we are working on in 4.5, things have improved. And if you're using web sockets on 4.5 on Windows 8, that overhead is drastically decreased to like single digit kilobyte overhead per connection. And so it does get a lot better. But you can host signaler outside of ASP.net. Signaler is not an ASP.net library. It's a.net library. And we are host agnostic. The default setup when you new get in signaler is host on ASP.net. But you can host on whatever server you like. And indeed in 0.6, we're going to standardize on OWN as our hosting layer intrinsic. So if you want to host on a different server, like self-host using hdb sys, or a socket server like flywheel or kayak, which are open source C-sharp web servers, you can absolutely do that because we'll support OWN and those things support OWN. So you'll just be able to host signaler on top of whatever it is that you want. And there are people in the community who have already written custom signaler servers in order to do their own hosting from within their own XE. Now again, I think 99.99% of people who use signaler are going to use it in ASP.net and they'll never have to worry about that. But if you're going to be doing really, really high concurrency, lots of clients per machine sort of deployments, then a custom server might make sense if you're not going to be using WebSockets just because of the memory benefits that you'll get. You'll be able to support more connections per server, which is kind of cool. Okay, so we're nearly out of time. I've pretty much shown everything I wanted to show. So we have two minutes left for questions. Anyone have any questions? Yes? Mobile clients. What's that? Sorry? Mobile clients. Mobile clients. The question is mobile clients. Do we support them? Yes, we do. So if I go up to GitHub and we look at the repository, you'll note that we have a bunch of clients by default. So there is clients for Windows Phone 7, clients for Silverlight. The JavaScript client is just in the signaler library. There's a client for Windows 8, Metro apps. And there's a client for.NET itself. So if you're just using like a WPF app like I did that's an audit client. The community has contributed an iOS client. There's an iOS client that someone else maintains. 
There's a MonoTouch client that someone has built. There's a Java client I believe someone has built. Someone built a Node client. I have no idea why, because Socket.io is great. Use Socket.io if you're on Node, but they did. And I believe there's a MonoDroid client as well available. Now, our current thoughts for version one of SignalR is we would like to have all those clients sort of built by the team and supported. Because we think SignalR is much more attractive the more clients it has. So look to see, ask, pull those clients in or build new versions of those clients for version one whenever they get down. Any other questions? Yes? So the question was what about T4 templating instead of generating the script dynamically? So one of the problems with generating the hub proxy at runtime is you don't get IntelliSense on the hub in Visual Studio. It's kind of nice in JavaScript if you can get that. We do have some prototypes of that and it is on our backlog to support design time slash compile time generation of that file so that you'll get IntelliSense. I do believe actually something has already checked in that does that. I'm not sure that it uses T4. I think at the moment it uses an XE so that you can just run that from inside your build file so that it'll just build the JavaScript file and jump it into your project. And you'll just reference that JavaScript file rather than that magic route that I showed you before. So yes, we are working on that. Yes? I've seen on iCat for instance you get the constant loader. Yeah. Is that something you know? So the question is on some clients with some of the transports you get the little loader spinny to stays there forever. It's a registered issue. We're looking at it. The current fix that we have that we're testing out is just to delay the creation of the transport with a set timeout. So rather than doing it immediately just say set timeout 250 milliseconds and then create it and that seems to fool the browser enough. It seems crazy that a browser would support service end events and then show the spinny while the thing is open but they do. Some of them, iOS in particular. So we are going to fix it. Yes? Yep. Right. So I think, let me rephrase, the question was about graceful disconnect and IE and IAS and about what combination is required to support the graceful disconnect? Right. So IAS has this ability to asynchronously notify your code when a client disconnects, which is nice. Now that's not graceful disconnect. Graceful disconnect in a signal our terms is when the signal our client tells the server I'm about to stop the connection. So any browser that will run our code on window unload and let that Ajax request go through will support graceful disconnect because then we'll just receive an Ajax request on the server that is a disconnect command and then we'll disconnect the client in our server. If that doesn't get received and there's a couple different ways that we can detect the client is disconnected. IAS 7 and above gives us this is the, IAS 6 and above gets us this is client disconnected property on the response object that we can essentially poll. So every 20 seconds we check the connection and say is client disconnected false? Oh it is. So the client is now gone. Let's see if it's over the disconnect threshold and then we'll disconnect them. 
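On that client list, the .NET client is worth a quick illustration. This sketch uses the later Microsoft.AspNet.SignalR.Client package rather than the 0.5-era client, which named a few things differently, so take it as the shape of the API rather than exactly what shipped at the time; the hub and method names match the MoveShape demo only as an assumption.

    using System;
    using Microsoft.AspNet.SignalR.Client;   // later client package; the 0.5-era client differed slightly

    class MoveShapeClient
    {
        static void Main()
        {
            var connection = new HubConnection("http://localhost:8090/");
            var proxy = connection.CreateHubProxy("MoveShape");

            // Server -> client callback, the same one the browsers listen for.
            proxy.On<int, int>("shapeMoved", (x, y) =>
                Console.WriteLine("Shape moved to {0},{1}", x, y));

            connection.Start().Wait();

            // Client -> server call.
            proxy.Invoke("Move", 10, 20).Wait();
            Console.ReadLine();
        }
    }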
In ASP.net 4.5 we've added if you went and saw my talk this morning there is a new asynchronous notification you can subscribe to to get told when the client goes away because IAS 7.5 or above supports this and so if you're using ASP.net 4.5 on IAS 7.5 you'll be able to subscribe to a cancellation token on the request object that will fire when the client shuts down or closes the connection. Assuming that that gets to you know because underlying the TCP level connection there will be a message sent up to the server and IAS will bubble that up asynchronously to your code. So we do our best whether it's graceful or disgraceful, no that's not right, graceful or not disconnect to try and notify your code that a client has gone and do it in the most timely manner that we can and try and ensure that we only do it once. That was a big problem in disconnecting web farms that we didn't fix until 0.5 is how do you handle the fact where I connect to server 1 and then I reconnect to server 2 now server 1 is going to fire a disconnect event. So we had to handle we have to have cross node collaborations such that when you connect to server 2 server 2 can tell all the other servers hey I now own connection 1, 2, 3, 4 everyone else just forget about it don't fire the disconnect event. So that's what we added in 0.5 to make disconnect work in web farms properly. So I'm out of time but I'm happy to keep answering questions while people are still here because it's 15 minutes until the next guy so. Oh actually it's only the party now so I'm just keeping you from drinking so if you want to know more I can do that but thanks for coming along and try it out.
Learn about the library that is making ASP.NET developers everywhere gasp in amazement. What is SignalR? It's a persistent connection abstraction library for web development on .NET. Inspired by socket.io and now.js on Node, SignalR is allowing .NET developers to create experiences not previously possible in a browser, with an easy-to-use, high-level API that looks like magic.
10.5446/50964 (DOI)
OK, so I'm very excited about this. So I'm going to spend the next hour or so bouncing around. OK, so I've done a couple of talks so far at this conference. The main thing that I've been discovering since I left the world of enterprise agile consulting and joined a small trading firm writing software and then helping the organization work better is that I knew nothing about writing software, which is really irritating because I thought I was quite good at it. So a bit of background. Who's been at any of my other talks here? OK, put your hands down. Who hasn't been at any of my other talks here? Right, this is for you. So by the way, for those of you who have been at my other talks here, there's going to be some repetition here. What I'm going to do is go through those bits quickly so we can get to some fun stuff. OK, I mean it's all fun stuff, but it's fun stuff that you've already heard. So maybe the second time over you'll hear different things from it. Let's see what happens. So I was working at a company called ThoughtWorks, which is an agile software delivery consulting firm, quite big all over the world, very good company. What they do, or what they specialize in, is enterprise delivery. What I realized when I joined this little trading firm called DRW and started writing trading software with them was that I had what I can only describe as culture shock. I went in there and I saw two things. These guys were delivering software incredibly quickly, incredibly for me. I literally couldn't believe how quickly they would turn around a new system. They would turn around a system in days that I would imagine would take at least weeks if not months. And they weren't just hacking on these things. These things were good quality software. You looked at the software and it was well written. It was obvious how it was working and what was going on there. And there was good naming and separation of concerns and all that good software engineering stuff. And they were just churning these things out and experimenting and iterating and all that kind of thing. And I didn't really know how to cope in that environment. So I decided there was one of two things to do when you're in an environment you can't cope with. One is to understand that environment and learn to adjust to it. And the other is to turn around and run like hell. I really did consider that one a couple of times. But I decided I'd stick around and see if I could understand it. And what I realised is the thing that we've been calling agile for the last 10-odd years makes certain presuppositions. It presupposes typically that you're working in a big enterprise. You've got all the constraints of a big kind of hierarchical organisation, that kind of thing. And so things like Scrum, less so XP, but things like Scrum and DSDM and these kind of big, plan-heavy methodologies, they make what I think are slightly ridiculous claims. They talk about things like hyper-productive teams. We can create hyper-productive teams. We can create a team that can produce software in mere months that used to take years. For a start that's brilliant. Being able to produce something in months that used to take years is brilliant. But I would just call that productive. There's nothing hyper about that. When you suddenly turn all the dials up and you're producing these things in days or hours even, that's hyper-productive. The Scrum message, I rag on Scrum quite a lot, but it puts itself out there so it's kind of easy.
But they say we can produce these things in weeks and months. If you think that most of these environments that people adopt these kind of early agile adoptions into are typically big shops with a heavily gated process, and maybe they haven't delivered anything for years. Maybe they've just come to the end of another one of those two or three year master plan things that failed. So their delivery record to a first approximation is zero. If you can get them delivering anything, you've just created an infinity improvement. That's pretty impressive. But it's still not hyper anything. What occurred to me is maybe all this stuff, all this TDD and XP and BDD and all these other things, maybe they're a local maximum. So maybe they're a really good way of incrementally getting to some level here. But if you took away some of the presuppositions and some of the assumptions, you could get to a whole nother level of productivity here. And we weren't looking at that because we were too busy optimizing locally here. And that got me thinking. So I'm looking at this team. There's one team in particular, but I noticed it a few places around the RW. And I was thinking, OK, there's a number of possible reasons, or another possible explanations for these guys producing this stuff so quickly. For a start, there's probably three or four of these guys. And maybe it's those three or four guys. OK? Perhaps it's a fluke. Perhaps it's just a product of those guys. And there may be other groups of three or four guys in the world that can do this. But for each one of them, it's just how they roll. And maybe there's nothing else there. And I'm just really lucky because I get to hang out with them. Maybe, though, there's some stuff they do that I could catalogue and document and capture and share. And there's a huge kind of warning sign that comes with this. These patterns are not for beginners. OK? The people I've been watching do this are very experienced developers. And they've screwed up more ways than most of you have screwed up, because that's how you get experience. You get experience through screwing up again and again and again and figuring out how to screw up less. OK? So, and what I've found, I've been doing this probably for about a couple of years now, is I've found I've got groups of patterns that have fallen into basically three categories. We've got kind of technical patterns, technological patterns, if you like. So patterns of how to arrange software and how to develop software and sort of design ideas that help you move very, very quickly. Then there's sort of organisational patterns, ways to structure your teams and yourselves and your communication channels to help you move very, very quickly. And then there's behavioural patterns, ways in which you can interact with each other and things you can do differently that, again, help you go very quickly. So, what I want to try and do this afternoon is go through some of those, maybe sort of five or six of those, and see where we get. And there's a few at the end, and if we've got time at the end, then we can just start calling some out and I'll start describing some of those as well. So, OK, let's just unpack this statement. Patterns of effective delivery. What does delivery mean? Why do we write software? Pardon? So that we can deliver. So that we can deliver what? A product. OK, what does the product do? Why do I want the product? Sorry? So people can consume it. OK, so we're delivering ideally something people want. Right, they want the value. 
Hooray, thank you. They want the value of your software. There's a famous, I love this guy, Professor Ted Leavitt, Theodore Leavitt, a Harvard Business School. He said, people don't want a quarter inch drill. They want a quarter inch hole. OK? They're not interested in how that hole gets there. Aral mentioned this yesterday, they opened a keynote yesterday. He's saying, he doesn't want a washing machine. He wants clean clothes. If he could get clean clothes with no washing machine, he's already won. If we can deliver business capability without any software, we've already won. We can all go home now. Hooray, bars open. Right? That's what we want to be able to do. So we need to stay focused on the outcome, on the capability we're delivering to people, not on the software. And it's so easy as software professionals to think that software is the product. Software is not the product. Software is the drill. OK, the hole is the product. That's what I'm interested in. So delivery of what? OK? So delivery of software is business value. Delivery of software is utility. So when I deliver software, it allows me to do something I couldn't do without that software. Depositing money into my bank without leaving my house, or transferring money between accounts without leaving my house. That's a software problem. It might also be a phone call problem, but someone somewhere is moving some money around for me. OK, so that gives me a capability. Enabling me to do things faster or more conveniently. So it may be a thing I can already do, but having some software makes it easier. Discovery is a fun one. So you look at things like, if I deliver software that allows me to experiment on lots of different things, I'm in the domain of trading. A lot of trading is experimentation. I've got a bunch of different models. Some of them might be good. Some of them might be bad. I don't know. I try them on a load of financial data and I see which ones seem to give me good results. Now, the faster I can iterate over those, the faster I can learn, and the faster I can hopefully home in on something that's going to be useful and lucrative. We like that. So you've got lots of different reasons why you might want software. All of those are about delivery of something that isn't the software. It's either convenience or utility or discovery or something that isn't the software itself. What does effective mean? Someone told me, I think in Sweden, the words, is it effective and efficient are the same word. Is that right? That makes me a sad panda. Effective and efficient have very different meanings in English, and so maybe there are analogous words in Swedish. But effective is whether or not you're doing the thing that gives you the outcome you want, whether you're doing the right thing. Efficient is how well you're doing that thing. So you can be extremely efficient at doing something, and if it's the wrong thing, it doesn't matter. All you're doing is spinning your wheels. Likewise, you can be extremely effective at something that's really wasteful. It's not very efficient, but it gets you there. There's an example this morning of concurrent set-based engineering, which is how big aeronautics firms deliver life and safety critical pieces of kit. They'll ask several different groups to go off and solve the same problem independently, and then they'll choose whichever one they like best. 
But they'll fund all three or all four, and what they find is that each of them will approach it completely differently, and they'll do some learning by looking across these different experiments, and then they'll choose one. Now, that's really wasteful. It costs end times as much as doing it once. But you get there much faster. You don't have to iterate on each one and then have meetings and assessments and all that nonsense. You just say go to a whole bunch of people, and at some point in the future you say that one. So it gets you there quicker. So effective is about getting to your goal more quickly. Now, what's interesting here is what your goal is. So what are you optimizing for? Might be optimizing for time to market. The thing we want is to get this software out as fast as we can. It doesn't matter if it's the wrong software, we can iterate really quickly, but we need to be first to market with this thing. It might be user experience. We're hearing a lot about user experience at NDC this year. So it might be about delighting people. Now, if I need to wait a little bit more to deliver something that delights people, but that ends up taking all the market share, then it's worth waiting a little bit. So let's optimize for that. And maybe again, as I say, optimizing for discovery, for iterating many, many times over different experiments. Now, depending on which of those things you're optimizing for, effective means very different things. The kind of software that will give a beautiful user experience and will make people happy and joyful may not be the same as software that gets to market really quickly with bare bones functionality and is ugly as you like, may not be as written in the same way as software that is designed to be iterated over very, very rapidly and thrown away, or most of it thrown away. It gives you different forces that you're operating with. So what I'm going to give you today isn't go away and do this stuff, it's here are some things to think about. So if you look at your behavior, if you look at what you do when you're on your projects, you wear software projects, and you've got your things like your iterations, your sprints, your time boxes, your estimation, your planning, what are you optimizing for there? There's a clue in my previous talk. What are you actually optimizing? What's the thing you're trying to be most effective at when you do, when you work in iterations and sprints and planning and whatever? You're optimizing for certainty. You're optimizing for predictability. I said to someone recently, I said, the only time you should be optimizing for predictability is if you're a clock. Other than that, it's probably not the most important thing. I love my new iPhone, it's so predictable. No, the software is really predictable. I look at all the software and it's like, no, no. Or rather, it was delivered in a very predictable way rather than software being predictable. And now patterns. The other thing I want to mention here is patterns. Except my computer just stopped working. There we go. So patterns. A pattern is a strategy, a technique that works in a specific context. So a guy called Christopher Alexander in the 70s came up with this idea of patterns. And the idea, he talks about resolving forces. So you have a certain number of forces in the context of building architecture he was talking about. 
But in software as well, you have certain forces, and you introduce a thing, a technique, a strategy, some sort of change, and it resolves some of those forces, it makes some of them go away, but it introduces new forces. So there's not like a pattern that does X. It's that there is some combination of patterns whose balance of forces is the thing you want. So patterns are quite subtle. So, okay, onwards. So I decided I wanted to get some kind of landscape to put these patterns in. And these axes, they're fairly arbitrary. I've just found them a useful way to reason about these patterns. They may not be how they end up being represented. I don't know. This is a work in progress. But on one axis, I've got difficulty. How hard it is to be good at this thing. On the other axis, I've got effectiveness. How much better at delivering value or delivering software does this thing make you? So ideally what you want is stuff in the bottom right corner. Stuff in the bottom right corner is easy to learn and makes you super productive. Guess how much stuff is in the bottom right corner? None. None at all. Okay, well, there might be stuff that's kind of moving that way. And those are the things to maybe exploit. You tend to find, I guess, I would expect to see a clustering up the kind of diagonal axis there, that things that make you more effective are also more difficult to learn. I've not really been finding that. So I'll just use this as a space to kind of store patterns on for now. Does anybody know, I'll be very surprised, but does anybody know what the picture at the back there is? If you can just make that out. It's not a map of Florida. It is a map. It's a map of Treasure Island from the original print of Treasure Island. I thought it was a great thing to have at the back there, because these are all kind of nuggets of gold, I think, and so they're all kind of hidden all over this island. So without any further ado. I've got some notes here. I should be talking, referring to you guys. So, yeah, the first principle of lean is think. So everything else you've read about lean, that's interesting, but the first rule of lean, the first principle of lean, is think. Don't just take stuff as rote. So again, with the stuff I'm going to show you, don't just say, oh, Dan said do this, so I'm going to do this. Because Dan said a bunch of other stuff as well, and some of it's kind of contradictory, and that's the point. It's that you guys need to make decisions for yourselves. Something I've realised while I was putting these together as well is that it seemed to me that the way agile is presented, and particularly the way agile methodologies, if you like, are presented, they seem to be optimising for ease of learning, or ease of knowledge transfer. So if you think about all the different agile practices you might be doing, or you've heard about or read about, you see that, don't they all package up neatly? Isn't that odd? Isn't it surprising that they all take about a page, and that you can maybe go on a one-day course for each one? I would say odd slash lucrative if you're an agile consultant. Dave Thomas, the Smalltalk Dave Thomas, not the Pragmatic Programmer Dave Thomas, has come up with a number of definitions for agile that I'll kind of pepper into this talk. But one of my favourites is, agile is just "buy my stuff". He talks about agile consultants and they go, oh yeah, agile is just buy my stuff. My stuff is better than their stuff. So as an ex-consultant I can say that.
Before I dive into the patterns, I just want to say one other thing, which is when I was about seven, I picked up origami, no paper folding, and there was a fantastic book called Origami by a guy called Robert Harbin, and I made all the little things in origami. And then there was another book called Origami 2, because Robert Harbin was very good at origami, but not very original when he came to naming books. And I started making all the things in origami 2, and there was one thing I couldn't make, and it's called a jackstone. Has anyone ever seen an origami jackstone? Okay. This may or may not work though. A jackstone is one of these sort of six-pointed star things, and the idea is that when you play jacks, you pick up balls or some game that you have, but it's basically a little six-pointed star. Imagine like a little cross with a star coming up top and bottom, and you can fold one of these out of a single piece of paper. And I did all the folds in this book except this jackstone. I kept going back to it. And the way you fold a jackstone is this. You start by doing this really intricate fold, and there's pages and pages of things. You end up with this tiny little piece of paper, and then you unfold it all, because that was just creasing the paper. Now you start the jackstone, and you fold it in here, and it involves this bit, and this bit, and taking this bit, and put it. And it gets quite intricate. And I was seven. And I was really enthusiastic, but not very good. And so anyway, this thing bugged me for years, because then he brought out origami three and origami four, and I did everything in those, and everything, and I kept going over them, and I couldn't solve the jackstone. And then I just stopped doing origami, because I turned nine and I was cool, you know. And when I was about 15, I realised I was a bit of a geek, so I picked up origami again. And I remembered this thing from like seven years ago, and I was like, eight years ago, and I was like, I bet I could do a jackstone now, I could do everything, I'm 15. And I went to fold a jackstone, and I did the thing, and I folded it up, and I opened it out again, and I got stuck. I was like, no. And it turned out that about every five years, I would just revisit this thing. It wasn't deliberately, oh, it's five years, I should go and try it. It was just like, you know, I'd be clearing something out, and I'd find my old origami books, or someone would mention something. And anyway, I was like 25, I think it was. No, maybe more recent, I was 30. So every five years, I've fold it four or five times, I'd just turned 30, and I found, again, I was moving house, and I found this origami book, and I was like, right, I'm bloody 30. I'm going to fold this, and I did it, and suddenly something clicked, and it was like the shading of the way that you did this fold, had always, I'd always assumed it was going one way, and it turned out it was going the other way. And that suddenly meant that I could fold this jackstone. And so, I was like, the relief, I'm 30, I've folded a jackstone, it's been haunting me for 20 odd years. So what did I immediately do after that? Folded another 20 jackstones is what I did. This is not a fluke. I will show you how good I am at folding jackstones. I will make it so I can. So, in a lot of these patterns, my experience has been adopting them is like that. You might have heard of it, and I've been doing it, and I've been doing it, but folding them is like that. 
You might approach one of them several times, and it just won't click, or it won't make sense, or Dan's talking rubbish, and that's fine. Eventually, at some point, when the context is right for you, it might click. At that point, you'll go, I get it, and you'll go, bu-da-da-da-da-da, and you'll be able to do it whenever you like. So that's just a thing to bear in mind. So, okay. I mentioned this pattern yesterday. Spike and stabilize. You want to get something out rapidly. You want to experiment, maybe iterate over a domain that you don't know very well, but you need to do that in a production environment. I can't test my trading software unless I'm on a production trading platform. So I've got this conflict now. I need to get software out quickly. I want to write production quality software and do my TDD and my SOLID and all my good practices. And what we tend to do at the beginning of an iteration, or when we write a story, is we tend to give it a name. We say we're either going to deliver this feature or this story, or we say we're going to spike out this idea. And a spike is a very clearly defined thing. A spike is a piece of work that you're going to do that you promise you're going to throw away, because the only reason you're doing it is to learn. And Kent Beck describes it as driving a spike through something. So, you know, it doesn't matter how much mess you make, you'll get from one side to the other. Okay. And it turns out that if you think in terms of spiking versus production code at the point when you're doing your planning, or the point when you're deciding what you're going to do next, you're actually committing early to the quality of that code. You're saying, before I write this code, I'm going to determine whether it's production quality or whatever, how long I'm going to keep it around for. And you don't have to. So what you could do instead is write the code, maybe write several different versions of it, and then roll them out into production. And what it will give you is some kind of idea of whether that code is useful. And the stuff that is useful, you can make that stable later. You can stabilise it later. So you can defer commitment, embrace uncertainty, about whether that code should be production code until you've got some data. So it stops being theoretical and starts being empirical. And so much of what we do around software in terms of design, in terms of architecture, in terms of stuff, is theoretical. It's, well, I've done this a bunch of times and I think this is the right shape. Well, maybe you're right. But maybe you could try two or three different shapes. If the, again, you can do the concurrent set-based thing, if you can do lots of things simultaneously and the cost of each one is low, you can try a bunch of things, get actual data in this context and see which one's going to work and then invest in that one. So you unlearn this idea of TDD and test driven development and that being your design. And you just kind of write code. And again, this is not for beginners. This is you just kind of write code once you know what good code looks like. TDD is a fantastic way to learn what good shaped code should be. Okay? But if you unlearn your kind of, for me it was a dependency. I could only write code test driven. And the couple of guys I was working with would just start coding and I'm like, you can't do that. Haven't you read? Look, it's here. It says TDD, you can't do that in production code. They would just write code.
The first time I gave this talk, and it's about the third or fourth time I've given it now, there was a student called Camilla who came up to me afterwards, and she said, why did you spend so long talking about that spike and stabilize thing? I said, well, I speak to all these older developers and all these guys, and they really struggle with it. And she said, but it's really obvious. I was like, oh. Hmm. Maybe it is really obvious and we've just been learning something that's dragged us off in a different direction. Maybe this is obvious and the other stuff is not. Okay. So maybe it's only not obvious because we've taught ourselves to think in a limiting way. So let's leave that. Ginger cake. Ginger cake I love. I think this is probably my favourite of these patterns. Who's heard me talk about ginger cake? Okay, a few of you. Who's not heard me talk about ginger cake? People who've heard me talk about ginger cake, sorry, I'm talking about it again, but I think it's quite important. So this goes back to a story that Andy Hunt of Pragmatic Programmer fame didn't tell me. Because I thought it went back to a story that he did tell me, but I checked with him and he said he's never even heard of it. So someone who isn't Andy Hunt told me a story, and this is the story. So he has, like, a mother-in-law, or his gran, or someone who has a rolodex, a box with index cards, and on the index cards are loads of recipes, just a little recipe folder thing. And in there is a recipe for chocolate cake, and the recipe for chocolate cake says take these ingredients in these quantities, and a whole series of steps for how you make a chocolate cake, and how long to bake it for, and sizes of tin, and all this kind of stuff. And it's got its little index card. And then a few cards back from it there's a card that says ginger cake, and the ginger cake recipe says: like chocolate cake, but with ginger. Okay. Now, she can make a really good ginger cake using this thing. Using this little card. And the reason she can do that is she knows what she means. Okay. That is an expert level description of how you make a ginger cake. However, it presupposes a very deep, intimate knowledge of how you make a chocolate cake. It also presupposes that you know you're not just going to swap the word in, do an s/chocolate/ginger or whatever on the recipe, because you might use different quantities. Ginger is a very strong flavour, chocolate not so strong. Ginger is sticky.
Chocolate is either hard or runny depending on the temperature. They have different viscosities; chocolate's viscosity changes with temperature. Okay. And it burns. So there are all these different subtleties between chocolate and ginger. But she knows what she means. And it's the same for me: I know the kind of things I do when I make a chocolate cake, and I know when and how to substitute ginger, and in what quantities, and what that means for its consistency, and all those kinds of things. I can make a damn fine ginger cake using just those instructions. So how does that apply to software? The context is here: I want to get up and running very quickly with a new app that's similar to some stuff I've already written. So what do you do? You copy all the code and you paste it into the new application. That's illegal. They throw you out of the agile club for that. They certainly throw you out of the refactoring club for that. So the first time I saw this, I had hives. I was like, you can't do this. So it was a web app, and a chap was sitting with Neil Dunn, who's a terrifyingly good programmer, and he was writing a new web app, and he said, let's just open up the other one that's a bit like this. And what I thought was going to happen was this. I thought the next few minutes were going to go: we can reuse that, and reuse that, and reuse that, and then we're going to spend a bit of time factoring out some stuff, and then we're going to start this new thing and use the things we've factored out. No. Select all. Copy. Alt-Tab. Control-V. And then delete, delete, delete, delete, delete. Okay, right, I've got the shape of one. It's just... Sorry. Sorry, you can't do that. And the thing is, he can do that because he had a really intimate knowledge of the shape of this other app and how it was going to be useful in this new app. So, some constraints for ginger cake. This is not the equivalent of what's called post-modern programming, where you just go around the internet and pull down bits that are going to be useful and slap them together until they give you an app. Okay. So post-modern programming is the idea that all the software we'll ever need has been written; we just need to rearrange it into new shapes to get different applications. And someone had a really nice idea of an automated trawler app where you could write an interface, write a test against that interface, and this app would just go and trawl code bases all over the internet until it found some that matched the interface and, when you ran it, passed the test, and it would just put it in your code. Which I think is kind of cool. I'd like someone to write that app. The post-modern code trawler. Don't do that. Okay, I know. But so ginger cake presupposes you are already really, really good at making chocolate cake. Otherwise you don't get to do this. It also presupposes that you know what it means to delete the stuff that you're not going to use, and to know what you want. And that takes writing a ton of code. There's no shortcut here. Once you've written a ton of code you get a sense of what shape the next thing is going to be and what things are going to be useful. Now, this doesn't say don't refactor. And certainly, as we ended up with a number of web applications... the example I used a couple of days ago was web sockets. So we're doing a lot of stuff with web sockets and data over web sockets.
And it turns out the whole thing about web sockets is that there are complicated edge cases, like when something connects or disconnects or times out, and it gets a bit messy. And we've abstracted some of that into various libraries. And that's a really useful thing to have and move around different projects. So there are things that we factor out, but we factor them out based on repetition and based on utility, not based on speculation or we-think-this-might-be-useful. Okay, they're coming for me, I'll be quick. So another example: I was doing some extract, transform and load, ETL. I had an app that was backed by a MySQL database, and I needed to pull in various sources of data from around the organisation. Mostly they were like JSON over HTTP, some were database connections. But I had about half a dozen of these things, and what I did for each one of them is I took the code, just copied the last ETL script, which was Python, and rewrote it and gutted it and changed it a bit. And I did about half a dozen of those and got them into production very, very quickly, and it's been working fine for ages. Now, that was about a year ago. A few months ago I went back to revisit that code to make it do a different thing. And with fresh eyes and with a lot more domain knowledge and understanding of how these things glued together, I deleted all ten of these scripts and turned them into a single script with a single table of ten things that say what the name of the data source is, where to find it, what it's... So, you know, it will coalesce later. And you have to trust that it will coalesce when it's a useful time for it to coalesce. And that might be never, and that's okay. All right? So you've got to be allowed to leave these things.
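To give a flavour of that coalesced version, here is a made-up sketch of a table-driven ETL script. It is not the real script: the source names, URLs and fields are invented, and it uses SQLite from the standard library purely to keep the sketch self-contained, where the real thing sat on MySQL. The point is the shape: one table of sources, one generic loop.

```python
# A made-up sketch of a coalesced, table-driven ETL script.
import json
import sqlite3
import urllib.request

SOURCES = [
    # name of the data source, where to find it, which table it lands in
    {"name": "trades",    "url": "http://example.internal/trades.json",    "table": "trades"},
    {"name": "positions", "url": "http://example.internal/positions.json", "table": "positions"},
    # ...more rows like this, instead of more copy-pasted scripts
]

def fetch(url):
    """Pull a JSON list over HTTP (most of the sources looked roughly like this)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def load(db, table, rows):
    """Naive load: one text column of raw JSON per row, replaced on each run.
    Table names come from our own SOURCES table, so the f-string is fine here."""
    db.execute(f"CREATE TABLE IF NOT EXISTS {table} (payload TEXT)")
    db.execute(f"DELETE FROM {table}")
    db.executemany(f"INSERT INTO {table} (payload) VALUES (?)",
                   [(json.dumps(r),) for r in rows])

def main():
    db = sqlite3.connect("etl.db")  # stand-in for the real database
    for source in SOURCES:
        rows = fetch(source["url"])
        load(db, source["table"], rows)
        print(f"loaded {len(rows)} rows from {source['name']}")
    db.commit()

if __name__ == "__main__":
    main()
```

Adding an eleventh source is then one more row in the table rather than an eleventh copy-pasted script.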
So these are both, I guess, a design and architecture kind of thing, this is like how to write software, and that one is more of a behavioural thing, what to do with software. This next one is an organisational thing. Shallow silos. So you get this whole thing about silos being bad, right? Silos of people are bad. Teams of specialists are bad, and what we want is cross-functional teams, yes? Is that reasonable? Yeah, okay. And so we want to share knowledge in the team, and what we do is pair programming. And there's a lovely phrase, promiscuous pairing, where you're rotating, you're switching pairs quite often. And that can work quite well in larger teams. You want people to get to know each other and work together and understand the code base and all this kind of thing. It's a really, really nice model. What we found, though... so the constraints we had: we wanted to share knowledge in the team. It's a small team, eight people, no, six people in the team by this point. But what we would have is typically three quite independent pieces of work going on. So one piece of work might be connecting to a new financial exchange, which is quite a complex piece of work. Another piece of work might be a new web app that shows certain trading information. Another piece of work might be some reporting function or back office integration. So very different domains, very different areas, but all part of the same stack, if you like. And we were doing the whole pairing and rotating thing, and that was going kind of well. And then someone called time and said, just a minute. What I find is I keep moving between these three pieces of work, and every time I do, they've moved on, so I have a complete context switch. I've got to, like, you know, page fault. I've got to dump all this stuff, load all this stuff, and then off we go again. And it's not terribly useful. I know roughly what you guys are doing with the exchange thing, and I know roughly what you guys are doing with the web app thing, and I'm over here doing this thing. As long as we all kind of sync up, I don't need to be in that code. We all pair with each other enough to know that we code in a similar way. We value similar things. We're likely to turn out similar looking applications. Plus, again, once we've got several of these things, we've created our idioms and our culture and our subculture, and we all know the way we do stuff. We don't need to be rotating these pairs. It's not helping us. It's actually hindering us. So what we'll do instead is we'll work in pairs on each of these individual things, but we'll stay in those pairs until that chunk of stuff is done, and then we might swap around. Okay. But now what you have is silos. Yeah. So now, you know, these guys only know about the exchange stuff and these guys only know about this other piece. So how do we manage that? And the way we manage that is we make these what I call shallow silos. And so twice a day, all of the team has a stand-up. At the beginning of the day, it's a technical stand-up. What are we going to try and do today in terms of technical delivery, where are we going to go, and that kind of thing. By the way, I really hate the formulaic stand-up. What did you do yesterday? What are you doing today? I think that's not a stand-up. That's just a really pointless waste of everyone's time. Don't do that. Think of a stand-up as like a huddle at the beginning of a play in American football. All you care about is the next ten yards. The entire game is the next ten yards, or the next play rather. We're trying to make ten yards. And so you huddle and you go, what's the best thing we're going to do with this next play? And you know that as soon as you've done this thing, you're going to have another huddle and plan the next play. So that's what a stand-up is. It's simply a very short conversation that says, what are we going to try and do, and what's the most effective way we think we can do it? Anything other than that is just a nice conversation you can have over coffee. And you should have those conversations, but not in a stand-up. So at the beginning of the day we have a stand-up that says, what are we going to try and do today? What are we working on? Oh, okay, so we've got that far with the exchange thing. We've got this far with the back office thing. Oh, actually I can do a little show and tell with some of this web stuff, it's pretty cool. Oh yeah, cool, we'll do that after the stand-up. So everyone knows what's going on. We swap news, if you like. And we all know where we're going to try and get to today. And also, at any point, if any pair is about to make a decision that affects the team, they just call a huddle, effectively. We're all sitting kind of back-to-back to each other. And so I might say, oh guys, the two of us are about to look at some database type stuff, like data storage, and we're wondering, has anyone got a preference? We've got some JSON to store, we're thinking maybe a document store, like a Mongo or a Couch or something like that. What do we think? Should we just try a bunch of them?
Should we stick it in flat files? Now, that's a team decision, because what you don't want in a tiny little team is three or four of these technologies that are all doing the same thing, but slightly differently. So anything that's going to affect the team, we all decide. So that keeps the silos shallow. Having this stand-up at the beginning of the day keeps the silos shallow. The stand-up at the end of the day typically involves the head of the trading desk, the kind of main stakeholder. And so that one is necessarily less technical. He's basically going, what toys have I got? What new stuff did I get today? And so it's much more business-facing. And with that, just a very small amount of ritual, we've managed to work really, really effectively. So we don't have the thrashing of context switching, but we also don't have the negatives of just being in these little silos. Okay. So this was subtle. It took me a while to realise that this was an effective thing to do. So again, you may find this works, or you may find it doesn't work for you. You may find you're in a context where it doesn't even matter or doesn't apply. That's okay. Just have it in your back pocket for one day when it does. Moving on. How are we doing for time? We've got about another 20 minutes, is that right? I don't know. I'll just keep talking until you all leave. But you're Norwegian, you're all polite, it's great, we're going to be here all night. So, okay. Create urgency. Now, this is very much a behavioural pattern. Who here has read The Goal? Eli Goldratt's The Goal. Okay. That makes me very sad. You should all turn around. Not now, but when you leave, go to the bookstore. They've got a big pile of copies of The Goal and you must buy it. Tell them Dan sent you to buy it. I don't know what that'll do, but just tell them anyway. It's one of the most significant books I've read on productivity and effectiveness and delivery and all those things. Okay. It's a story. It's actually a technical book about the theory of constraints. The theory of constraints is kind of like a lean operations theory, where you look at a process and you see where the bottlenecks are in that process, and there's a way of identifying and managing those bottlenecks. That doesn't sound nearly as exciting as it is: it's this fantastic story about this guy called Alex who works in a factory that's not doing very well, and his marriage is falling apart, and it's the story of how he saves the factory and saves his marriage. Or he might. I don't want to give away the ending. No, it's brilliant. It's done as a novel, as a story, it's very engaging, and it's brilliant. Anyway, that's not the whole reason I want you to buy The Goal. It's part of it. Read it, but at the end of it there's an interview, an extended interview with Eli Goldratt, who's the guy that wrote it. In that interview he said something and it really landed for me. So the interviewer says to him, if this whole theory of constraints thing and this Goal thing is so profound and so effective and so useful, and NASA uses it and whatever, whatever, why isn't everyone using it? How long does it take an organisation to shift to this kind of way of thinking? He said, in my experience, between five and 15 years. And they asked, why? And he said, well, the problem with theory of constraints and throughput accounting and that kind of stuff, flow, is that it's a paradigm shift.
It's a genuine paradigm shift, a paradigm being your world model, your model of how the world works. And he said, people will do anything before they'll shift their paradigm. That's the last thing they'll do. So you've got to really, really, really put someone in a corner before they'll shift their paradigm. And the problem is it takes different people different amounts of time. So in order to infect an organisation, you're talking five to 15 years. And I thought that was a very honest answer from a guy who's trying to sell a book. But what he said is, you need three things. In order to create a paradigm shift you need three things. The first thing you need is pressure. You need to have a deadline of some sort. You need to have your back against the wall. The second thing is you need to have no other options. You need to have tried everything else and be desperate. And the third thing you need is information, is knowledge, is to know that this new thing exists. He said, I can only give you one of those with my book. You have to create the other two. So create urgency is a pattern of behaviour where you paint yourself into a corner deliberately. My team lead, Joe Walnes, did this. He wanted to learn Node.js. He learnt Node.js a year ahead of anyone else I know. He's really, really good at spotting trends. He wanted to learn Node.js when it was 0.1.2 or something. He did that by signing up to deliver a piece of core infrastructure inside our organisation. He made it very public, very visible. I'm going to take this thing on. I'm going to deliver it. He gave himself a date: I'm going to deliver it by this point, and I'm going to use Node.js. He painted himself into a corner and gave himself no other options. Okay, now I need to do it really quickly. What that meant was he was now furiously trying to learn this technology. But he wasn't doing tutorials and coding katas and all that other stuff. He was trying to solve an immediate business problem. What that meant was that he learnt the bits of Node.js that allowed him to solve that business problem. Anything else that looked like it might be useful in the future: I'll come to that when I need it. What I need to do now is solve this next bit. How do you do that in Node? Okay, that doesn't work. Let's try this other thing. That doesn't work. That worked. Okay, we'll do that. This was his process. He tried very, very rapidly, over loads of experiments, to build this thing out. At the end of it, not only did we have this new piece of infrastructure, Joe knew Node. That was the thing he wanted to do. He said, I'm going to build this. I want to learn this stack. I think it's going to be useful in the future. He created urgency. That takes courage. That's a really hard thing to do. It's fine to say, I'm going to teach myself this thing. But when you make it public, that's when you leave yourself no other options. That's really uncomfortable. This goes right back to the whole uncertainty theme. You've just created a ton of uncertainty. I don't even know if it's possible to solve this problem using this stack. I've got a really good hunch, but I don't know, and I'm going to paint myself into that corner anyway. Creating urgency is a way to do this. A couple of things here. One is he didn't go out to learn Node. He went out to build a piece of infrastructure. Another name I've got for create urgency is what I call indirect learning.
In order to learn a thing, I decide on something I want to solve. In order to learn X, I solve Y, and I'm going to solve Y using X. Y is my objective, but I'm going to indirectly learn about X as I go. The actual infrastructure was Joe's objective. What he did was also learn this other thing. This comes back to how people learn. I don't know if you know about this; there are some great statistics about second children: how rapidly second children start walking, talking, interacting, all those kinds of things. It tends to be much faster than first children. The reason is they're modelling their siblings. When you're just about rolling on your back like this, and this toddler is walking past, and you're looking at them going, I want to do that. Oh, there I go. Oh, I just pooed. I'll just wait here. No, I didn't. It's okay. You've got this thing to model. This is something that we don't do. We set ourselves these little tutorials or puzzles or whatever. What we don't do is model other people. Zed Shaw has a fantastic online resource for learning Python. It's called Learn Python the Hard Way. What he does, and he's a genius, is he dumps you in a hole. He shows you just enough to be curious, and then he goes, oh, and this doesn't work. Off you go. Help! You know it's solvable because he wouldn't just dump you in a hole and walk away. He might, right? Zed Shaw. But in this case, he doesn't. There's enough there that you can learn your way out of it. It's a fantastic way to learn. It's just by trying, standing and falling, and standing and falling, and eventually you stand and you stay standing. It's not: here's the answers. It's: here's a problem, and here's some tools. Go figure. It's a really, really exciting way to learn. This is a whole theme that I want to talk about elsewhere another day, but it's the idea of deliberate learning versus deliberate practice. We do these things called coding dojos. Who has coding dojos? Loads of you. They're quite popular over here. In those coding dojos, you do katas. Coding katas. Except you don't, really, and the reason is this. A dojo is a room in which you practice a martial art. A kata is a sequence of moves that you do again and again and again and again and again. The reason you do them is to build the muscle memory. It's a great way to learn a physical thing. I used to do jitsu, and in jitsu you do lots of katas. You've got lots of movements and you repeat the same ones. I know when someone hits me, I do that. Someone punches me, rather. If I do that, their hand goes past me and I don't get hit. I don't even think about how to... I don't know if you noticed, but both my hands went up there. The fist coming this way: I was taught a million years ago, you do that. The reason you do that, it turns out, is that you engage the tracking reflex. You have a thing in your reptile brain, right in your amygdala: if you see something moving past like that, you track it. If it's moving slowly, it might be lunch. If it's moving fast, it might be a predator. It's always worth looking at something doing that. So if you're trying to hit me and I just do that, you can't help following it. If that's my block, I'm moving your fist out of the way and I'm distracting you. I own you. It's all over from here, and all I did was that. The reason I can do that is because I've done it thousands of times. That's the repetition, that's the muscle memory. When you're solving software problems, you're coming up against something usually for the first time.
If you're not coming up against it for the first time, don't solve that problem, solve an interesting problem. Do something that's going to be valuable to someone. We learn... we learn to do that, but we use these metaphors that are about mechanical, repetitive learning. That is incredibly useful in certain contexts. Please, please, anyone in this room who uses a keyboard for a living, learn to touch type. If you're feeling brave, learn Dvorak. Don't just do that, because that's ridiculous. That's the tool you use. Would you trust a mechanic working on your car who didn't know how to hold a wrench? It's, er, something like this... you'll be fine. I'm not sure how this... No. You're going to get off my car. I'm taking my car elsewhere. That's what you're doing. You're professionals. Learn how to use your tools. Learn how to use your IDEs, your build tools, your programming languages. Understand how they work. That's really valuable. That's deliberate practice. Deliberate learning is developing the skill of being able to look at a new problem and understand a good way to solve that problem, to see through that complexity. You'll find there's a difference between learning chess and learning piano. Learning piano, there's an end game: if you can make it sound like Chopin's second piano concerto, you win. With chess, it's an emergent, complex problem, because you don't control the other guy. It's going to be changing all the way through. After about the first three moves, you're constantly solving an emergently complex problem. That's why chess is one of these intractable things. Please, please, please, if you take one thing away from today: deliberate practice, deliberate learning. Focus on deliberate learning. There is benefit in deliberate practice, but there's a really, really... what's the word? ...diminishing return. If I can already type 70 words a minute, there's really not a lot more I'm going to gain. But if I can't type at all, if I'm a hunt-and-peck typist, being able to touch type at 40 or 50 words a minute is a huge win. Do do that. Likewise, the shortcuts in your IDE: if you don't know the shortcuts in your IDE, pair with someone who does. It's a great way to learn. Again, it's indirect learning. You're not sitting there to learn the shortcuts. You're sitting there getting work done and indirectly going, how did you do that? How did you do that magic? Oh, is that it? Oh, you're using Emacs. So take off the water wings. If you learn to swim with water wings on, you will learn to swim with water wings on. You don't learn to swim. So at some point, you take the water wings off and you go, oh, my body feels different in the water without water wings. So it's better to be a crap swimmer without water wings, get the hang of that, and then become a good swimmer, because then it's a continuum. If you become a really good cyclist with stabilisers, the first time you go around a corner without stabilisers, you go down, and it really hurts, and you're five. Okay? And you're not expecting that. I'm just projecting. So these are things to bear in mind. Creating urgency and deliberate learning are about creating an environment in which you will learn something indirectly, by solving something else. And this is one I wanted to get to. I promised yesterday I'd talk about dancing skeleton.
Dancing skeleton, I think, is one of my favourites as well. Dancing skeleton, again, this is Joe Walnes. So basically what I'm doing is documenting what this team, or a couple of teams of people, are doing, because I think they're doing some quite interesting things. So, dancing skeleton, then. This is named deliberately after a pattern from Alistair Cockburn, who's a great, great academic and practitioner in the world of agile software development. He came out with a load of patterns a long time ago and still does. One of his that I really like is a thing called a walking skeleton. He says, when you're designing, when you're first starting out on a new piece of work, try and create a walking skeleton. Just try and put the minimum thing together so you can see roughly what the shape's going to be. It doesn't need to be pretty. It's just a skeleton. It's not going to have all the flesh or whatever else on it, or any of that. It's just, you know, if it's going to be a messaging system, see if you can set up your messaging stack. If it's going to be a web app, just get something you can see. Just so you can roughly see the shape of it, and then you start iterating by kind of giving it layers and layers and layers until it's this fully fleshed thing. I like that. Dancing skeleton is a bit different. Dancing skeleton says, don't stop there. Get something into your production environment, because what this is about is exploring your path to production and exploring what your deployment process looks like. What Joe would do, and still does, is he'll start with, you know, if it's a Java project, he'll start with public static void main, or maybe he'll copy a thing from somewhere else. Or if it's a JavaScript project, he'll just start with an empty file, because you can't do that. And he'll say, well, this is going to be a web app, it's going to be a Node web app, it needs to sit in this environment. From the word go, how quickly can I have something in production, in a production environment, that has just a hello world page, but is the full stack, and has monitoring and heartbeating and logging and all of the things that I care about in terms of monitoring a live application? What's the minimum possible path from here to there? And he timed himself once, he did it pairing with another guy. And the two of them drove this thing through, and it took 52 minutes. He was like, that's not bad. 52 minutes from nothing to: it's in version control, it's checked in, we're using Git so it's been pushed to our internal repository; he's got a build, the build has pushed this thing into production, and it's deployed, it's running, and he can click on it. And then when he kills it, they get an alert that says this thing just died. 52 minutes. He goes, that's 47 minutes too long. Because he's Joe. Right, we need to get better at this. He wanted to shift the decimal point: 52 minutes now, 5.2 minutes next. So he did. Okay. But that's what he's optimising for. He's optimising for that feedback. He wants to know as quickly as he can whether this thing is going to be the right shape. And he wants to have something there, because now he can iterate really, really, really fast. Because now, every time he writes a new minuscule thing, not even a feature, a thing, a thing that makes it different from the last commit, it goes bam, it's live. He can see it where it's going to be. He can wire it into the things it needs to wire into. His integration testing is continually happening while he's writing it. That's an incredibly powerful feedback model. So dancing skeleton sets you up to be able to do that. You'll notice it's in this bottom right corner. It's not hard to do and it's extremely effective. It's just not a thing that occurs to you. Or it's not a thing that occurred to me. I'm busy trying to get the damn thing working, do you know what I mean? Write the first few tests and drive it BDD style, start at the outside, drive the feature all the way through. Joe's like, feature be damned, get something in production. Get nothing in production. So he's optimising time to first deployment, is what he's after.
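As an illustration only, here is roughly what a dancing skeleton can look like, sketched in Python; Joe's real one was Node and Java, and the port, endpoints and messages here are invented. The point is that there are no features at all: just a hello world, a heartbeat URL for the monitoring to poll, and logging, so the whole path to production can be exercised from the very first commit.

```python
# A made-up dancing-skeleton sketch: hello world, a /health heartbeat,
# and logging, and nothing else. Everything uses the standard library.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("skeleton")

class Skeleton(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":           # what the monitoring polls
            body = b'{"status": "ok"}'
        else:                                # the entire "feature set"
            body = b"hello, world"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):       # route request lines through logging
        log.info("%s %s", self.address_string(), fmt % args)

if __name__ == "__main__":
    log.info("skeleton starting on :8080")
    try:
        HTTPServer(("", 8080), Skeleton).serve_forever()
    except KeyboardInterrupt:
        log.error("skeleton stopping")       # something for the alerting to latch onto
```

Everything after that, the actual features, gets layered onto a thing that is already deployed, monitored and alerting.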
So, I've got a few more minutes, I think, and there are a few more patterns that I'm not going to get to talk to you about this afternoon. There's fits in my head; short software half-life I mentioned yesterday. You'll see I've actually put TDD up here as a pattern. I think it's enormously valuable, I think it's not very difficult compared to some of these things, and it's moderately effective compared to some of these other things. It's there, it deserves a place on there. But it's not the hardest thing ever or the most productive thing ever. In fact, I've just realised this; I love doing this talk because I realise things as I go. What I think putting it here means is that this corner here represents the local maximum of that kind of design approach, of that kind of development approach. That's sort of where you'll get to. You need to start breaking the rules in order to get really, really effective, and that's quite difficult. So just bear those things in mind. I've got a couple more minutes. So, does anyone want to call out any of these that look particularly intriguing to them, and I can quickly describe them? Or do we want some questions? Or what do you guys want to do? Or shall I just get out of the way and you can get beer? Sorry? Captain's log. Okay, someone just called out captain's log. So captain's log is this. I was working with, again, Neil Dunn, scary programmer. I'd been working on this new app, and I was very excited about my new app, and it's kind of cool. Neil came and sat with me, and we had a dancing skeleton; we had it in a production-like environment and we were just pushing this thing out, just to see how it behaved, and something failed. And Neil said, don't tell me. I said, why not? He said, I want to find out for myself. Where are the logs? What logs? Surely, when the app breaks, I should be able to look at the logs and see what went wrong. I just had this face palm moment. What Neil's doing is he's modelling an operations guy. He's modelling the team that's going to be alerted when this thing fails. It sent out an alert, which was cool, and the alert said this thing's broken. He's going to find out how. He's got no context. He didn't write the software. He used his ignorance of that software as an advantage. He said, I want a log file that tells me no more and no less than what I need to figure out what happened here. All the time things are good, I might get one line item per good thing that happens, or a thing that happens. When something bad happens, I want everything. Ideally, I want a big arrow going, it was there. That's a captain's log. That is the kind of log that you can look at, and I'm thinking captain as in the bridge of the Starship Enterprise: I've got everything going on here, I've got all the information available to me to see how to manage this thing. Think of your log as an API. Think of your log as an interface. It's a read-only interface, but if it's full of all your debugging crap as well as the one message that actually tells me what's going on, or in the worst case just your debugging crap without the one message that tells me what's going on, it's not a log file. And it may not be a file. The captain's log may be a message stream that you're sending out to some monitoring service or something. But make it so that when stuff breaks, in your own development time, as you're developing this thing, every time something fails in a way that surprises you or that might surprise you, rather than going, oh, I know where that code is, just stop and see if you can figure it out from the logs. Because if you can't, and you know the most about this thing, no one else can. So the captain's log is a duty of care thing. Again, not very difficult. It's just a habit. Once you get into that habit, you just get a good logging strategy. By the way, while I'm on this, the one thing I'll say with logging: what's the difference between logging at a warning level and an error level? Anyone? When do you log warnings and when do you log errors? Okay, here's my heuristic. I log an error when I wouldn't mind being woken up at 4 o'clock in the morning by an SMS with that message in it. Okay? If it's not that important, it's a warning. Because most of you guys, when you send out the error, you won't be the guy getting the SMS at 4 o'clock in the morning. Some other poor bugger will, and guess what? You didn't put anything in the log file. Right? And it's just a warning anyway. So be very, very disciplined about what you log, how you log, and the level you log it at. There's an enormous amount of value there. Again, it doesn't enormously aid your effectiveness. What I think it does is it massively increases the value of your software in production to the stakeholders who are the operations team. It makes their lives a lot easier, and it's not a difficult thing to do.
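Purely as an invented illustration of that heuristic, not anything from the actual system, a Python version might look like this; the logger name, the functions and the messages are all made up.

```python
# A small, invented illustration of the 4am heuristic using Python's logging:
# ERROR is reserved for "wake someone up", WARNING for things that are merely
# unusual, and the error message carries enough context to act on.
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(name)s %(message)s")
log = logging.getLogger("pricing-feed")

def on_stale_tick(symbol, staleness_seconds):
    # Annoying, worth knowing about in the morning, not worth an SMS at 4am.
    log.warning("tick for %s is %ss stale", symbol, staleness_seconds)

def on_feed_lost(exchange, last_seen):
    # The one you would not mind being woken up for, so it says exactly
    # what broke, where, and since when: a big arrow pointing at the problem.
    log.error("lost connection to %s, last tick at %s, trading on stale prices",
              exchange, last_seen)
```

The discipline is in the split: ERROR is the wake-someone-up channel and carries the context they will need at 4am, and everything merely unusual stays at WARNING.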
Okay, we've gone four o'clock, so I'm just going to leave you with my parting thought. Think about what you are optimising for in all these contexts. What you're optimising for, A, is going to be different from what the next guy is optimising for, and B, is going to change over time. Okay? So that's my parting thought. I think it's a great way to revisit your assumptions. Thanks for listening.

----
Some programmers are simply more effective than others. Kent Beck famously described himself as "not a great programmer, but a good programmer with great habits." Over the last year or so I've been working with, and observing, some very good programmers with quite exceptional - and rather surprising - habits.

Is there a better way than katas to learn a new language? Is copy-and-paste always evil? Should you always test-drive production code? In this talk Dan introduces the idea of programming patterns - patterns of effective programming behaviour - and describes some of the more unusual but effective programming patterns he's collected over the last year. These are not patterns for beginners, but then again, Dan argues that patterns aren't for beginners anyway.