10.5446/55292 (DOI)
I asked him if I could give an introduction for him, because he will never remotely do a good job of introducing himself. Twenty years ago, Jim started working on the software that he came here to talk about. I'll tell a funny story about that in a minute. But it's amazing to go back and think about that. We had marketing brochures at the time with three big points. One of them was a database that feels like a file system — still true today, still very different from everybody else in the market, still feels revolutionary. URLs you can read to your grandmother over the phone — that was a big idea back then. Object publishing — still revolutionary today. And don't let your customers shoot you in the foot, which was hierarchical security and hierarchical objects. It's amazing to think about what Zope, and after that Plone, have done based on these ideas. Hundreds of companies around the world built their business on Zope back in the day, and hundreds and hundreds of companies built their businesses on top of Plone — based on ideas that still haven't been caught up with. Phil B. once wrote, in a book about Zope and about Python, that the rest of Python doesn't even know what they don't know when they talk about Zope.
So I'm going to use this as an opportunity to make fun of Jim. Tonight, if you buy me a beer, I'll tell you the story about what Jim said when the venture capitalist said that everyone should work 70 hours a week. Instead I'll tell you a different story, because he's talking about the ZODB. Unlike my other stories, this one's true. He walks into my office — and my office is approximately the size of this chair — he sits down across from me and says: can we have persistence? Can we have transactions? Can we have multiple app server processes? And I say: what do you think we are, a database company? Ladies and gentlemen. I should point out that when I said that, it wasn't in anger; it was in joy and expectation.
Actually, I liked object-oriented databases, because back then they were still a thing. There were companies doing interesting work with object-oriented databases, and I thought it was pretty exciting. It's a shame that the industry went a different direction; there was a lot of cool research going on back then.
So, what I'd like to talk to you about: as I mentioned, I've been working a lot on ZODB lately. I've been doing some interesting things and moved forward on some things, and I plan to continue doing that for a while at least. I'd like to get input about directions we might go — I really want this talk to be kind of a conversation. Whenever I give a talk, I always start by saying that the only bad question is the one you don't ask, and I'd rather be interrupted than have you sit on a question and miss out. This isn't going to be super technical, so I encourage you to ask questions; if it gets out of hand, we'll move on. There's a lot of time here, so we can actually do that.
Anyway, I'm working 100% on ZODB and have been for several months. I've wanted to do this for a long time. There were times at Zope Corporation when I could focus on it, but I was really focused on the problems we were trying to solve — and we solved some pretty interesting problems. Zope Corporation is gone now, if you didn't know that; Zope Corporation is like the parrot in Monty Python.
I hadn't been able to work on it. I was doing interesting work for a great company, but I really wanted to work on ZODB again, and I really wanted to provide some focus — to give it a chance to succeed. Succeed in the niche that it belongs in, because it's not a solution to every problem. No database is, really. It always frustrates me when people say, well, this is a good database — that statement makes no sense out of the context of whatever problem you want to solve. But I think ZODB is an excellent database for certain kinds of problems, and I'm excited to be focusing on it again.
This was made possible because a company called ZeroDB was building a product on top of ZODB. They took advantage of the fact that most of the logic is on the client, which meant that the data could be encrypted at rest on the server, and that was sort of an opportunity. Unfortunately, their customers weren't really Python developers; it wasn't really a good fit for the kind of customers they had. So we did a lot of interesting work together, but they're focusing on their Hadoop effort, and I'm continuing to focus on ZODB, hoping there'll be opportunities for me to help out on projects, provide training, consulting, et cetera. Zope Corporation used to offer support contracts for both Zope and ZODB — more or less whatever you wanted to buy them for — and they were structured in an interesting way: a support contract basically gave you a certain number of hours; it didn't promise you a solution. I think that's actually, especially for open source, a pretty good model, because if you have to give somebody a solution, then you have to limit their freedom to hack on the software. Anyway, I'd like to do something like that when I figure out how.
But moving along. Before I get into bringing you up to date on some of the happenings, I'd like to get some feedback from the audience on a couple of things. If you're using ZODB — unless you're doing an embedded system, and there have been some interesting embedded systems with ZODB, and I occasionally hear of interesting things where something like plain FileStorage makes a lot of sense — but if you're doing a typical web app or a typical database application, you're going to be using ZODB with RelStorage, NEO, or ZEO. So I want to get a feeling for what people are using these days. How many people are using NEO? Nobody — I'm not surprised. But how many people know what NEO is? Well, that's kind of cool. NEO has a lot of potential. Nexedi is doing projects for people where performance and reliability are really important, and they're doing some interesting things with NEO in terms of highly durable storage. So I think that's a worthy thing to investigate. It's a little bit more effort to set up, but I think it's a worthy alternative. How many people are using RelStorage? Okay. And how many people are using ZEO? Okay. Cool. Thank you.
Okay, so of those people using ZEO, how many are using ZRS? Okay — well, if you're using ZEO, I encourage you to try ZRS. ZRS provides real-time backup. It's not quite as durable as NEO: with ZRS, replication typically happens very quickly, but theoretically you could have a system crash after committing a transaction and before it's gotten replicated to another system.
And so there's a little bit of a chance of losing data, whereas NEO doesn't consider the transaction committed until it's been committed on a majority of the replicas. But ZRS works really well, especially since ZRS 2. ZRS 1 was kind of a nightmare — I forget what it was built on, but it was kind of a mess — but ZRS 2 is extremely simple. I'll say a little bit more about some of the opportunities with ZRS later. So if you're using ZEO, I encourage you to use ZRS to back up your data. In fact, at Zope Corporation we never actually did backups — we never used repozo. We never backed up our databases, we just replicated them, so we knew that they were essentially backed up, and backed up in real time.
Okay, so in terms of providing search in your applications: how many people are using a catalog or something like that? Okay. How many people are using an external index? Okay, fair enough. And how many people don't use an index at all — just maybe use a BTree here and there? Yeah, I guess I'm not surprised at that. At Zope Corporation, for what it's worth, partly because of the nature of the applications we were doing towards the end — some interesting mobile applications, not really content management — where we needed to search, we pretty much just used a few BTrees.
Okay. When I was writing this slide, I was afraid that I'd miss some pain point, because I feel no pain. But how many people feel that database performance is a pain point for them? Okay, interesting. Conflicts? Okay, probably a lot of the same people. Indexing? Wow, you all are very kind — either that or I've forgotten all the pain points. The rules of persistence, or the programming model? Okay. Anybody want to shout out any pain points that I've missed? Okay, well, you can tell me later too.
Okay, so I want to say a little bit about some of the stuff I did with ZeroDB. ZeroDB had two products, and they're both about storing data encrypted at rest. The first was a database built on ZODB. The second was something similar with Hadoop, where the idea is that you'd decrypt your data as it entered a pipeline and encrypt it at the other end — at least I assume that's what it was; I never really got into it myself. And that's what they're focusing on now.
So one of the first things I did: the ZEO implementation is very old — it's the ZEO 4 implementation — and it was a little bit over-engineered and kind of complex in some places. And the asynchronous library it was using, asyncore, has got to be by far the oldest async library in Python. It's the same library, not quite coincidentally, that ZServer was built on. Asyncore is really sort of deprecated; it has some issues. And there was some suspicion that maybe it was contributing to ZEO's performance problems. Over the years when I talk to people, I've heard a lot about performance issues with ZEO — there was the whole zodbshootout thing; I think ZEO is a little bit closer to parity now — but that's been something that's bothered me for a long time. And there was a suspicion that maybe asyncore was to blame, and maybe it is, a little bit. So, before doing other things like SSL — and of course asyncio also makes SSL easier, because it's got support built in —
I reimplemented ZEO on asyncio. That would have been less effort than I put into it, except that I also used the opportunity to clean up the code base quite a bit, and that was a good thing. And in fact there were some performance improvements, especially for writes. I published some performance numbers — I've added a link there to a spreadsheet that has the results and some description of how I ran them. It's also interesting because it touches on some configuration choices you have, like whether you use SSL or not, or whether you use server sync (which I'll talk about in a minute) or not. But anyway, ZEO 5, especially with Python 3.5 and uvloop — an alternate implementation of the asyncio event loop — is significantly faster for reads, if by significant you consider maybe 20 or 30% significant. For writes, in most cases, and especially at high concurrency, it's an order of magnitude faster. So it's quite a bit faster.
Now, asyncio introduced a flurry of interest in asynchronous programming in ZEO and in using ZODB. I've used asynchronous libraries for I/O applications pretty much since the beginning — since Digital Creations, say '96 — so I'm a big fan of asynchronous I/O; I'm very biased. I also happen to hate asynchronous programming interfaces. And ZODB is an inherently synchronous API. For better or worse — I think for better, but people can legitimately say worse — ZODB is an object-oriented database, and it wants to provide the illusion that you're just working with objects in Python, more or less like you work with any other data. That's what it's really about, and that's its value proposition. So there's really no good way to fit an asynchronous programming model into that, at least that I can see — although there's been some interesting work that I'm going to learn a little bit more about later tonight, so maybe my mind will be changed. So ZEO is using an asynchronous library, but that's only an implementation detail, and in fact that could change.
There's an issue, which I mention here, that I realized when doing profiling and performance analysis while working on ZEO 5 — and Shane actually figured it out a while ago, although I'm not sure he put it in exactly these terms. When you're combining an asyncio library with thread pools, when a thread is done doing its work, it has to notify the async library that it should do something with the data. And it turns out that that interface seems to be expensive relative to, say, a thread queue or a lock of some kind. In fact, for Zope and ZServer years ago, Shane introduced a hack into ZServer so that rather than waking up the event loop when a request is done, he just writes directly to the output socket, with a lock protecting it so the event loop and the thread pool don't write to it at the same time. And that turns out to be a big performance win. So there is a little bit of a dark side to the architecture I recommend of an async server plus thread pools. This also has an impact on ZEO, so I might actually, in the future, go to a less asynchronous model in the implementation of ZEO because of that.
So this led me into — I didn't really have a good place to put these slides, and I'm disturbing the flow a little bit, but there are some things I wanted to point out relative to this.
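To make that thread-to-event-loop handoff concrete, here is a minimal sketch — my own illustration, not ZEO's or ZServer's actual code — of the pattern being discussed: a worker thread finishes its job and has to wake the event loop with call_soon_threadsafe, and it's that wakeup (typically a write to a self-pipe) that turns out to be expensive compared to a plain lock or queue. Shane's ZServer trick amounts to skipping the wakeup and writing to the output socket directly under a lock.

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    def blocking_work(data):
        # Stand-in for a threaded request handler doing real work.
        return data.upper()

    async def main():
        loop = asyncio.get_event_loop()
        pool = ThreadPoolExecutor(max_workers=4)
        done = asyncio.Queue()

        def worker(data):
            result = blocking_work(data)
            # The handoff back to the event loop: this is the call that
            # has to wake the loop, and it's relatively expensive.
            loop.call_soon_threadsafe(done.put_nowait, result)

        for item in ('read', 'write'):
            pool.submit(worker, item)

        for _ in range(2):
            print(await done.get())

    asyncio.get_event_loop().run_until_complete(main())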
In terms of thinking about developing with ZODB: if you're developing an application that only has one client, some of these things aren't important, but if you're anticipating an application that has lots and lots of clients, then some of these things become very important. The first is that when trying to service lots of clients, you want to keep transactions very short, and there are a few reasons for this. One is that long transactions have a much higher chance of a conflict, because ZODB uses a timestamp-based protocol, which many modern databases use: at the start of a transaction, it sees a snapshot of the database as of that moment, and any changes made after that potentially conflict — so you want to reduce that window. Also, connections are expensive resources. One of the big wins of ZODB is its object caching. Please don't call it a pickle cache — there's a module that suggests you should call it a pickle cache, but it's an object cache; it doesn't cache pickles, it caches objects. Anyway, there's this object cache, and ideally you want it to be big enough to hold your working set, but that makes a connection a very expensive resource: you don't want to have a lot of connections unless you have a lot of memory, because connections can use up a lot of memory if your working set is of any significant size.
And if you've got any long-running tasks that you need to do, consider doing them asynchronously, typically using some sort of queuing system like Celery or SQS or what have you. Unfortunately, there's a gotcha with that: you want to find some way to do that handoff reliably, and most of the solutions don't — Celery doesn't really provide a good way to do that, and SQS doesn't provide a good way to do that. So we came up with something at Zope Corporation that used a very short-term transactional queue, and then we would move data from the transactional queue into, in our case, SQS. It would be ideal if you could somehow hand off to something like Celery transactionally, so that if the transaction committed, you knew that Celery had it. In our case we were sending data to SQS, and it hardly ever failed — really, really rarely failed — but any failure is really hard to reason about, so we really didn't want to tolerate any failure, because we didn't know what the heck was going to happen if it failed.
Also — again, this is a little bit off the path I was on, but these are some ideas I wanted to share — if you're building a large application on top of ZODB, or possibly any database that has an effective cache, a common problem is that your working set doesn't fit in memory. I've talked to people who said, well, I've got something that runs a whole bunch of sites within a given instance, so this is not an uncommon problem. At Zope Corporation, we had one application that hosted 400 newspaper sites, and the data that we typically needed was too large to fit in the amount of RAM we had available. We allocated roughly four gigs — really three gigs — per process. Because of that, we were constantly churning data in and then having to make requests to the server, and the server was getting beat up pretty badly because we were constantly hitting it.
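To make the cache-sizing and short-transaction advice concrete, here's a hedged sketch — the storage, the cache size, and the work done are all just illustrative — of opening a database with an explicit per-connection object cache size and doing the work in short, context-managed transactions:

    import ZODB, ZODB.FileStorage

    # cache_size is the per-connection *object* cache (not a pickle cache);
    # size it to your working set, remembering every open connection pays it.
    storage = ZODB.FileStorage.FileStorage('data.fs')
    db = ZODB.DB(storage, cache_size=100000)

    # Keep transactions short: open a connection, touch what you need,
    # commit, and get out, rather than holding a transaction open across
    # slow external work.
    with db.transaction() as conn:
        root = conn.root()
        root['hits'] = root.get('hits', 0) + 1
    # Leaving the block commits (or aborts on error) and returns the
    # connection, with its cache, to the pool.

    db.close()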
So, to deal with that working-set problem, we wrote a content-aware load balancer — the one we wrote happened to be dynamic, so it would learn and sort things out over time. But there are a number of content-aware load balancers available. With a content-aware load balancer, if there's any correlation between the content that you need and something in the request, then you can say: all of the requests of this particular class, which need this particular content, I'm going to send over here. So you can segregate by class and essentially split up the working set. That was a huge win for us. Just to give you an idea of the scale feasible with ZODB: we were running 40 or 50 clients, and after adding the content-aware load balancer, we were able to reduce that to about 20 clients. Also, when we had to restart a client, it started a lot quicker — generally things moved a lot better. So if you've got large applications where the content can be segregated, you should consider that.
When I think about growing ZODB, among the things I think about is maybe moving beyond Python, and the thing that's most interesting from a market point of view is JavaScript. But remember I said earlier that ZODB is inherently a synchronous API, and that, of course, is at odds with JavaScript. For what it's worth, if I were to do this — and I would love to; I wouldn't do it speculatively, but if somebody wanted to pay me to do it, I'd love to work on it — I would probably run ZODB client-side applications in web workers and have them provide an asynchronous API to the UI, so that your browser UI would still use an asynchronous interface, but it would be an application-level interface rather than a low-level ZODB interface. And if I were to do this, I would actually rewrite it in JavaScript. ZODB actually isn't that big once you're familiar with it; after working on it a few months, I'm pretty familiar with it — I'll probably forget it if I stop, but right now I'm pretty familiar with it. Anyway, if you're interested in this whole issue of asynchronous APIs and performance, there was a really interesting article posted last year where somebody actually measured the blockiness of different database APIs in the browser, and they found that localStorage, which is synchronous, was less blocky than IndexedDB. So, yeah. No silver bullets, I'm afraid.
Okay, so back to new things in ZODB that are interesting. A lot of these are interesting from a performance point of view, and most of you said you weren't interested in performance, so sorry. A challenge for some applications is that — especially at application startup, but even in other situations — you need to make a bunch of requests, and in ZEO 4 only one request could be outstanding at a time; there was no real way to say, I want these five objects. My first answer to that, at least for the startup problem, is to use a persistent cache. Persistent caches have gotten a bad name because for a while they were kind of unstable, but we finally solved those problems several years ago — I just don't think the word has gotten out.
There can also be some operational challenges with them, but if this is something of concern for you, and you have a working set that could fit in a ZEO cache, persistent caches are stable at this point. But you might have a situation where, for example, maybe you did some sort of index query and you have a bunch of objects that you know you're going to want to load — because you're a genius. And there's now an API that lets you do that. The way it works is you call prefetch, and you can pass OIDs or objects, or sequences of OIDs, or actually iterables of OIDs — or, as I like to say, irritables of objects. What it does is send the request to the server, but it returns right away. So when you go to fetch one of the objects, ZEO will say: oh, okay, I'm already fetching this object, so I won't make a new request; I'll just wait until that request I made before comes back. And by the time you get the first object, chances are the next object is going to be right behind it. So it basically addresses the round-trip latency of requesting objects one at a time.
The challenge is figuring out how to actually leverage it. Among the ideas I've thought of: you load a BTree bucket that contains persistent objects — maybe you have some policy that you're going to load all the objects in that bucket, but obviously not the whole BTree. Or sometimes you might have an object with a persistent sub-object where, whenever you use the parent, you're always going to use the child. So maybe you want to say: when I load these kinds of objects, I'm going to load the children — or, looking at it the other way, maybe you have certain kinds of objects that should always be loaded when the referencing object is loaded. We could possibly build some of that into ZODB as pluggable policies to automate those things; that's the pattern where I've been thinking of them as sub-objects. Anyway, something to think about. You all have been very quiet. Okay, good. I don't believe it, but good.
Okay, so one of the big things was SSL. Obviously this was a big thing for ZeroDB. It provides encryption of the connection, of course, but it also provides alternate authentication models. ZEO had an authentication model; it complicated the code quite a bit, it was kind of a specialized thing, and it didn't actually encrypt the channel. So I think SSL is a much cleaner option, and the old thing is gone. Was anybody using the old ZEO authentication mechanism? Good. Okay. So I think this is kind of interesting: ZeroDB was considering doing hosted ZODB databases, where the actual clients were outside their controlled clusters, so this is pretty interesting for that. Basically, when you set up a ZEO server, you can give it a collection of self-signed certs that it will then use to authenticate the clients — primarily to allow access to the ZEO server itself, but you could obviously do more than that, and they did. I'll talk about that in a second.
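Going back to the prefetch API for a second, here's a rough sketch of how you might use it. It's hedged: the throwaway in-process server comes from the ZEO.server() test helper (assuming ZEO 5), the Thing class is made up, and whether prefetching pays off depends on what's already in your caches.

    import ZEO
    from persistent import Persistent

    class Thing(Persistent):
        def __init__(self, n):
            self.n = n

    addr, stop = ZEO.server()       # throwaway in-process ZEO server
    db = ZEO.DB(addr)

    with db.transaction() as conn:
        conn.root.things = [Thing(i) for i in range(1000)]

    with db.transaction() as conn:
        things = conn.root.things
        # One call requests all the objects up front, so the loop below
        # isn't paying a server round trip per object.
        conn.prefetch(things)
        print(sum(t.n for t in things))

    db.close()
    stop()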
Anyway, back to SSL: among the things you could do — and that they did — was to play with a model where each user would upload a certificate of their own and use that to authenticate. They later came around, I think, to a standard approach of simply using usernames and passwords, but sent over the SSL connection.
Another interesting change is that ZEO now supports client-side conflict resolution. NEO and RelStorage already did this. For ZeroDB, since the data were encrypted on the server and, by design, the server didn't have the keys to unlock them, the server couldn't do conflict resolution. So in order to be able to do conflict resolution — and for some applications that's important — we needed to move it to the client. This has been something I've been wanting to do for a while; in fact, I'd like to take it a step further — I'll come back to that. With conflict resolution on the client there's potential to do a lot more. For example, you could have conflict resolution logic that looks at more than one object at a time. The current machinery also really only sees state, and it tries to deal with common situations where it doesn't have the classes around. For example, if you've got a BTree that contains references to persistent objects, it doesn't know what the persistent objects are, but it knows what their IDs are, and it takes that into account when deciding what is and isn't a conflict. But if you do this on the client, of course, everything is there. You also get operational advantages, but the big potential win is to possibly get to the point where common situations can always be resolved. What I'd like is to have non-conflicting data structures — which is really a misnomer; what I mean is data structures where we can always resolve the conflicts. And I think that's within reach if we do it on the client. What I'd like to do eventually is move conflict resolution up into ZODB itself and then start exploring some of the ways we might make conflict resolution always work for some interesting cases. The one that comes to mind most for me is implementing a queue — you should be able to implement a queue in such a way that it doesn't conflict. And a lot of the common use cases, like adding or updating separate keys in a BTree, we could probably arrange to never conflict. Right now those conflict when a BTree bucket splits, and then you get into all these strategies to try to prevent that from happening, and it ends up going in a lot of different directions: are the buckets big enough, how do you allocate the keys — if you allocate keys sequentially, lots of different threads are going to conflict at the same time when a bucket splits.
Anyway, getting back to client-side conflict resolution: of course it works with encrypted data. The biggest operational win is that you no longer need custom classes on the server. If you've tried to write your own classes that implement conflict resolution, then for them to work they have to be on the server, which is a deployment headache.
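To show the kind of custom class we're talking about, here's a hedged sketch of a conflict-resolving counter, in the spirit of BTrees.Length — my own minimal example, not code from ZODB. With server-side resolution, a class like this has to be importable on the storage server for its _p_resolveConflict to run; with client-side resolution it only has to live with your application.

    from persistent import Persistent

    class Counter(Persistent):
        """A counter whose concurrent increments can always be resolved."""

        def __init__(self):
            self.value = 0

        def hit(self, n=1):
            self.value += n

        def _p_resolveConflict(self, old_state, saved_state, new_state):
            # old_state:   the state both transactions started from
            # saved_state: the state the winning transaction committed
            # new_state:   the state this transaction tried to commit
            # Resolve by applying both deltas to the common ancestor.
            resolved = dict(saved_state)
            resolved['value'] = (saved_state['value'] +
                                 new_state['value'] - old_state['value'])
            return resolved

The point of getting to "non-conflicting" BTrees and queues is to bake this sort of resolution into the data structures themselves, so applications don't have to write it.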
With server-side resolution, you can't simply use a generic ZEO — or, in our case, ZRS — RPM or Docker image or what have you; you've got to have these other classes on the server. If you do the conflict resolution on the client, this is not an issue. It also potentially reduces server load, because the server is not doing the conflict-resolution computation, and it opens a door for non-Python servers. The cons are that you increase the number of round trips to the server when there is a conflict. Basically, the way it works is that at the transition from the first phase of the commit to the second there's a vote step; before, vote would always return yea or nay. Now it can still return yea or nay, but it can also return a list of conflicts. Then the client, if it can resolve all the conflicts, rewrites them to the server, and if there are no conflicts at that point, it can commit. Also, client-side conflict resolution doesn't support undo: undo can sometimes use conflict resolution to undo transactions that would otherwise not be undoable, and you lose that.
Another feature that I worked on for ZeroDB — it didn't turn out well — was object-level locks. Currently, when ZEO locks the database for the second phase of the commit, it locks the entire database. I think some people have the misconception that it locks during the entire commit process, but it only takes the database-wide lock in the second phase. That's a problem because in the second phase there's a round trip to a client, and round trips are expensive to begin with; sometimes you have misbehaving clients — possibly because they're talking to another transaction manager — that don't respond in a timely way, and you're basically prevented from committing new data while that's going on. So it's kind of a problem. A way to mitigate that is object-level locks: if a transaction modifies a certain set of objects, transactions that don't touch those objects can still commit. In fact, this is what NEO does — so, another reason to investigate NEO. I should have investigated NEO more myself, but I'm really lazy and it's a little bit involved to set up.
So I did some work on object-level locks for ZEO and I got it working, but it didn't actually provide a performance win except when clients were connected over very slow links — which I think was potentially a useful use case for ZeroDB, since one of the things they wanted was for outside clients to be able to talk to the database. But for a normal configuration it really didn't provide a win; in fact, it might even have been slower. I think a big part of the problem is that it's really easy to get the server under heavy load, like we do in a benchmark. At Zope Corporation, especially before we did the content-aware load balancer, we beat up our servers pretty badly, and ZEO 4 uses multiple threads — I've seen our ZEO servers go to 200% CPU, which means they're actually using more than one CPU.
Mainly that's because they're doing I/O outside the GIL — a lot of the computation isn't done in Python, it's done in C, because you're doing I/O in C. So it's not uncommon for the ZEO server to be CPU-bound. And so even though theoretically there should have been a win from letting multiple transactions happen in parallel, the win was swamped by the database just being slow. Part of that was the extra computation involved in actually managing the locks, but I don't think that was that significant.
And Jason Madden — Jason Madden is awesome; he does a lot of interesting things with ZODB and kind of pushes its limits. By the way, for people interested in async, he runs ZODB with gevent for all of his servers. It's still a threaded programming model, but it's on an async library. Anyway, he did some analysis, and it was quite a bit faster using PyPy, especially on the server — not so much on the client, but on the server it was a pretty significant win. So that's definitely something to consider if you're deploying ZEO servers; if I were deploying ZEO servers today, I would definitely consider that.
So far I've mostly been talking about the improvements to ZODB that I did on behalf of ZeroDB, but they did some interesting experiments of their own. They actually did most of these before I got involved, and I just enhanced them a bit. Because they were thinking about trying to provide hosted ZODB, they came up with a model for multi-tenant databases. What that really meant was that somebody could theoretically walk up to a UI and say: I want to buy a database. And what they would really get is a sub-database of an existing database. So they had a mechanism for splitting a single database into virtual databases, where each database was owned by a user. Each user's records were encrypted separately, so even if users saw each other's records, they wouldn't be able to decrypt them. Plus, they had an access-control model that prevented access to other users' data. And, interestingly, that affected invalidation: invalidations for a user would only be sent to that user, not to other users. And of course they had a user database and an authentication model. So it was pretty interesting. If you have a need to support multiple databases within a ZEO server, I think this is a potentially good way to go about it.
Okay. So I actually spent a little bit of time in the spring looking harder at NEO. As part of that, I realized that NEO has done a lot of interesting things where they patched ZODB to work a little bit differently, and actually a little bit better. I'm sure they'd given me those patches before, but at the time I was fighting some other fire and never really bothered with them, which is a shame, because there were some pretty good patches. So most, if not all, of those patches got applied.
One of them really simplified the way ZODB implemented multi-version concurrency control, which both simplified the logic quite a bit and, as part of the same cleanup, let me get rid of some silly cases of locking things and preventing concurrency when it wasn't really necessary. As part of this — and part of making sure RelStorage would work with the new way of doing things — I realized that the direction we were moving, and the way NEO was doing things, was already in some ways closer to the way RelStorage worked. In the course of this, I also came to understand something about RelStorage that I never understood before. It was always kind of weird: RelStorage had this interface called IMVCCStorage, and I was like, wait — ZODB is already MVCC, why are we doing this? The key is that RelStorage uses the MVCC implementation of the underlying database; it leans on the underlying database to do that, and that's why. Really, most of what it's doing is bypassing ZODB's MVCC. The end result of all of this is that the RelStorage API is now the dominant API, and the older storages — ZEO, NEO, FileStorage, et cetera — are adapted to the API that RelStorage provides, and the adapter is where the MVCC logic is. It actually changed the storage API a bit: for those who have ever dealt with it — which you probably haven't unless you're a ZODB hacker — the load method, which was sort of a core method, is now gone, or effectively gone, and everything uses loadBefore. Another happy outcome of all of this is that Shane Hathaway has handed over the baton — well, he sort of dropped the baton a couple of years ago, but he picked it up and handed it to Jason Madden — so RelStorage now has a maintainer, which is a really good thing.
So, a common problem — and this was brought up on the list a few months ago — is inconsistency between ZEO clients. A typical scenario, one we ran into at Zope Corporation quite a bit: you'd have a request that caused an object to be added, then the browser on the next request would try to do something with that object, hit another ZEO client very quickly, and that ZEO client hadn't gotten the news of the new object. The reason this happens is that each client is consistent, but consistent as of a particular point in time, and because network communication isn't instantaneous, while all clients are consistent, they may not be consistent with each other in terms of what view of time they have. This was a problem for ZEO, and it could potentially have been a problem with RelStorage as well, due to the way it polled: if you set the poll interval to zero, I think it polled at the beginning of every transaction, but if you set it to non-zero, then potentially you could have the same problem. NEO, on the other hand, has always made a round trip to the server at the beginning of a transaction. It didn't really matter what that round trip did — it could effectively have been a ping. But by waiting for that round trip, any invalidations that were in flight are seen before it gets the answer to its ping. That means the client may be at a different time than the client that added the data, but it's at least up to the time at which that object was added.
So NEO has always done this, ZEO now has an option to do it — that's the server-sync option I mentioned — and RelStorage has gotten rid of the poll interval, so it effectively makes this round trip every time as well. The reason it's an option in ZEO is that it's kind of expensive, in the sense that you're making a round trip: if all your data are in memory, you've changed what would have been no server round trips into something that makes a server round trip. So if this is a problem for your applications, this is an easy way to solve it; if it's not a problem for your applications, I would consider not doing it. Maybe it should be the default, and maybe turning it off should be the option.
[Audience question, inaudible.] Well, there are a bunch of application-level strategies you can use for this, but it's kind of a bother. It's potentially a win, because it would mean you wouldn't need to sync unless you knew you needed to sync — so it's potentially a performance win. But this option provides an easy way to say: I'm just going to make the problem go away.
So ZeroDB decided to go in a different direction at the end of the summer. What have I been doing since then? Well, I decided after a while that I just absolutely had to unscrew the documentation situation. It's been a sore spot for a long time. I'm a bad person — I didn't make it a priority a long time ago. But for ZODB to succeed, it's got to have decent documentation. So now when you go to zodb.org, you have a sort of "why use ZODB" statement and then links off to both non-reference and reference documentation. It's far more extensive than anything we had before. It could be improved a lot, and you can help me improve it by bitching at me about things that really should be documented. You can help me even more by writing documentation, of course — but I don't mind writing documentation if you help me by telling me what you think needs to be explained better or explained more. I don't think this documentation will ever necessarily be a replacement for something like the ZODB Book, but it should be fairly complete and give concise documentation of pretty much everything people need to know, including touching on topics like how to build more scalable ZODB applications. There's more work to be done, but I feel like I got it to a point where we could finally retire that guide that was written close to 20 years ago, was woefully out of date, and was written as a blog post.
Even the documentation is executable, thanks to Manuel. How many people know about Manuel? It's a very cool tool; if you write documentation for software libraries, it makes it really easy. When I fell in love with doctests, I really liked the idea of executable documentation. But what I learned way too late — just look at the Buildout docs — is that tests don't make good documentation, but good documentation can help with the tests. So what I did for Bobo, NGI, and the ZODB docs is I wrote documentation, and then I made sure that all the examples were executable.
So, a couple of infrastructure projects that I've been thinking about for a while, again in terms of performance. I've operated big ZODB databases for multiple Zope Corporation customers for several years.
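Before getting to those infrastructure projects: here's roughly what wiring a documentation file into a test suite with Manuel looks like — a minimal sketch assuming a doc.rst full of doctest examples, not the actual ZODB docs setup.

    import unittest
    import manuel.doctest
    import manuel.testing

    def test_suite():
        # Every doctest example in doc.rst runs as part of the test suite,
        # so the documentation can't silently drift out of date.
        m = manuel.doctest.Manuel()
        return manuel.testing.TestSuite(m, 'doc.rst')

    if __name__ == '__main__':
        unittest.main(defaultTest='test_suite')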
On those databases, in addition to the overall performance, packing was kind of a big deal. FileStorage — first of all, the packing implementation hasn't changed in probably close to 20 years. It works, but it's pretty slow. And it's particularly problematic because of the GIL: while you're packing, you're starving other things, even though it runs in a separate thread. So one of the first things I did was zc.FileStorage, which does most of the packing off in a separate process — it actually creates a subprocess to do most of the work. I also wrote zc.zodbdgc, which was primarily to deal with the problem of garbage-collecting multi-databases. But it turns out that doing garbage collection separately from packing is also a big win, because you can do it in a separate process, and you can do it at your leisure; I think it's a much better model. So zc.FileStorage actually doesn't even support garbage collection. Even with all of that — and all of that could use a lot of polishing — a big problem was that at the end of the pack process you're copying packed data records at the same time you're committing, and there's a lot of contention there at the end. We got pretty good at Zope Corporation about having lots of metrics, and you could always see when a database packed, because all the metrics would go awful for the last — depending on the database — 10 or 20 minutes of a pack. So we always tried to time packs for the middle of the night, but it was pretty bloody; it was pretty awful.
FileStorage 2 is designed largely to solve that problem. It learns some lessons from FileStorage. When I wrote FileStorage, I never imagined that it would work as well as it does — it's pretty efficient, especially on SSDs, and it's a pretty simple model. Anyway, FileStorage 2 still keeps the FileStorage model, but it removes some cruft like back pointers and versions. Back pointers — you can argue whether they're actually cruft, but people hardly ever use them. I'm glad you asked that question: back pointers today are primarily for undo. When you undo transactions, it doesn't actually write any new data; it just writes back pointers to the old data — it doesn't copy the data record, it just says the data record is back here, which is elegant. And it was actually a big part of the versions feature, so that when you committed a version it would do a similar sort of trick. But it adds a lot of complexity to the implementation, and since people don't really use undo that much anymore, I don't think it was worth keeping.
So FileStorage 2 uses multiple files. The idea is that you have an active file and then zero or more previous files. When you want to pack, the first thing you do is split: you create a new active file, which is a very cheap operation. Then, at your leisure, you can pack the previous files, and that doesn't affect the active file. When you create a previous file, it also writes the index in a way that lets it be used as a memory-mapped file — so it still uses memory, but it uses memory a little more efficiently for the old indexes. But more importantly, if you've got a GIL to deal with, you can pack the previous files in a separate process.
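For reference, invoking a pack from application code with today's standard API is just a method call on the database. This is the classic interface, hedged and illustrative — it's separate from the FileStorage 2 split-and-pack-previous-files flow being described here:

    import time
    import ZODB, ZODB.FileStorage

    db = ZODB.DB(ZODB.FileStorage.FileStorage('data.fs'))

    # Pack away history older than three days.  With classic FileStorage,
    # this is the operation whose final phase contends with commits;
    # with zc.FileStorage, most of the work happens in a subprocess.
    db.pack(days=3)

    # Equivalent, using an explicit cutoff time:
    db.pack(t=time.time() - 3 * 86400)

    db.close()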
Back to FileStorage 2: at the end of packing the previous files, there's just a fairly inexpensive handoff to get the database to use the indexes of the new files instead of the indexes it was using before. That's really the big win from an operational point of view. I didn't document packing as part of the documentation, partly because nothing else documented it and also because it made me cry.
The other thing I've been wanting to explore is: could I get the ZEO server to be a lot faster if I didn't write it in Python? I've been dating different languages over the last few years, looking for a possible choice. If you're running on AWS and you use a lot of blobs, you can save money by putting those blobs in S3, so I wrote essentially an S3 blob cache, and I wrote that in Scala. Scala's really a lot of fun — I enjoy it quite a bit — but it wasn't very fast. It was probably my fault, but it failed the test for this purpose. So I decided this go-around to use Rust. Rust is interesting in a number of ways. It's very fast. I mainly started looking at it because a friend of mine suggested it, and because I found some things suggesting it was faster than Go. The reason I think it's faster than Go is that it has no runtime, and instead of using a garbage collector, it uses stack-based memory management. Basically, the standard way of doing things in Rust is that everything is either on the stack, or, if it's on the heap, there's a pointer to the heap from something that's on the stack — so data is garbage when it goes out of scope, and the memory-management decisions are all made at compile time, which is pretty intriguing and could provide some performance wins. It's a little more complicated than that — you end up using some reference counting — but it's a relatively small subset of your data that's reference-counted. And of course it has no GIL.
So I started working on this a few weeks ago; some people saw a blog post that I posted around the time I decided to start. It includes a FileStorage 2 implementation. It's very early in development — I've been trying to get it just to the point where I could run benchmarks, so lots of things aren't implemented yet. The internal API is very different from the way it is in ZEO; the pluggable storage API isn't really a thing here. It implements object-level locks. And it should be as easy to set up as ZEO or ZRS — probably easier; part of that is that replication will just be built in. ZRS is pretty cool because ZODB has this pluggable storage architecture, and we've gotten used to this pattern of just layering things, which I think has worked really well — ZRS is just another layer, so you can run ZRS replication without running a ZEO server; it's just another storage. But here, nothing is pluggable; everything is about trying to go as fast as possible, so I expect replication to just be built in.
I've been scrambling to get to the point where I could do some performance testing before this talk, and I finally got everything running for the benchmark this morning. The initial results on my Mac — it's a four-core Mac, so it's not a terrible machine for initial tests — are pretty encouraging. It's certainly twice as fast as ZEO for writes.
For reads, not quite that — maybe 50% faster. But I think I can take it a lot further, so I'm hopeful there. Lots of work remains, including some Python work: this whole issue of waking up an event loop is, I think, a significant performance hit, and it's been part of the ZEO design for, like, ever — so it's kind of embarrassing. But I think if I address that, I can actually make ZEO go quite a bit faster just in Python. I have some tests, but I'll need a lot more, and lots of features aren't implemented yet.
Some other things: on an earlier slide that I skipped past, I threatened once again to implement a transaction-run decorator for running transactions with retries. I threatened it in anger a couple of weeks ago, and I still haven't done it, so that's going to be one of the next things.
From my point of view of the pain points that I've felt — and since I've worked on fairly large projects, a lot of the things that matter to me are about performance — the things I want from ZODB are: more speed. I don't expect people to choose ZODB because it's fast — it's never going to be a NoSQL database, believe it or not — but I would like speed not to be a disqualifier, at least for many kinds of applications. More documentation. I think object-oriented conflict resolution would be interesting: if we could get to a point where, for a few critical data structures, conflicts could always be resolved, that would be a big thing. A really tiny feature — tiny, tiny, tiny — that would be a big win for certain kinds of applications is the ability to subscribe to object updates. This would be interesting for GUI applications. I've got an application that I should be finishing, but I'm not: it's called a two-tiered kanban board, and it's meant to be a kanban board that multiple people use, where whenever somebody makes a change, everybody's view gets updated automatically — using long polling, because WebSockets burned me in the past; I've heard they've gotten better. But in order to do that, something needs to know that there have been changes. It's an easy, quick hack to hack the DB class to do that, but it should really be a built-in feature; that would really be beneficial and open up some interesting possibilities.
As an aside: at Zope Corporation, towards the end, we really achieved quite a lot in terms of automation. They say that really lazy people are good at automating things, and I'm really lazy. We leveraged ZooKeeper pretty heavily — ZooKeeper is pretty cool, and there are other similar tools like etcd — because it provides a service registry, and a service registry that not only knows when services appear but also knows when they disappear. So you can get notifications that, oh, a service fell over: I need to adjust my load balancer, or I need to start a new one, or what have you. This idea of being notified of things is pretty important, and conceivably, if ZODB had this, it might have been an alternative to ZooKeeper that's a lot easier to operate, because ZooKeeper was a little bit of a pain to keep running at times.
Let's see. So, another project that I think would be really interesting for somebody to do — and I feel like maybe at some point I should at least enable it a little bit.
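For what it's worth, the transaction-run decorator I keep threatening to write would probably look something like this — a hedged sketch of the idea, using the transaction package directly, not the eventual API:

    import functools
    import transaction
    from ZODB.POSException import ConflictError

    def run_transaction(retries=3):
        """Run a function in its own transaction, retrying on conflicts."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kw):
                for attempt in range(retries + 1):
                    try:
                        result = func(*args, **kw)
                        transaction.commit()
                        return result
                    except ConflictError:
                        transaction.abort()
                        if attempt == retries:
                            raise
                    except Exception:
                        transaction.abort()
                        raise
            return wrapper
        return decorator

    @run_transaction(retries=5)
    def add_hit(root):
        # root is the root mapping of an already-open connection.
        root['hits'] = root.get('hits', 0) + 1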
That other project: in some of our applications we used Solr as an external index, and keeping Solr up to date was kind of tricky, especially since Solr itself was replicated. What we ended up doing was having the update process keep track of what data Solr had seen last — we kept track of something like an index number for a data set. A much more straightforward way to do this would be to leverage ZRS replication. The ZRS replication protocol is extremely simple: a client connects to the ZEO server and says, I've got this tid, and the ZEO server says, okay, I'm going to send you all data after that tid, forever, until you disconnect. That's it. And the data it sends are very similar to what you see if you've ever run one of the database iterators, like the FileStorage iterator. So it's a really simple protocol, and I'd like to see people write applications where, instead of replicating to another ZODB database, they look at that stream of data and update Solr, or Elasticsearch, or a relational database. Any situation where you have an external index you want to update, or an external replica that might be easier to write reports against, this would be a really interesting way to approach it. One of the things I'd like to do, if I don't find a reason to do it for myself, is at some point write a little module that just provides you the iterator and does most of that for you.
[Audience question.] Potentially, yeah. Well, I think it does — that's how OIDs are dealt with now, although I'd sometimes like to not do it that way. And what you're talking about is not so much a serial as a way of generating IDs that aren't necessarily ascending.
Okay. So one of the challenges in my evil plan to get ZODB more alive as a project is to get the wider Python community to know about it again. One idea: at my last job I did quite a bit with Pandas. As an aside, I found myself saying something I never thought I'd say, which is that I found it much easier to do data manipulation and data wrangling in PostgreSQL than in Python and Pandas. But anyway, I used Pandas quite a bit, and sharing the data sets within the team was kind of awkward. So I think it would be really interesting to have persistent Pandas data sets built on top of the ZODB blob mechanism. That would be a fun project, and I might do it soonish if somebody doesn't give me better things to do.
Something we've talked about for quite a long time is what I call a JSONic API — because I like to make up words. The idea is that you should be able to look at a database without having any of the classes around. We've had several ZODB browsers like that, but it would be nice for that to be a more widely available API for accessing a database as JSON rather than objects — when I say JSON, I really mean dictionaries and lists and tuples. Having that mechanism readily available might be interesting; again, if you're listening to a ZRS stream, getting JSON rather than pickles might be much more convenient for some applications. Carlos — where's Carlos? — has threatened to sprint on this during the sprint, so maybe somebody's interested in helping him with that; I'm going to try to help remotely.
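As a taste of the kind of module I mean, here's a hedged sketch of walking a storage's transaction iterator to feed an external index. It doesn't speak the ZRS wire protocol — it just uses the plain FileStorage iterator mentioned above — and index_record is a stand-in for whatever Solr or Elasticsearch update you would actually do.

    import ZODB.FileStorage
    from ZODB.utils import u64

    def index_record(oid, tid, data):
        # Stand-in for updating Solr / Elasticsearch / a reporting database.
        size = len(data) if data else 0
        print('would index oid %d as of tid %r (%d bytes)' % (u64(oid), tid, size))

    def follow(path, last_tid=None):
        """Replay committed data, in commit order, starting at last_tid."""
        storage = ZODB.FileStorage.FileStorage(path, read_only=True)
        try:
            for txn in storage.iterator(start=last_tid):
                for record in txn:
                    index_record(record.oid, record.tid, record.data)
                last_tid = txn.tid
        finally:
            storage.close()
        return last_tid   # remember this, the way a ZRS client remembers its tid

    follow('data.fs')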
So, ZRS: failover is manual right now, and I'd like to make it automatic using some sort of leader-election protocol. We were pretty lucky at Zope Corporation — we hardly ever needed to fail over; I think there were only one or two times we had to fail over unexpectedly. AWS is pretty awesome: they would often tell us in advance that a machine was going away, so we could plan. And it also seemed — I don't know if anybody else has noticed this — at Zope Corporation, when we were thinking about servers, we classified them into precious and despicable. The despicable servers were always running in auto-scaling groups, even if there was only one of them, so that if they fell over they would be replaced, whereas the precious servers required a lot more care. But it seemed like the despicable servers tended to get wiped out a lot more often than the precious ones, and I have a suspicion that's part of AWS's policy: if it's in an auto-scaling group, it's despicable.
Docker images — maybe official Docker images would be good. There are actually several Docker images; I looked on Docker Hub this afternoon and wasn't really surprised. There are several Docker images, which is good, although the one that was most popular was the Plone Docker image, which I guess implies Python 2. Unfortunately, because ZEO currently uses pickle as its networking protocol, you can't have a Python 2 client talking to a Python 3 server, or the other way around, which is really unfortunate. The byteserver uses MessagePack in part to escape pickle, sadly. So a Docker image should really, until that problem is solved, probably identify which Python it needs to be used with.
I think a ZEO authorization model would be really interesting, especially if people ever start using ZODB for non-traditional applications like client-server applications. It occurred to me that just stealing the traditional Unix file-system security model would be pretty easy to implement and could be pretty useful. Persistent classes I still think are interesting — with all the trouble that ZClasses had, mostly, I think, because I left them to wither, I still think persistent classes are potentially pretty interesting if we could figure out how to do them someday. At Zope Corporation, whenever software was in the database, I could deploy it transactionally, which is really, really cool. Other languages: I'd like to increase ZODB's audience. JavaScript, unfortunately, is the most obvious choice, even though it's not very compatible. Ruby would probably be pretty compatible. Scala would be really interesting, just because I really enjoy Scala, and they've got this macro system that would probably allow the automatic-persistence thing to happen. And that's it. So, you've been pretty quiet — any parting questions?
[Audience question about the client cache.] Right — I think that's a great idea. You mean the ZEO client cache, right. The funny thing about the client cache is that the better you configure your systems, the worse it does, because both the client cache and the object cache try to use what I prefer to call a most-recently-used model.
And the thing is that if the object cache is really successful at keeping the most recently used, the most used objects, then it's hardly ever going to request any objects from the client cache. And so the client caches don't really have a signal of what's actually good. So yeah — except that the placebo effect actually works. Okay. I would love to do that. I'm a big fan of instrumentation. At Zope Corporation, I'm really proud of a lot of the sort of DevOps-y things we did towards the end of Zope Corporation. We did a really good job of having lots of metrics and having lots of graphs of metrics. So I'm a big fan of that. That's kind of challenging because the tuning is complicated, and I definitely think it's a good idea. I think things like that should be instrumented. I think the storage server should be instrumented better as well in terms of knowing what kinds of requests you're getting, how many reads. Something we did a lot, in a very hacky way, was getting a better handle on conflicts and what was conflicting so we could try to figure out why. Because it's really, sadly, easy to make a mistake — like allocating keys sequentially — that causes lots of conflicts that you don't expect. Or a common pattern that we came up with in Zope 3 that I regret was the ID service, where it kept a table of object IDs to integers and a table going the other way around. And basically, you think you've solved the problem in one direction, not realizing that the problem is still there in the other direction, and so you'd still end up getting conflicts due to bucket splits. If we had better conflict resolution there, then we could possibly make that whole problem go away. But, yep. Okay. Thanks. Yes. Well, it's actually pretty easy to arrange that the storage server only accepts certain globals. Well, actually, okay, if we get away from pickles then it actually gets harder. Right. Right. Right. It used to be possible. Well, what we used to do is — we did this on the server, or on the client, I guess, I don't know. It should be possible to do some sort of whitelist. Well, okay. So on the potential victim clients, it'd be pretty easy to have a whitelist, and we had a server-side storage wrapper, trivially implemented, that provided a whitelist. We just by accident never open sourced it, and Zope Corporation is gone. So it's kind of gone, but it would be really easy to implement. It was a trivial implementation to do that. And it could just be done as a storage layer. Well, I'm sorry — actually, it could be more easily done if we hooked that into the ZODB machinery itself. So when you created a database you could provide a whitelist. That might even be a better way to do it, because then it'll be part of the regular deserialization mechanism. So, I mean, some things that we've taken for granted in the past get harder when you have parent pointers. So, for example, exporting a part of your object tree — I can't remember if we solved that in Zope 3. We may have, but we had to really work a lot harder because, yeah, yeah. Something that I thought about doing over the years, and I've sort of started to implement at various points, was to do reference counting garbage collection in the storage server, which would be a lot — I mean, which you could still do. You could sort of do a lot of garbage collection earlier and more easily without having to open up the records.
Like, for example, if the data format that we sent to the server had the external references outside of the actual data payload, then, without looking at the payload, you could still do reference counting garbage collection. Or you could do any kind of garbage collection, but you could do reference counting garbage collection sort of in real time. And that would be — potentially, yeah. Right. Yeah. That's a good question. That's a good question. I wish I could say that I've run its tests and verified that it works with everything else, but I haven't. It probably works. Somebody should run its tests. Well, transactional undo doesn't have to go away. You would do undo — because undo is rare, you could just get rid of the optimization and say, when I undo records, I just copy the older records forward. History is still there. Time travel — I'm a huge fan of time travel. I'm especially a huge fan of a certain time travel error. Anything else? So I would love it if you would chip in on the ZODB list, which is a Google Groups list, when you leave this room and you remember your pain points. And if they're ones that I haven't mentioned, I encourage you to bring them up. OK, thank you very much.
|
Jim, the father of Zope, will talk about recent improvements and what lies ahead. The Z Object Database, ZODB, is a core foundation of Plone. There has been a flurry of work on ZODB over the summer, which is continuing. The talk will provide a deep dive on recent developments and development plans. Jim will also be looking for input from the audience on priorities, and new directions.
|
10.5446/55294 (DOI)
|
Hello everyone, welcome and thanks for the reminder. My name is Fulvio Casali, I'm not Calvin. I'm taking his place because I actually assisted with and taught some of the trainings in the past few days, in particular the one that we're going to talk about now, so I might as well just do it again. And, oh, I should plug in here. All presenters were asked to put that slide that Christie showed you at the very end, and I forgot, so I'm doing it at the beginning. Please, after this presentation, please go to ploneconf.sixfeetup.com to give your feedback. So again, my name is Fulvio Casali and I have been a Plone developer for eight years, from Seattle, Washington. And I'm here to talk about Dexterity, creating content types for Plone through the web, as a follow-on to Chris's talks on theming through the web. And I just want to give a bit of a shout out to this effort that has been in progress for the last couple of years, where we finally got to the point where we have excellent documentation for Plone, which has been sorely missing for years. And at the same time, also trainings. Philip Bauer and his colleagues started writing and offering a training curriculum for Plone in Germany and basically publishing it as open source for anybody to use, and that has led to a complement to our documentation for people to use and to learn about Plone. And in the past — well, last year in Bucharest — I also gave one of these trainings, and this effort has been really taking off, and so a lot of people have been contributing and improving this documentation and it's now getting really good. And one thing that we've heard was, okay, great, now we have documentation, but it would be great to have video trainings also, and so this is what this talk is, because it's being recorded. So it's sort of like a short, condensed version of what we did in the past two days, of some of the topics that we covered in the past few days, and so hopefully we're going to start building a library of video trainings. No doubt they will be replaced and improved, but this is just the first shot. And so, down at the bottom right, this cycle of trainings is about through the web, and in particular we're talking about Dexterity and creating content types. What do we mean by content types? Because this is being recorded and anybody could be watching it, I'm going to give an analogy that a lot of people are probably familiar with from working with other content management systems or other web frameworks, and that is, you know, the relational database type of idea, where if you think of a table in a relational database, you have a bunch of columns, and each row in the table is a record that has different values in each of the columns, and the number and the types of columns in this table is the schema for the table. And that is sort of what we mean by content types in Plone. We're thinking of a particular unit of content that has certain fields that have certain attributes, and all of the content items in our website that were created with this schema have the same fields. I should say also that Plone does not use a relational database in the back end, so this is just an analogy; it's not actually how it works behind the scenes, but I think it's a good analogy.
So for, for definitions the, the schema is, as I mentioned is the type, sort of the, the, the set of columns in our, in our table in other words what the fields are that we want for our, for our objects. However, that's not all because we also want a few other bits of metadata that describe our, our content type and that is what we call, in Plone we call the factory type information, the FTI which, which includes things like the name, the icon that we'll see when we create or when we look for data content items in the, in our website and also what views are available for this particular content type. So what is a view? A view is the, the representation of this object and it can be visual, it can be a JSON data structure, it can be an XML serialization of that type, that data, it's just how you, depending on how you're going to use it, you can, you can have different views. There's also a view view and an edit view for example, if you, when you, when you're, when you have the permission to actually change this, this data, data element you can go in and edit it and then you see the, the field widgets. Now in Plone, the, the, the content type framework that we use is called dexterity. But this is, this is like the, the latest iteration of content type frameworks. In past versions of Plones we, Plone we used something called archetypes and it's now, it's now being replaced by dexterity. It's obviously we thought the, the community thought that it needed to be improved so it got replaced with dexterity. The main difference and, and the main, the whole point of this, this, this presentation is that, or one important difference is that in dexterity you can actually create the fine content types through your web browser. You don't have to write Python code to, to, to, to, to, before you can actually use your content types. And there are other differences that happen under the, under the cover that I'm not going to go into. So when you are logged into a Plone site, the first place, the, the most common, the most frequent place where you run into content types is the add new menu. Here are all of the content types that are, that come out of the box with a, with a brand new Plone site. So the most common one that you're going to use is the page. This, the home page is an example of a page content type. There are others like a news item, an event that has start and end dates, participants, location and other information that is useful to have for an event, a folder which is just a content for other, a container for other content types and so on. But we are actually interested in defining something new, something that's not on here yet. And if you've seen Eric's keynote earlier, he actually went through the same exercise that I'm going through now, he created a recipe content type. I'm going to create something else. I'm going to create a, so basically I'm, the assumption here, the premise is that this website is for, for, for, is the, is the website for a conference and a lot of the content in this website is going to be pages about talks, about presentations. So I'm going to create a content type called talk. And I find the, the control panel for creating, creating and, and viewing and editing content types in the dexterity content type control panel. And here you see the default types already. So you can actually, actually already just go in here and change the default content types, which I actually do with, for my clients a lot of times. 
I go in and add some fields that they want for pages or for news items or for what not. But here, now I'm actually going to create a new one. So this is going to be called talk and I've created it. And now I'm going to give it some fields. So I go to the fields tab and we can see that it already has a couple of fields that come sort of out of the box from what we call behaviors. We can also remove them if we want to, but for now we're just going to not worry about those and add a new one. So I'm going to call this field the presenter and fields have just like columns in a database table have type, a type. So here are choices for all of the types that we can use right out of the box. In this case, it's just going to be a string. So it's a text line. And let's make it required because every presentation will need at least a presenter. And there it is. Now I can go into the settings link and change some of the attributes of this field. For example, I can make it not required if I want. I can give it a minimum or a maximum length. And these attributes depend on the type of field that you're going to be using. Let's add another field. Let's say type of presentation. And this one is going to be a choice. So it's basically a pick list where now I have to go into the settings because right now it has no values. So I'm going to give it some values. So it could be a keynote or it could be a training or it could be a talk. And now already you see here the values, the allowed values that I gave it. So already we can use this, go back to the home page, go to the add new menu and there is our new type. And if I create a new one, I see the title and the summary as we've seen. Those were already there. And here is the presenter which is required as denoted by the red dot and the type of presentation. And because it's required, if I move the mouse out of it, it's telling me, hey, you forgot to put something in there. So let's actually, I'm going to put my name in here. And I'm going to say this is a talk. Save it. Oh, of course, I forgot the title. Save it. Should always give it a summary for search ability. It's very important. But I am cheating right now. And here it is. And I can also see it in the contents of my website. No? Oh. There. There it is. And it inherits all of the functionality that other content types have. For example, I can publish it either through this menu, state, or through the state's menu that's in the toolbar. I could do that as well. I'm just going to do it through here. And there we go. So I created a content type, and I created my first item of the talk content type. Before I go, I just want to mention a couple of other things. So Dexterity allows you to do this through the web, through a browser, but you don't have to. You can do something through code in an add-on that you create, that you instantiate with Mr. Bob, and then you put it in your GitHub repository and all that, all those common development practices. But you can also move back and forth between through the web and through code. So for example, I can, and this, so actually I could, for example, I could write a new type. Right now I'm working on my local host on this machine, but I could just now move this type to a production website by saying I check the box here, and then I say export type profiles. And this is basically the FTI definition that I mentioned earlier. It gives me a zip file that I downloaded on my local hard disk. And now, actually let me do this. So all right, let's look at it. 
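As an aside for anyone following along in code rather than through the web: the talk type that was just clicked together corresponds to roughly the following Dexterity schema. This is only a sketch in the usual plone.supermodel / zope.schema style — the field names and vocabulary values simply mirror the demo, and nothing here is required to follow the through-the-web steps that continue below.

```python
# Roughly the "talk" schema from the demo, written as code instead of
# through the web (a sketch in the usual Dexterity style).
from plone.supermodel import model
from zope import schema


class ITalk(model.Schema):
    """A conference talk."""

    # title and summary come from the default behaviors, as noted above

    presenter = schema.TextLine(
        title=u"Presenter",
        required=True,          # the red dot in the add form
    )

    type_of_presentation = schema.Choice(
        title=u"Type of presentation",
        values=[u"keynote", u"training", u"talk"],
        required=True,
    )
```

The XML that the export buttons produce in the next step is essentially a serialization of a schema like this one.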
So now I've unzipped it. And actually I'm just going to drag this into my text editor. I'm not going to do it. Just, okay, well, let's do it. All right. So here at the root of this zip file, there is a, this is a generic setup profile import step XML file definition. And in the folder that has the name of the content type, there is the FTI XML definition. And it also contains the entire XML schema definition that is now encoded, so it looks really ugly, but it's there. And if I go back to, let's say now I'm creating a new site. Now this is a brand new site I just created. There are no talks, but I can go to the dexterity control panel and click the import type profiles, browse to the file, where am I? Browse to the zip file that I just downloaded from the other site. And there's my talk. And there it is. I create, I can't create it. I think that's all the time I have. And I did not forget. So please go to submit your feedback at plonkopf.sixfeedup.com. And thank you for coming. And do you have any questions? What's the difference between those two export options? Oh, there were two export options. Okay. So we exported the type profile. The schema model, I'll show you what it is by going here. That's what that is. That button would give you this XML file. And this is the schema description. See it has the presenter, which is a text line. It has the type of presentation, which is a choice. And there are the choices. So that's the entire schema definition of my content type. The use case for this is that I can actually go in here and change it. This is an editor. This is a text editor. I can make changes to, I can say, required true, for example. Save it. And this change is immediately applied to the content type on my website. I can also download it from the main dexterity control panel here. I can export it. And I do it. And are there like model options that aren't available here? Yes. Can you find yourself going to that sometimes? Because it's the only way to do it. Yes, I do. For example, one thing that you can't do, there are many things you can't do through the browser. So you will hit a ceiling pretty quickly. But one example is some fields, you want them to be available only to site administrators and not to editors. And so those permissions you cannot do through yet through the browser. You have to actually, but you can do them just by editing the XML schema. Yes? Can you easily add an icon for that new desktop? That is unfortunately not yet. So you've noticed, you've noticed, add new. Everything has an icon. The talk does not. Sorry. Yeah. Although, maybe, I haven't tried it. It's possible. Can you have behavior? Yes. Oh, behavior is a whole other topic. But yes, yes, yes, yes. That's in fact, that's one of the main use cases. There is this tab behaviors. So there we looked at the fields. We added some fields. And there's this other tab behaviors that has a lot of behaviors that are really, really useful and important. In fact, that's, most of the time, that's what I do. I don't actually add fields. I just turn on and off behaviors for content types because it just gives me a whole lot of things at once that I don't have to, nothing else to do. Yeah. If you take an existing content type like events, and if you add something, take an upload field because I tried this, an upload field, and then you create a new event. Because I remember that new field doesn't show up. Am I wrong? It should show up. It depends. It should show up. It may be that it ended up in a different field set. 
So for example, here, these are field sets that we get just from behaviors by default and we can turn off. And sometimes people create field sets by mistake instead of fields. And then they add, so. Okay. When you're done, could I come up and try it? Sure. Sure. All right. Thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you.
|
A demonstration of creating new content types through the web using the Dexterity content types framework in Plone.
|
10.5446/55297 (DOI)
|
How many of you have had to programmatically rescue your customers from human error? Okay. How many of you found it painful? Okay. I still have a site that was broken a week or two ago that I just haven't had time to fix. Okay. I just hope he finds someone else. The second question might not seem related, but it's related in concept. How many of you have some kind of need for an audit log of changes that are made to your site, for regulatory purposes or any other reason? Okay. My goal here is to talk about an approach that I have, pitch this approach, and discuss whether or not this is something that might be appropriate at some future point in the Plone core. My initial goal was to demo an actual product. What I have here is a half-finished product that is well-tested but has no user interface. What we're going to talk about is what's going on under the hood, how I'm trying to solve this problem, what I've done before, and what I'm proposing we might be able to do. I'd like to spark a discussion. There's been some interest in this approach in discussions previously on Twitter and GitHub. If there's any questions, my goal here is just to really get people thinking about the viability of this idea and whether or not it's something that we want to, as a community, consider. My disclaimers here. First, this is an experimental approach. I'm developing this in an add-on that can be installed and uninstalled in Plone. If it was ever integrated into the Plone core via a PLIP and the standard release process, it would probably be federated out into a variety of different places, not necessarily kept as a single add-on, to prevent any kind of package proliferation. But it is an unfinished work in progress. What I have working right now in the package, with tests, is a pretty reasonable back-end for modification logging. It uses a variety of events for four different kinds of facilities. Those facilities are modification, deletion, addition, and moves or renames. We'd have the audit log that would tell you, for any of those four facilities, in last-in-first-out order — basically reverse chronological order — which changes were made, who made the change, what the content was, and, if it was a move, we want to do things like log the previous path and the current path, those kinds of things. We're logging some very basic, small metadata. We do some of this stuff in other places — like for modifications we have the catalog, we have CMFEditions — but in some cases it's not going to be sufficient for the kinds of restores that we need to do. For the customer that I had to do these rescues for, they had some data that was stored in the ZODB that was not necessarily content-ish. The content item was a proxy to a lot of underlying ZODB data stored on that object that could be imported via zexp, but could not be simply restored from something like CMFEditions. I want to pitch this idea — I say it's about undo, but really I don't mean undo. I mean restoration in the broader sense. Because when we talk about undo, there was a ticket or an issue that was put on GitHub some months ago asking for us to restore undo into Plone. We had this undo tab in Plone 3, or undo action. We can't do that, because transactional undo in ZODB just doesn't really work in the real world for a variety of reasons. On any reasonably loaded site, after some time, you just can't get the undo to work. I'm here to pitch an idea.
I'm here to show you some of the way in which I manually, programmatically rescued my customer and talk about the audit logging system that I've been working on, but also pitch the need for a UI. I'm working on it. This is work in progress. I was hoping to have it a little more finished before the conference, but this is something that, if there's interest in it, I'd like to spark discussion about. We'll look at the pieces, so we'll talk about them and hopefully find some kind of practical way forward. Here's my motivation. There was a reasonable degree of interest in discussion previously on GitHub and Twitter. This happened after I had gone through three different rescues for one customer. Of these three rescues, one of them was a mistake that our customer service folks made in trying to help out our customer. The other was made by the customer themselves. They got in what I'd say was deep doo-doo, because they gave themselves a situation where they made one modification that cascaded to dozens and dozens and dozens of content items. I needed to actually restore more than one item at a time. On top of this, this wasn't just a modification that affected content. It affected things like embedded indexes that we were storing using repoze.catalog on each of these form items that we were trying to restore — the ZODB content that was stored underneath the content, that had nothing to do with the content fields. When we're talking about restoration, there was really only one option, and that was go back in time, grab the snapshot from last Tuesday, and do a zexp import of that. Rather than do that manually, piece by piece, from, say, a data backup, like an old Data.fs that I've backed up, what we ended up using was zc.beforestorage, because we had this in the undo history. Our default right now is we pack to seven days. We knew that somebody called us on day two saying, we know we screwed up. We get that call, and we know that we have some fixed number of days to programmatically go in and rescue them with the production database, without having to resort to going into backups. What I found doing this over and over again was that this was repeatable. What we were doing was very patterned and very formulaic in terms of how I was programmatically solving this problem. That means I think we can have a user interface for this. Again, we can't use undo in the ZMI slash ZODB sense to really undo things these days. The basics of what we need: we need to have some kind of audit logging system. The goal here, and my approach at this point, is to store this in ZODB. There have been other approaches that people have had, saying, hey, store this out of band. If you are logging all requests, not just writes, you might do something like — collective.firehose being the example that comes to mind — something that might log to a separate database using an asynchronous queue. In this case, what I am interested in doing is simply, at the very end of things, when there is any kind of modification, using the event subscribers to capture that in a very small record inside the ZODB. Right now, logging is synchronous; it might be asynchronous in the future. That is to be determined. One of the concerns I have is that there is increased potential for conflict errors the way I am doing it now. That is usually not a problem, because we retry transactions in Zope three times.
However, if your transaction is very long running, you have a very expensive transaction, and at the end of it you have a conflict error because you are trying to insert something into a list — it is basically a queue of integers. It is sort of ridiculous that that one little tiny thing is dooming your entire transaction, when the insertion could be retried easily. There may be some opportunity for either using conflict resolution or asynchronous behavior to solve all this. We are going to log all these four facilities: deletion, modification, moves and additions — the different kinds of writes that we really care about. When we think about moves, we are excluding additions and excluding deletions from the IObjectRemoved interface, because that is notified in a variety of ways. We log all of this independent of the catalog. We do it in annotations on the site. Notice the way that I am handling this right now. The handlers are simple. They call a logging system which adapts the site and uses annotations and built-in, out-of-the-box content — sorry, data types. I am using things like OOBTrees, IOBTrees and persistent lists. I am not subclassing any of them. In theory, my goal here is to make this an easily uninstallable product. By using those out-of-the-box data types, I am making this fairly low risk. I just have some annotations that are stored in your site using standard data types. If you try this and then you uninstall it once this thing is ready and you don't like it, it can go away without being a problem. The goal here is to have a testable, installable and uninstallable add-on that can be a proof of concept to verify that this is a reasonable way to restore things from the past. How are we going to use this? We have the core components I have tested. I have integration tests for a variety of things, including things like pruning the logs. It still needs a good user interface. Outside of the core use case, the audit logging could be used for other things like regulatory audit requirements. For example, if you have HIPAA requirements, it doesn't solve everything, because in some cases you might actually need to log things like views. Right now we just use the Z2.log and collective.usernamelogger to log all the authenticated users' access. In theory, from a regulatory standpoint, if I was wanting to do something that was HIPAA compliant for healthcare privacy laws in the United States, I could potentially get away with just using my Z2.log plus some kind of audit log that I am storing in the ZODB. My UI plan here is enumerating things and views. By enumerating, I mean we need to be able to view all those four facilities: modification, deletion, addition, and moves or renames. We need to be able to handle the fact that somebody might delete a parent object and you need to restore something that has a deleted parent. We can handle all those cases potentially in restoration, but we also need to have a way to prune changes depending upon how long we want to keep history. You may pack to seven days, but maybe you want to keep six months of audit history. That could be up to you. How this would be used and how that would be automated potentially — like, right now, keep in perpetuity unless you use the view to go prune it. It's not very big, so I don't think it's that big of a deal to keep that information, because these are small records. These are records with four different fields.
They're very, very small metadata records. They're not going to take up a lot of space, they're not going to take up a lot of overhead. But anyway, you're going to have the RAM overhead of having a few extra BTree buckets. I may need to fix the conflict resolution strategy, because we have a queue that basically keeps the order of these inserted records for each of those facilities. That queue — it's not really a queue. It's more like we just do insert(0) into a persistent list. We're basically doing it in LIFO order, but we need to avoid conflicts. In order to do that, I actually have to do this either asynchronously, or we have to look at doing a _p_resolveConflict method on a subclass of persistent list. I don't want to do that now, because that becomes yet another data type that you have to have in the package that potentially gets persisted. That's not uninstallable. I want to punt that to a future date. We log every change. We want to be able to filter changes by things like path. I want to see every change that happened in the folder foo slash bar. I want to be able to filter by path. I want to be able to filter by user. I want to know everything that a particular user has changed in the last 60 days. We want to be able to filter by date. Some of this filtering might be indexed and some of this filtering probably won't be indexed. That's okay, because this is really an emergency facility. It doesn't necessarily have to be fast. The records are small to iterate through. I'm not trying to over-optimize this at this point. Filtering right now goes through every single record and just tries to find matches. It's not an indexed filter. The goal is to have a view that allows you to list, in reverse chronological order, all changes for that facility: modification, deletion, addition, or move. For regulatory logging, you could have a data retention threshold before you prune anything. You could potentially never prune if you wanted to keep the audit trail forever. The change records are small, so there's very little risk to that. I talked about the facilities that we would try to have. Each facility has two items stored in an annotation. There's an IOBTree that stores the actual records and there is a persistent list that stores the keys. The keys are automatically generated integers. Each of these four facilities has the ability to keep track of every change in insertion order. Each is going to log the following four things: UID, path, authenticated user and timestamp. Right now, that's local time. We may choose to do this in UTC. I haven't made a design decision on that yet. Also, for things like moves, what we probably need to do is log both the path before and after the change, so that we have the ability to restore something to its previous path. What do we do with this? We fix human errors. Use zc.beforestorage multiple times programmatically to fix these errors. If you have something within your kept history, that's great. If you don't, you've got to resort to going into your backups. Hopefully, you have reasonable backups or snapshots or some system that gives you some way to go back in time if you're not able to do it from your production database. In each of the cases where I've had to rescue a customer, it's been from the production database, because it's been within that seven-day window. Usually, that oh-no call comes very, very quickly once somebody realizes the scope of what they've done.
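A minimal version of that logging backend might look like the sketch below. The storage layout — an IOBTree of records plus a persistent list of keys kept in annotations on the site, one pair per facility — follows the description above, but the annotation key, helper names and exact record layout are illustrative guesses, not the actual plone.wayback code.

```python
# Sketch of the audit-log backend described above: tiny records kept in
# annotations on the Plone site, newest key first. Names are illustrative.
from datetime import datetime

from BTrees.IOBTree import IOBTree
from persistent.list import PersistentList
from plone import api
from zope.annotation.interfaces import IAnnotations

KEY = "wayback.audit.modification"   # one records/order pair per facility


def _storage(site):
    annotations = IAnnotations(site)
    records = annotations.get(KEY + ".records")
    if records is None:
        records = annotations[KEY + ".records"] = IOBTree()
    order = annotations.get(KEY + ".order")
    if order is None:
        order = annotations[KEY + ".order"] = PersistentList()
    return records, order


def log_change(site, obj):
    records, order = _storage(site)
    record_id = records.maxKey() + 1 if len(records) else 1
    records[record_id] = {                      # the four fields discussed
        "uid": api.content.get_uuid(obj),
        "path": "/".join(obj.getPhysicalPath()),
        "user": api.user.get_current().getId(),
        "when": datetime.utcnow(),
    }
    order.insert(0, record_id)                  # LIFO: newest change first


def handle_modified(obj, event):
    """Subscriber for IObjectModifiedEvent; siblings would cover the
    addition, deletion and move/rename facilities (moves also logging
    the old and new paths)."""
    log_change(api.portal.get(), obj)
```

Such a subscriber would be registered in ZCML for the relevant event interfaces; the write-conflict concern mentioned earlier lives almost entirely in that `order.insert(0, ...)` call.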
I'm creating a package right now called plone.wayback, which is a transitional package and an experimental proof of concept. This is currently on my own GitHub account. I don't have the permission to create repositories on the Plone GitHub. If anybody wants to create a repository for me, I will put it there and keep working on it. I'm also soliciting collaboration, if anybody wants to work with me on this. The general idea: Wayback is in reference to the Wayback machine. They named it WABAC, sort of like in reference to ENIAC, when Mr. Peabody and Sherman were originally created as a cartoon. It's a transitional add-on. It's half finished. It's probably going to get exploded and federated elsewhere if it ever is integrated into the Plone core. That's assuming that everybody actually likes this thing. We are assuming that I would write a PLIP for this, assuming that there was sufficient interest in folks working with this, playing with this as an add-on. If I get positive feedback, I'll write a PLIP. I want this to be testable by users. I want it to be uninstallable in the meantime, with very little risk, so that folks can give me good feedback. This is probably something that will be finished next week, at least in some very basic state. It's not going to handle all the edge cases, but it should be able to handle basic things like restoring the version of content from last Tuesday, or undoing the deletion that you had last Monday, or whatever is within your actual transaction history in your production database. We want to restore previous known-good state before accidental deletion. Modification we potentially could do with CMFEditions and other ways, but some of the problems are that CMFEditions deals with only certain stuff. My assumption is that I have things that CMFEditions probably has not taken care of. My belief is — I don't necessarily know that to be absolutely sure — I would probably trust going back via before storage to grab a copy of the content rather than using CMFEditions for a rollback of some kinds of content. The goal is that the user simply just views the audit logs and filters into the audit logs, finds the things they want, clicks a couple of check boxes, and clicks restore. That's simple. Potentially more than one item at a time. zc.beforestorage is a really interesting thing. It basically wraps your existing storage, whatever that is. That could be ZEO, FileStorage, RelStorage, anything. It requires you to keep history. If you're using a history-free storage, that's a problem for you. It requires you to keep enough history. My eventual goal would be to have the ability to point to some directory where you have a bunch of file storage instances that have some kind of naming convention, where you could ostensibly go back through and find stuff from your backups. That's not in scope, and that would require some degree of configuration. It would require file naming conventions. There's a lot of things that I haven't thought about yet. I don't want to solve that problem quite yet. Mostly I just want to undo things easily. When that phone call comes in, I don't want it to be three hours of me writing some script to fix it. I want to be able to tell a site administrator how to fix it. Programmatically, the way that you would use before storage without some kind of tool like this is you'd load the storage wrapper programmatically in a run script.
You would use setSite on the time-traveled version, last Wednesday's snapshot of the storage. You'd get your content, you'd export it using the zexp functionality, probably to a StringIO or to a temp file depending upon the size of the object. You then setSite on the live target, that's the current version of the database. You'd restore it from zexp. There are a few other things that you have to do, like restoring the original state. If you're using content creation APIs, you probably have to set things like the original UUID and potentially deal with NVID problems and other things that I haven't quite dealt with yet. But then you'd repeat that. What I'd like to show you at the end of this is some example code of how that's been done. Before I do that, a few disclaimers. First, you may want the ability to not undo things. Somebody comes to you and says, our site's been defaced by an employee. That employee has been fired. You need to delete this thing. It needs to be permanent. Nobody can roll this back. There needs to be the ability to purge things from the undo history so they can't be rolled back, even if they still are in the ZODB undo history. My vision right now is a site-wide audit logging facility that would only be accessible at the site root, as a view, to site administrators and manager roles. You'd basically have to have some kind of superpowers to be able to roll things back. That's fine. I'd rather it be a site administrator than me as a programmer. We can make this placeful, meaning you could actually say, hey, I want to roll back things that are in this folder. But that needs some thought about permissions and local roles and other things that are not yet in scope. Next steps: I need to finish the user interface for enumeration, filtering and pruning of the audit data. The backend is done, but the front end needs to work. I also need to come up with a reasonable tactic for avoiding the write conflicts on the key insertion for the audit logging. That, in terms of testing, is not a necessity, but it matters if we were to do this in a large-scale production site with long-running transactions — and by long-running transactions, I really mean just a transaction that takes longer because you have a high level of concurrency and your database is in high use. We need views and adapters for restoration via before storage. I'm working on that piece. That's basically taking what I've done programmatically, repetitively, in rescuing my users and trying to generalize it. We want to log the restorations themselves. Once you've actually restored something, we need to keep an audit trail of that too. If this proves universally useful, then I'll PLIP it. Before we get to questions, let's show an example of what using before storage looks like. You can also provide feedback with a link from Six Feet Up. Let me go to... Let's load this up. Hopefully, I can... In the show, I'll try to make this brief. In the meantime, while I'm loading this, are there any questions? There's the ZMI undo tab. If you were making a change that really... I wouldn't ever recommend a user use it. That's why it was removed after Plone 3. I think it was Plone 3. When was it removed? I think in Plone 3 — one of the Plone 3s — they were removed. Once folks realized it didn't work half the time, I think that that became a fairly sufficient motivation for the community to remove that.
I've used undo in the ZMI all the time, usually for stuff that I've done or stuff that somebody comes to me immediately and says, I made this mistake 30 seconds ago. I don't always have that luxury. Once time... It's a function of the passage of time and the addition of more rights to the database, which is why time travel becomes a necessity. There is undo on a single object. There's version. You can refer to a previous version, compare a version with content that's not always enabled by default on all content types, but it's available. I had another question. Yeah, shoot. On to your UI dilemma. Yeah. A very common scenario that I get is somebody updates content. Sure. Two weeks later, somebody else says, oh, this is broken. What happened? Nobody knows when it changed, but they know it changed from one state to another. Okay. Yeah. You can go in and certify a specific date for time, but you need to look through the content to figure out when it got messed up. The solution to that is you need to not just be able to enumerate all the changes that are made on the site, especially a busy site. You need to filter to specific path or users. In your case, you're saying you don't know who made the change. You don't know when it was changed, but you know this thing screwed up. With modifications, I think you probably do have some, in most cases, with most content, you have some degree of history. Use it. If your workflow history on your object includes all the saves and changes, as well as things like state changes, then you absolutely have the ability to know, Joe did this two weeks ago. He's to blame, but we still have to fix this. I know where to look by date. In this case, you're just saying, I know the content screwed up. You're searching by path, right? Unless it was moved and then you've got some other things. Yeah. Yeah. There are some edge cases where you're going to have a needle in a haystack. I don't think if you can filter through these things, even if the filtering is somewhat slow, because you've got to go through linearly search through all of these records and load them up for the past six months or whatever, however you're retaining these things. I am envisioning this as a scenario where I say, hey, you've got two week undo window, and beyond that, we're not doing anything. Your scenario might be, I don't know when they modified it, and maybe it's past that window. At which point you'd say, well, something like this really ought to support the ability to go into your stored backups. You can say, this is the directory where my backups are, and be able to pull things out of load a file storage by naming convention, go in and use before storage to wrap that file storage, not your production database. This is a reasonable thing to do. I say, I have this specific script that says, I want to basically go to before this date. I want to use this specific rel storage database, and I want to wrap it with before storage. This configuration becomes something that I then use to load before storage. What I'm doing is getting two databases. I have the current setup, and I also have this before storage, getting the database connection and the site route right here. I'm doing set site on the source data while I retrieve it, and then I'm doing set site on the target data while I restore it, so that I can have access to local components in the respective time travel context. 
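In outline, the run script being described boils down to something like the sketch below. It is deliberately schematic: the zc.beforestorage constructor arguments shown here are an assumption to check against that package's documentation, the surrounding database setup is elided, and exportFile/importFile are the standard ZODB connection-level hooks behind zexp export and import.

```python
# Schematic version of the rescue run script: open a "before" view of the
# storage, zexp-export the old object, import it next to the damage.
# The Before(...) arguments are an assumption to verify against
# zc.beforestorage; the rest uses standard ZODB/Zope calls.
import tempfile

from ZODB.DB import DB
from zc.beforestorage import Before
from zope.component.hooks import setSite


def restore_one(live_site, base_storage, path, before_timestamp):
    old_db = DB(Before(base_storage, before_timestamp))   # time travel
    old_conn = old_db.open()
    try:
        old_app = old_conn.root()["Application"]
        old_site = old_app[live_site.getId()]
        setSite(old_site)                   # local components of the past
        old_obj = old_site.unrestrictedTraverse(path)

        tmp = tempfile.TemporaryFile()
        old_conn.exportFile(old_obj._p_oid, tmp)   # zexp of the old state
        tmp.seek(0)

        setSite(live_site)                  # back to the present
        restored = live_site._p_jar.importFile(tmp)
        parent_path = "/".join(path.split("/")[:-1])
        parent = (live_site.unrestrictedTraverse(parent_path)
                  if parent_path else live_site)
        parent._setObject(restored.getId(), restored)
        # ...plus the cleanup mentioned above: put back the original UUID,
        # reindex, repeat for the next item, commit the transaction.
    finally:
        old_conn.close()
        old_db.close()
```

That per-item loop is exactly the repetitive part a check-box restore view could drive for a site administrator.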
My functions that are calling these things are typically passing in source and target site, and they're doing specific things. The fact that I'm restoring these things is not germane to the larger discussion, which is we can just simply say, I have content items that need to be restored. This is very easily generalized. While this is a very specific run script that says, do this thing, grab these things, add them back into the site, manipulate them so that they look like what they looked like before, things like change the UUID back to its original UUID, that sort of thing, all of this is totally generalizable. There's no reason why we can't do this with a user interface that allows folks to filter through the audit history, click some check boxes, and say, give me my stuff back. That's the goal. I'm wondering, is there any data you pass to the org storage so that you get the data from a specific date that can be used? You have to. I think the question is, how do you know the date for the before storage? If you're clicking some check boxes, you have to go back for each individual item, you may have to go back to a different date depending upon when it was modified, so you might have to load the storage multiple times. That's not unreasonable to do. There's some overhead in doing that. But specifying the date, what you'd end up doing is you'd say, give me that modification date, subtract, add one second to that with time delta, and then use that as your time stamp for things you want before. So you have some reasonable guarantee that before plus minus one second is basically what we're looking at, because the time stamps in the ZODB, if I remember correctly, are not granular enough to do things at the millisecond level. So we're just saying it's a little crude, but it's going to work most of the time. And that is, take the date of modification that was logged, add one second to that. I'm not sure if the time stamp for the actions are granular. I don't know. Are you referring mainly to this? So yeah, I should be clear. There is a risk here. If the transaction takes a long time to commit and your event logger logs something, one second may not be long enough. If it takes five seconds between the time that the event logger gets a date and the transaction puts a time stamp on that specific transaction, you do have a problem with that drift, that specific, I can't predict what that's going to be. So I need to experiment with it. That is a challenge. I think the ZODE time function that gets used for that stuff has a fixed time once the transaction starts. Like you can call ZODE time over and over within a single transaction and you always have the same time stamp. So there's no difference. I believe that's what you use to record the transaction. But I'm not positive. The start of transaction is basically what you get recorded. So if the start of transaction is what's recorded, then we're in great shape. If it's the end of transaction, I can't predict when the transaction time ends. I'll need to find out. One way or another, I think I can have a workaround if it becomes a problem. Yes? The director of the London presented the site they did for the Brazilian government for the Olympics and they also created an add-on, just the auditing part, collective.fingerpoint. You might want to look at that. Say that again? Collective.fingerpointing. Okay, fingerpointing. Okay. Yeah. They probably had another use case. Yeah. And I have looked at that before. I've looked at that before. 
Other related things: I think Wildcard, in their Castle CMS, has some facility for undo. But basically what I think they're doing is, when you delete an item, they're actually just marking it as invisible and they're changing the id of it or something to that degree. So they're not actually truly deleting the item. So it's like half deleted in a way, basically hiding it from you. So there's some kind of pseudo-trash-bin functionality there. That's a different approach. It's not an invalid approach; this is just a little bit more aggressive. Another question. I'm wondering why you need this event log, because all the information should already be available. For example, if you click on the undo tab in the ZMI, even if you move an item, I think you have both paths available. So this information should somehow already be available, I guess. So if you're undoing something in the ZMI, what's the problem again? You have an event log. In the beginning you were proposing to write an event log where you record all the changes. That's correct. The changes are already recorded. So what's the difference? The difference is that the transaction history isn't capturing some of the things that you need. Now, you might say, hey, you know, the path is being captured in the transaction log. That's not necessarily very useful. And you'll find that what's actually being recorded in the path is a virtual host monster mangled version of the path, which is complicated to deal with. So you can't go into — if you have an admin URL and you have your front end URL and the change happened on one or the other, you can't go into the other URL and figure out the relative path in relationship. It becomes complicated. So my advice is not to trust the path that's stored in the transaction history. Doing this at the application level does make some sense. Instead of using the database's history — the other thing that the database lacks is the ability to tell us. It might tell us what user and what path. It doesn't necessarily give us the ability to add on any extra metadata that we might find useful for filtering. So for example, I want to be able to track back to what path. This item was deleted because the parent of that item was deleted. And I have one transaction in ZODB that says folder A was deleted, but inside folder A was folder B, folder C, and 82 items. Those are all sort of buried under that one obscure single transaction record. Because when I get an IObjectRemoved event — a notify of an object-removed event — I get a cascade of these things recursively down to all the removed objects. That makes things slower, because I have to notify things for all those objects, but that's an expected cost of removing a whole folder with subfolders and content and other things inside it. But you do have the physical path — the physical path of the one thing you removed, the place where you made the modification. But if that place contained 82 other items, the existence of those 82 other items is gone. You don't know that they were there. And searching through transactions is very tricky. Using transaction iterators to find things is probably going to be 10 times slower than having an application log, where we'll potentially find the pain points and say, hey, we really need to index these things by path. So we have some flexibility from an optimization standpoint. If we find anything is slow, doing it at the application level adds very little additional cost.
So yeah, we're storing some of the same information twice, once in the transaction record and once in, you know, some BTree somewhere in an annotation on the site. But these are small records. And the cost of that from a storage standpoint is low. The cost of it from a performance standpoint, we'll see. Okay. Okay. One more. So what you're proposing basically is single-object restoration instead of transactional undo. So how would you deal with the consistency issues if you're going to make this user-facing? That's very true. So here's my thinking about those different facilities. Addition: we probably don't need to use the undo log to undo an addition — we already have a nice facility for just deleting the content. So it's just useful from an audit logging standpoint. Modification: sometimes we have history that gives us what we need. We're potentially logging some additional history that's somewhat duplicative. But if you're saying, hey, you know, we have this subscriber that does these things when you delete the item, it does something else, and content rules, for example, could be running — I can't know about those things. I'm hoping this is general enough that it's solving 98% of the problems that people would have with content restoration, and that people would be able to manually work around the other 2%. And the thing is, I feel like if I can give the site administrator the ability to try to restore this first before coming to me, I'm going to save myself as a programmer a lot of time, and save myself as a responsible party from digging in under the hood. If the site or content administrator has the ability to deal with this, they're going to be more savvy about whether or not it worked than the average end user. So I'm going to trust my power users, just to save me a whole lot of time. Because if you scale this up and think human error happens probably at any given site once, twice a month — something happens, somebody wants something undone. Usually that person just says, oh, oops, I'm going to fix my own problem. Sometimes, you know, if that happens six times a year and each time costs me one to three hours, that's time I want back in my life. So if there's some simple control panel that gives the site administrator the ability to simply find and check items to restore, even if it didn't work in 2% of the cases, if it worked in most of those cases, it would be a big time saver for me. It would be a big convenience for the users. And it makes us look good, you know, as people who support systems. So, thank you.
|
A user-facing restoration system built around zc.beforestorage. Modifications or deletions of content items can have unintended consequences, or be themselves unintended. Ages ago, Plone exposed ZODB's transactional undo facilities to users, but these have proven too fragile for many real-world use cases. This talk presents a new tool integrated into the Plone user interface that allows (usually administrative) users to undo modifications that need to be reverted to previous state. More than just a "trash bin" for restoring deleted items, this tool provides some degree of simple content rollback for one or more content items, and is built on zc.beforestorage. Carefully done, a better cherry-pick restoration of your undo-history is both possible and safe.
|
10.5446/55301 (DOI)
|
Thanks for coming in. It's such a beautiful day. I sort of assumed that you guys would have gone over to Bartos and then just sort of decided to go like sit by the Charles for the rest of the day. So I appreciate you coming by and saying hi. My name is Ethan Zuckerman. I teach over at the MIT Media Lab, which is where you were just hanging out. Let me just say at an extremely practical conference where people are learning lots of wonderful things about this amazing system, this is a hugely impractical talk. But I'm hoping that we can be impractical for a couple of minutes to sort of talk about some ideas that I think actually inform the work that all of us do or I kind of hope inform the work that all of us do. I want to tell a story or two and then I hope we have a little bit of time for a conversation. So let's see where that goes. But you may have noticed that I'm starting this talk with an image of Andy Warhol and it's to evoke this idea that I think it's quite famous by now that in the future we will all be famous for 15 minutes, which has certainly been more than true for Warhol. What most people don't realize is that it's quite possible that your fame for 15 minutes may be total infamy. It may be basically people around the world hating your freaking guts. And that's an experience that I've had and it's actually taught me a lot about my work as a developer. Nowadays my work sort of is a software architect as a teacher and really about technology and ethics more broadly. This is a story that starts for me a very long time ago. My glory days as a developer were really from about 1994 to about 1997. Thank goodness no one actually lets me write any code these days. But way, way, way back in the days I was helping write a software platform called Tripod. Does anyone remember Tripod? Hands up. This is the only room where you would get more than one or two hands up for Tripod. So I want you to head back into 1994. I dropped out of grad school. I moved back to my hometown of Williamstown, Massachusetts, where a couple of friends had this idea. What if we started a website that would focus on helping people basically have life after college? So if you look at what our homepage looked like, we wanted to help you get a job. We wanted you to travel. We wanted you to have a good time. And then kind of sneaking in over here, if you see this little link down here for Homepage Builder. This is something that came up in the middle of the night. One of my developers had this idea. We'd put up a resume builder. You could put together your resume. It would export to PDF. This was really cool in 1995. We figured the same thing. We would just let you have a little bit of space on your server. It would be behind a tilde. You could put up your own homepage. Wouldn't that be cool? And in 1995, that was actually pretty cool. And suddenly we had a whole wave of people creating beautiful works of art. Clearly wonderful. So we looked at this and we felt about it roughly the same way that you did. This is a nice thing to do for our users. But who really gives a crap about this? We had the Blink tag. We supported it. We made it automatic. I suspect that that under construction probably blinked as well. And I'm hoping the text did too. So it turns out that when you let users generate content like this, each individual piece is not particularly interesting. But when you put a whole mass of it together, you actually have a different business model. 
You have the business model that is now, in many ways, the dominant business model of the Internet, which is the model of user-generated content. So we found ourselves sort of in the middle of being basically whipsawed by this rise of the next Internet movement. We were paying lots and lots of money to people to write articles on here's how to get your first mutual fund out of college, here's how to find a great apartment in Boston. And the truth is what people wanted to do was build these home pages. And so we figured it out. We got to the point where we sort of retooled ourselves and figured out how to be a very, very different website, how to let you have your home on the web. But it was an awkward and difficult process. The first piece of the awkward and difficult process was looking at our server logs. When I had set up our server logs, I didn't log any of these total pages. Because I assumed that no one would look at this crap. And the first time I found out about this was when my bandwidth provider called me up and said, I just have to warn you, your bill this month is 10x what it was, the previous month. And I said, what the hell are you talking about? What's going on? And he sat down and showed me his server logs, which suddenly had 90 times as many people coming to my website, it had come the moment before. You may look at this and sort of go, oh my god, like who lived on this web? We were the number eight site on the internet in 1998. We had 15 million users putting up home pages. And we had somewhere around 50 or 60 million people visiting them on a monthly basis, which at that point was utterly huge. So there's another thing to point out. You can see at this point how we were trying to monetize, right? We were throwing up ads, we had text ads, we'd put all of this stuff up here, trying to figure out how to make some money off of this. That didn't happen automatically. When we started this, we simply put up these pages, we didn't bother trying to make any money off of them. Then we realized this is a problem. We're going to have to make some money off of this. Our bandwidth bills have gone through the roof. We did the simplest possible thing. We put an ad banner on the top of it. And that seemed great. Suddenly we had a huge amount of ad inventory. We were going to go out there. We were going to make a ton of money. And then there was one little problem. When you're working with user-generated content, you don't have any control over what the user generates. So let's fast forward to 1996. We have ad sales guys. One of my ad sales guys is out visiting Ford Motor Company. He's showing off the work that we're doing. He's flipping through the home pages. And he finds a Ford ad on top of a page. And he's like, great, I can prove that this is doing the work. And then it turns out that the page is an enthusiastic celebration of gay sex. And Ford is not particularly happy about this. And Ford makes very clear to them that they're not happy about this. And he calls my CEO and says, why the hell did our ad rep find a Ford ad on top of a gay sex page? Why are we hosting an explicit gay sex page? Now, I want to point out that there's a couple of things that I should have done at this point. The first is, anytime you're running user-generated content, do not put in a random page button. It's a really, really bad idea. Just don't do it. It's too easy to write. Please don't do it. It's just a terrible, terrible idea. Please learn from my mistakes. 
Second, what this pointed to is a problem that no one solved, which is how do you monitor content in a user-generated community? Facebook deals with this every freaking day. And they don't do it all that well. You know, they have quite a small team proportionate to the size of their company. It's something like 900 people on staff and then a much larger group that they're outsourcing to. They do a lot of algorithmic filtering, which is how you end up with stories like the girl burned with napalm showing up in a Pulitzer winning photo, getting banned for child porn, despite the fact that it's basically an incredibly powerful symbolic image. They've never figured it out either. We didn't figure it out either. We figured out a way to get rid of some simple pornography. We looked for flesh tones and JPEGs. We reviewed the rest of it. But at this point, with my boss breathing down my throat, all I could think about was how do I get that ad off the flippin' users page? JavaScript's brand new at this point. It's just come out. New functions are being added every day. We're sort of writing functions when we don't have them. I remember earlier in this writing my JavaScript random function because it wasn't in the language at that point. Someone invents the window open method. We can create new windows. And I figure this is my solution. The user can have their page. We're not going to interfere with it at all. When that window opens, I'm going to open another window. It's going to have a navigation console. It's going to have information about what tripod is. It's going to help you move to other pages, find page building tools, and it will have a small ad in it. My friends, I invented the pop-up ad. Let me first say, I'm so sorry. Really. Like, I'm so very, very sorry. Let me also say, however badly you are feeling about me right now, whatever harm you would like to do to me, believe me, worse has already been said. In fact, I'm going to prove. Let me just say that was a really weird week. So, I went off the Internet for about a week. And I went off for good reasons. I was getting death threats. And let me just sort of say, like, this is not me crying crocodile tears here. I generally speaking like a bad week for me on the Internet is what most women online call Monday. The level of harassment that's fairly standard for women in the tech industry is way, way higher than what I got. I got a few people wishing me bodily harm and ill. I warned the local authorities. I also got a lot of good-natured ribbing. But I also got a lot of people who were genuinely upset about what I had done and really personally blamed me for this. And I spent a bunch of time sort of thinking about this and essentially going, the first version of this essentially says, like, come on. Someone else would have figured out how to put an ad in a window. And in fact, we watched our competitors over at GeoCities literally grab my JavaScript code and sort of cut it and paste it into their page before they figured out how to sort of refine their own. So obviously, it was going to happen. There are some bad ideas whose time has simply come. But the more that I thought about it, the more that I actually thought, there's a much deeper problem here. And the deeper problem was not so much where we put the ad. It was our utter and total failure of imagination about how we might support the Internet. Right? So again, 96, right? Commercial Internet has only existed really for two years. 
94 is the year that the web sort of comes into most people's consciousness. Anything was possible. And what we ended up with was something that was so bad, so lame, that we're stuck with today, which is essentially a system for monetizing this thing that we all love that's just pathetic in that it's completely counter to what our users want. It's completely counter for the most part to what our advertisers want. And I've spent a lot of my time lately sort of thinking about this question that technologies have politics. So this is from a graphic novel about Robert Moses. Who's heard of Robert Moses? Cool. For those who haven't heard of Robert Moses, Robert Moses is one of the either great figures or great villains of the 20th century. Moses was in charge of most of the city planning for the city of New York. He ended up sitting on a whole set of boards that had to do with the shape of how New York came to be. And Moses had this very particular vision of how New York City should evolve. He really believed that there should be this sort of dense business core in Manhattan and that everybody else should live out in the suburbs and that the suburbs should basically be parks. So Queens and Brooklyn, they should be beautiful, but everyone should commute in via car. And one of the things that he really wanted was a giant expressway through lower Manhattan that was going to be 10 lanes wide. And this put him in conflict with a woman named Jane Jacobs, who basically said, hey, in building this, you're going to destroy this beautiful organic city that I've grown to love. And this is one of the great debates, not just in sort of urban planning, but actually like in technology as a whole. How should we think about how we build our cities? Should they be built organically? Should they be built from the top down? You can spend your whole life just studying Robert Moses. Like there's enough in this man's story and in his shaping of New York City that people have made entire careers out of studying the guy. One of the big things that people have studied is Robert Moses and bridges. So when Robert Moses ended up designing these parkways designed to bring people out into the outer boroughs of Queens, designed to bring them out to the beautiful beaches of Long Island, he designed these parkways with very low bridges. Most of these bridges have overpasses that only accommodate a vehicle about eight and a half feet tall. Most city buses in New York City are 12 feet tall. This turned into a giant argument in the field of science and technology studies, which I occasionally teach in, called Do Artifacts Have Politics? Beautiful article. Highly recommended. Guy named Langdon Winner writes this in 1980 and basically says, the technologies that we build have politics embedded within them. When Robert Moses decided to design these parkways, he was designing them for very specific people. He was designing them for comparatively wealthy people who had their own private cars, who would live out in the suburbs, enjoy these parks, enjoy these beaches, come into Manhattan, but he was not designing them for poor people. He was not designing them for black people. He was not designing them for people who were taking public transit. So these technologies have built into them a sense of who they were for and the politics behind them. Now, you'll note, I didn't say that Robert Moses was a racist. That's where most people go with this story. Most people essentially say Moses was racist. He didn't like black and brown people.
He built the technologies to ban them. I wouldn't go nearly that far. I think what happens when technologies have politics is that the politics are a lot more subtle. They are not that you are Democrat or Republican. They are not that you are left or right. They are that you are making certain assumptions of what people are going to do with your tools and you usually aren't conscious of what those assumptions end up being. So let's talk about another place that technologies have politics. Facebook. One of the big things that Facebook did came in, let's call it, the second wave of user-generated content. First wave, folks like me, folks like GeoCities, here, come have a space on the web. Frankly, we don't care who you are. Just come. We'll figure out how to make some money off of you. My best users, as it turned out, were Malaysian political dissidents because there was no other way to publish political content in Malaysia at that point. So they flocked to our platform. We ended up hosting the entire reformasi movement through the 1990s before Anwar Ibrahim ended up in prison. Facebook looks at this and goes, Tripod is a sewer. Like these people are creating content. There's no authority over it. We got to clean this up. The best way we can clean this up is we should find out who's using this tool. We want people's real names. This seems like a really good idea. People will treat each other better if there's no anonymity, if we know who everyone is. Who is a real name policy bad for? It's really bad for drag queens. It's really bad for people who don't use their real name in everyday life, who use a different and alternative identity. Who else is it really bad for? Anyone know who this is? Wael Ghonim. Wael is an Egyptian software developer. He's running a Facebook group called We Are All Khaled Said. Khaled Said is a man who gets beaten to death by the Egyptian police. His Facebook page becomes the rallying point for the Tahrir Square movement. But Wael has a serious problem. It would be a really bad idea to be known as the guy running the Tahrir Square movement. So he's registering and trying to run this page under aliases, which Facebook keeps taking down. Finally, what Wael figures out is he finds an Egyptian friend in Canada who can run the group under his own name without Facebook taking it down and without him ending up in prison. I am not saying that Facebook is against drag queens. I am not saying that Facebook is against Egyptian protest movements. What I am saying is that when you make that decision that your policy is going to be a real name policy, you end up affecting people that you probably didn't think about and you probably didn't imagine at the time. Because technology has politics. So if you go read my article in the Atlantic about this, ironically enough, you will get hit with a pop-up ad. And so I've been thinking about what are the politics of the ad-supported internet? What's inherently built into the system that we don't think enough about? The first is the notion of an attention economy. So crazy quote. Herbert Simon, brilliant social scientist, works in like 19 different fields. He gives a talk in 1971 where he says, as we head into the future, what we're really heading into is scarcity. And the thing that becomes scarce when information becomes rich is our attention. Every time you create a surplus of something, you're going to create a deficit of something else. And what we're going to have a deficit of is our ability to pay attention.
Now, Simon is not reacting to the internet. No one knows what the internet is in 1969. He's reacting to that dreaded technology, the Xerox machine. Simon is an academic and what he realizes is when other academics can Xerox off their papers and send him a copy of them, he is going to be flooded under stuff that he is suddenly obligated to read. And he is so freaked out by this, he stands up and gives an address where he warns about information overload. We are now well within this. The only thing that is scarce in the internet is our ability to pay attention to something. And what ads are is a demand on your attention. You want to do this, the ad wants you to do this, you really want to read my article because you go, this guy is sort of interesting. And suddenly the Atlantic wants to say, come claim your two free issues now. Ads are inherently conflicting with users' intentions online. Now, not all ads, right? Big difference between two kinds of ads. If I go into Google and say, hey, remember two years ago with all that snow, my roof collapsed. Might be a really good time to try to fix that. I am telling Google what I want. And Google responds by saying, hey, here are some nice commercial roofing companies. By the way, if you click on one of those ads, it is roughly ten bucks. Those qualified leads, like showing up as a qualified lead for a roofing contractor in the right area, ten to twenty dollars depending on what market you are in. It is one of the most expensive ads you can click on the web because it is probably worth somewhere between two and five thousand dollars if you end up being a converted customer. But it is a very consonant ad. I have said this is what I want. Google says, let me help you out. We feel pretty good about the experience. This ad on another one of my articles on the Atlantic? Not so much. I am not interested in conquering college. If anything, I am really interested in making sure that my students don't use this service to try to get their way through my classes. And so in grabbing my attention away from my article and trying to help students subvert what I am trying to do as an educator, they are trying to construct a desire. And this is why it shows like Mad Men are sort of interesting because advertising is a fundamentally unheimlich field. It makes us uncomfortable. It is all about constructing desires that we don't yet have. And this is not to say that it is inherently unethical, but it is to say that it is in tension with what we are trying to do as far as seek out what we are looking for online. This tension has been very constructive. It is supported in the newspaper industry as we know it. It is gutted in the web off the ground. But it is a tension that always makes us competing for attention with what we are actually trying to put online. So ads don't work, right? How often does this ad get clicked? Display ads get clicked a vanishing percentage of the time. It has become very fashionable to say, yeah, web ads don't work, but mobile ads do work. Look at how often people click them. What is the last time any of you voluntarily clicked on a mobile ad? What is the last time you accidentally clicked on a mobile ad? All the fricking time. We are not quite smart enough to figure out how to avoid them. We will figure out how to avoid them. No one wants to pay attention to display ads. And so what is happening is we have this incredible arms race that says, oh, you didn't like this ad, but that is only because we don't know enough about you. 
If we just knew a little bit more about you, we would give you an ad that was exactly what you wanted and at some point you are going to like ads because we are going to know more about you. This is what Facebook knows about me. It is a lot. Some of it is not right. A lot of it is. Yeah, I do like books. Yeah, I study economic inequality. Yeah, I assign homework. I have no idea what the hell is up with Wolverhampton City Hall, but all the rest of this stuff is accurate because they are basically either pulling it out of my browser environment or they are pulling it from information that I have directly given to them. And even despite this, they are not able to give me particularly effective ads. I have seen more goddamn Donald Trump ads in the last couple of months despite the fact that they have accurately figured out that I am in fact very liberal and therefore very unlikely to come over. There is this arms race to sort of say, yes, we understand advertising totally fricking sucks, but we are going to make it suck slightly less than the other person. And the way that we are going to make it slightly less sucky is by taking as much information from you as possible. And once we know absolutely everything about what you do and what you want, we will be able to give you an ad that is not quite so awful. My friend Maciej Cegłowski calls this investor storytime. He basically suggests that you think of it as the world's most targeted ad. It is designed to be clicked on by one person, which is to say a venture capitalist. So when you build a system that essentially says, hey, not only am I going to invite my users to share photos, but that I am going to apply deep learning analytics to it so you will understand what this user photographs and we are going to combine that into a psychometric profile and we are going to make it much, much more likely that we can target ads to that person, it doesn't matter if that shit works. All that matters is that some VC will write a check for $20 million. It is a targeted ad. The goal is to tell as best as possible a story to an investor and it fricking works. As he points out, there are companies out there like Quora whose competition is Wikipedia. Now I am just flat out stealing lines from Maciej, but as he points out, Wikipedia has to run fund drives to keep from losing money. And Quora is able to raise $80 million based on the idea that somehow, based on your question-asking behavior, we will be able at some point in the future to target ads to you like no one ever has before. So it would merely be funny if it were not also corrosive. Because the same systems that we are using to figure out how to marginally better target an ad to you, those systems are being intercepted by the NSA. Literally, we know from some of the documents that some of the same information that is being used for ad tracking is now being used in surveillance systems. And for the most part, we are not out in the streets marching about this because in part, through the fault of me and everybody else from my first generation of the web, we are used to this notion that everything online is free and that what we pay with for it being free is ourselves. And we simply accepted it and we don't review it the vast fricking majority of the time. The real problem with this, and again I am just flat out stealing from Maciej, go watch his talk from Beyond Tellerrand, it is vastly smarter than anything I have to say today. But he ends up arguing in some later works that the real problem is data as toxic waste.
Who has got a Yahoo account? Who has had one over the years? Come on, everybody should have a hand up, right? They got compromised. They didn't let us know for two years. Where is that data now? I don't know. Does Yahoo know? I don't think they know. Yahoo at some point may offer a month off on a credit monitoring service. Gee, that's going to be helpful. All this data that gets locked up in these different platforms, this is monetized as an asset. But the other way to think about it is a giant liability that's out there. And that's information that's out there that someone can start using to impersonate us, to try to target us in ways more like a security service than like an advertising platform. Could we have done better? I suspect we could have done better. Here's a like casual late night list of other models that we might have dug into in a serious way when we were trying to figure this out in early 1996. There were people trying to build micropayment systems. And micropayments are hard because transaction costs are really high. But there's ways to think about this where you essentially store over a long period of time. You bill once a month, you might find a way to deal with this. Subscriptions. It was probably too early for subscriptions. We would have had a lot of people resisting paying money for this, but certainly at this point subscriptions in many cases is a good way to go. We tried fee for service. We had some luck with it. We got some people to upgrade. Certainly much more realistic now. Miche runs a company called Pinboard. He does a one-time fee. You sign up for his excellent bookmarking service. It costs about five bucks. You never pay for it again. He has very good policies on where he goes with it. You can offer a lot of this stuff for free. You can put down a paywall and start beyond that. One of the ones that I'm finding the most amazing right now is what I've started calling the love economy. I spend a lot of time listening to podcasts. I commute from western Massachusetts. I drive all the freaking time. That whole culture comes out of the public radio culture. Public radio in the US never had the opportunity to put down a barrier and say, no, you can't listen to this unless you pay more money. What they did instead was say, hey, you love us. We love you. Please help us keep doing what we do. Won't you give us some money and let us support this? Actually, it kind of sort of works. There's amazing things, like the Welcome to Night Vale podcast that I end up paying five bucks a month for that's now ended up being this sort of international phenomenon, spawning other podcasts, putting out books. Amazing and really based on this sort of affirmative decision to support, rather than stealing my attention and demanding that I do it. Friends of mine are building self-monitoring systems. These are systems that you can put in your browser that track where you're spending your time and allow you to decide, are you going to donate money to the things that you support, to the things that you think are important? So what I want to suggest is that all of us, to one extent or another, are involved with building systems and putting them out in the world. And I just want to suggest some principles that I think help as we think about building systems that are better and fairer for the people that use them. Data minimalism. What do we actually need to know about the user? 
If we treat data as toxic waste rather than as some sort of magical asset that we're going to monetize at some point in the future, how do we know as little about our user as conceivably possible? When we do know things about our user, what rights can we give the user to the data? At minimum, those rights have to include the right to review it and the right to delete it. You've got to find a way to get out of the system. None of this LinkedIn bullshit where you cancel an account and it's all still ready there when you come back six months later because they didn't remove anything. We've got to think about interoperability. Everyone wants to lock users into their platforms. It's basically a way of preventing competition from happening. You want to make it possible for someone to take their data and move it to somewhere else because just having export without an interoperability right, it's not an actual right. It doesn't actually get you there. Transparency. So many of these companies don't tell you what their business model is. Their business model, if you can't figure it out, is you. You are the product. If you can't figure out what they're selling, they are selling you. We have to be honest about what that model is so that people can make decisions about is this company trying to help me follow my intentions, get done what I want to get done, or are they trying to trap me and use that data in a very different way? Why am I telling you this? For the most part, you are not necessarily people founding your own companies, but if you are, this is particularly important. When we build these systems, we are the people putting our politics into the tech that we are building. We're not the only ones. We're working with other people. Their values come into play. We're working within an ecosystem. The values of that ecosystem are enormously powerful, but we are in a position to advocate for users. And what I am asking people to do is think about those ethical decisions that go into building the systems that we build every day. Who does our tech make more powerful? Does it make our companies more powerful? Does it make our users more powerful? Is someone else empowered coming out of this? If you are running a company, I would argue that it is absolutely incumbent on you to be thinking about this in these terms. But even if you are a foot soldier in the great Web 3.0 or wherever the hell we are wars, this is important as well. Because otherwise, at some point, some late night TV host is going to get up and make fun of you for some shockingly stupid decision you made 20 years ago because you weren't thinking about this larger question of what are the deep ethics and what are the deep politics of what you are doing? So I offer myself as a cautionary tale. I hope I made your morning at least a little brighter, if not provocative in some way. I am happy to have a conversation if we have time. Thank you. Yeah, anybody? Yeah, please. Oh, interesting. Okay. So just repeating for the cast here, the idea was that there is an essay that notes that a culture that feeds on ads is a culture feeding on its own excrement. I don't know the essay. I will say that generally speaking, the most valuable things out there in life don't have to advertise. I don't know if any of you have ever worked with consulting firms, but I used to do a lot of work with consulting firms and what I would discover is that my clients were sort of shockingly mediocre. 
And I finally talked to the consulting firm and they said, yeah, really great companies don't need consultants. Really great products tend not to advertise a ton. They tend to sort of build their buzz on their own. So in many ways, this sort of whole notion of let me try to build desire is very different from this sort of intention-based advertising where really all you're trying to do is connect someone with something that they already want. So yeah, I'm not sure I would put it quite that way, but I actually am not going to disagree with that. Please. I just have a little bit of a lineup. Pat's server and 5.0 were by a good morning. Yup. And it was built for class 5 ad 5. And this, we're talking late night for early night. All of those folks were existentially afraid of the cycle. Wow. Everything available. Let's move forward to last week. Yeah. It's Paul, right? So Paul's wonderful windup for this question, right, is that inside Plone is tech that in many ways was designed to try to protect newspapers from the rise of the digital economy back in 94 and 95. There's now an argument in the newspaper world that actually radically embracing digital was the worst thing they could have done that actually if they had just stayed on the paper platform, they might be in a stronger place. And I actually think that argument is probably right. I don't think, I think a lot of these cats are very hard to put back in the bag, right? So I mean, I think the problem with this is that I don't think we're going to seamlessly pivot away from display advertising, even if everyone can agree that it's wretched and a bad idea. What I do think we can do is sort of look individually at the decisions that we're making right now and sort of take two decisions. One, we can sort of say, yeah, we've been down that path doesn't work so well. Let's not do that again. And then I think the other thing that we can do is sort of say, can we learn from the larger process? Okay. Yeah, maybe we should be digital. A lot of people are going there, but it looks like the revenue and digital is much lower. That's going to undermine our newsrooms. We have a civic duty on this. Let's not go too far down that line. It's possible that we could have made that decision differently. No, I don't. And if we were doing this for the newspaper industry, you know, there are people writing articles that are like, here are 80 different models that you can do to do this. What almost no one wants to talk about is, you know, the models that actually work, which are some combination of donation based, which is how really good journalism in the U.S. tends to work, or state supported, which is how really good journalism in Europe tends to work. And you can't say that in the U.S. without, you know, being dragged into the streets, particularly at this moment in time. But it's probably something that we're going to have to think about if we actually want news in a functioning democracy. No, tell me about it. Yep. Yeah. Right. So, and sorry, this is the brave browser. So, this is a browser that integrates ad blocking within it, but it's allowing you to monitor your own behavior and try to figure out where you're going to distribute money. I have a colleague over at MIT who's also working on a plugin, basically requires a single line of JavaScript on your site to be able to accept donations from this platform based on sort of monitoring your behavior and figuring out where to go. I mean, we need an enormous amount more thought on this. 
The whole ad serving industry is this giant multiple billion dollar industry with many, many, many, many smart people trying to figure out how to raise the click rate on an ad that no one wants to click by some miniscule percentage. And it's an incredible waste. If we could find some way to essentially say, yeah, ethically, I'm not going to do that. I'm going to look for other revenue streams. I run these days a, a, a modicized website called Global Voices. It's basically a collective of about 1500 people around the world reporting on what's going on in civic media and social media in their countries. We made a decision years ago not to take advertising. And we took it in part because it's just not appropriate with what we're doing. We've had to do other crazy things to make money. One of our best revenue sources is we run a translation bureau because so many people who are involved with our projects speak English in another language. And so we actually run a very, very large fair trade translation service that helps support what we're doing. If you get rid of the easy and bad ways to make money, you start coming up with really interesting and strange ways to make money. Last question because I know that we got to, we got to go after this. Uh-huh. Right. Can ad tech be used for social good? I think that there's ways to take advantage of systems that already exist and try to do good things with them. But I think in truth, like the actual architecture of how ads work, we are programmed to learn how to avoid them. And I would so much rather people try to figure out how to create viral content to, to try to deal with things like trying to help people stay in school. I'm grateful for things like the ad council that try to give ad space to good causes. I'm happy to take advantage of them when they exist. I think at the end of the day, I think advertising the way that we know it now, I have a hard time believing that it's going to be a major economic force in 10 or 15 years. It's so badly aligned with what users want and frankly with what advertisers want. Thank you all so much. I really appreciate it. Thanks for coming out.
|
Digital marketing platforms empower marketers with content-related capabilities such as personalization and targeting, data analytics, test-and-learn, and omni-channel campaigns across email, mobile, and more. Ethan will discuss the social, cultural, political, philosophical, and moral implications of these technologies. How can we harness them for good? How can we overcome their negative effects?
|
10.5446/55304 (DOI)
|
Welcome, I'm going to get started. This talk is titled Greater Than the Sum of the Parts and it's about a project where we integrated Pyramid and Plone and React and a bunch of other things. My name is David Glick and I've been a part of the Plone community for nine years now. This is the shirt from the 2006 conference in Seattle which was actually a year before I got involved but that's where I live and I picked up a t-shirt from my former employers who had hosted that conference. This is my first conference in a few years so it's good to be back and see you all. I'm currently doing work with Jazkarta and with a company called OddBird that does non-Plone-related web things. This is a talk about the project we did for the Washington Trails Association. I hope my voice holds out, I have a bit of a cold. We built them a volunteer management system. So WTA is a large non-profit in Washington state and they do a bunch of things. They maintain trails, they advocate for the state to protect them, they promote hiking. They're celebrating their 50th anniversary this year. The project we did was a partnership between Jazkarta and Percolator Consulting, which was their Salesforce consultant. So this is a talk about building a complex system out of smaller parts. And I gave it the title, you know, greater than the sum of the parts and that makes it sound like, oh yeah, this is what you want to do. You know, you take a bunch of run-of-the-mill parts and you put them together and you get something really awesome. But it's not quite that simple. There are trade-offs. If you take something simple like that toy car on the left, it's got some advantages. It's unified, it's straightforward, it's easy to reason about. On the other hand, there's only one way to do it. It's not very flexible in terms of how you use it. A complex system is more like this Lego car on the right. There's lots of parts. You can put them together however you want. Some of them are going to work better than others. The advantages are that it's flexible and you're not locked into one way of doing it. But there's more potential for both, you know, great success or great failure and it could be a bit of a puzzle to put it together. Part of the thesis of the talk is also that the tools you use matter. The choices we make about tools have an impact on how successful we are and on how much fun we have using them. So while I said greater than the sum of the parts, the point is really that if you're going to build something complex, it's going to be painful. But if you pick the right things to do it with, it can help make up for some of that. That also means that this is a talk for developers. I'm going to talk about the architecture we used. But then I'm going to dive in and I'm actually going to show a little bit about how Pyramid and React in particular, about how they work and why they were a good fit for the project. So this is the WTA website, WTA.org. It's a Plone website. This was there before we started the project. It was built by Groundwire a while ago. And it's a very popular resource for hikers in Washington state. They come here to search for hikes. They can keep track of where they've gone hiking, submit trip reports so that other people can find out what the trail conditions are and that sort of thing. But this is just a part of a broader set of tools. By the way, Charlie, can you raise your hand? Charlie is from WTA. He's our main contact there. Glad to have you here. The website is just part of the technology.
Before the project, there was the website. There was Salesforce, which is used for keeping track of donors, keeping track of who are members in the organization, keeping track of who's a volunteer. And then there was also this thing in the bottom right, which was the volunteer management system. This was a completely separate website. Obviously, not styled very well. It was sort of a decade old pile of pro scripts. So that was what the goal of the project was, was to replace that with something nice and modern. So let me just show you a little bit about what we ended up with, what we created, and then I'll talk about how we built it. So you can find a work party. We've got this nice, fasted search. You can search by region or other characteristics, see what's coming up. These are trail work parties, so chances to go out and be a volunteer fixing up the trails in Washington state. And we can show the results as a list or a map or a calendar. There's a registration form. So once you pick one of these, you fill out the form, you can sign up yourself, you can sign up friends or family. You can put in, you know, whether you're looking to carpool to the trailhead. And you can also sign up to get an account on the WTA website if you don't have one already. There are a bunch of tools for crew leaders who are organizing so they can see a roster of who has registered for a particular work party, see who's on it, how much skills and experience they have. There's also a message board so they can communicate with the people on the crew. And there's also some other, you know, forms and things that they can access. There's also reports available to WTA staff, to the crew leaders, and also to representatives from the various organizations that manage the public lands. And so you can come here and see, you know, how much work has been done on a particular set of trails in a particular set of time. So we were looking at the plans for all this and asking ourselves what is the right technology to use. During the entire thing in Salesforce was an option. Salesforce is a platform that supports a lot of things. And that was where WTA staff wanted to enter data and manage it. So in some sense, it made sense to build the user facing front end there. But on the other hand, as a completely hosted environment, Salesforce can be a little bit awkward for doing rapid development. You have to run everything, you know, they enforce test coverage, which is good, but you have to write your code in Apex and the tools for doing visual things are, it just, it limits your options. And there's also restrictions on the licensing, which meant that if you're, if you, means if you're building something that's going to be used by, you know, a large number of people that can start to be a factor. And it was also something that we considered. And it could have been a natural choice because that's where all the user accounts were. People are already logging into the site when they're searching for hikes and that sort of thing. But in this case, we figured that the features that we were building didn't necessarily overlap a lot with Plone's content management features. And there wasn't necessarily a lot of benefit to having it be one unified thing in the CMS. So this is the architecture we ended up with. You can see the CMS on the right, that's just Plone, normal. On the left is VMS, the volunteer management system. So what we did was Salesforce is down there at the bottom. That's where all the data is stored. 
That's where WTA staff are entering things in. We have a pyramid app, which sits in the middle, and that takes care of a couple of key things. It pulls data out of Salesforce once a minute and indexes it in an elastic search. And we use that to do the fasted search for the work parties without hitting Salesforce because Salesforce has pretty limited restrictions on how often you can call their API unless you pay extra money and that sort of thing. And then it also takes care of just access control. So Pyramid communicates with Salesforce via a REST API. It's a privileged Salesforce user that can do a lot of things. So we can't just use that from the client side. So we have a bunch of API endpoints that we built in Pyramid that will make sure that a particular user can only do what they're supposed to, and then it sort of relays things like a command to register over to Salesforce. Up in the browser on the VMS side, we used React.js to build a nice polished UI that is interactive and a nice thing to use. So that is what we built. Let me tell you how we built it. Specifically let me talk about Pyramid for a little while and why we like it. This is sort of going to be a smorgasbord of different things about Pyramid. So like I said, a lot of the app is just endpoints that serve JSON. I won't call it a REST API because it's more of remote procedure call. We sort of needed more control over how much interaction is going to happen with Salesforce and that sort of thing as opposed to just exposing a broad API that does a lot of things. But in any case, Pyramid's view configuration system is very nice. If you look at this code for registering a view, you'll see things that you recognize from Zope, registering it for a particular context in this decorator here. We're giving it a name. But you'll also see some other things. Both of these are used with the same name, register. One of them, the one at the top, is the one that shows the registration form. The one at the bottom, you'll see it has request method post. That means it's only going to be used if it's a post request. So this is the one that handles the form being submitted. And that's called a view predicate. Pyramid's very flexible about being able to register different conditions under which different views will be used. I also like these view decorators because it keeps the view configuration right next to the code that's going to be run. You don't have to keep flipping back to a ZCML file or whatever. Unlike some micro frameworks, in Pyramid, when you put that decorator on a view, it doesn't magically register it right away. It just stores some metadata. The actual registration happens when you start Pyramid up and some code like this runs and we create a configurator and we tell the configurator, go and scan that views module and register any views that are in there. That's a good thing. It means there are no global state in Pyramid, for the most part. And that's good. It makes it possible to test just part of the system. You can scan in your test, just scan the parts that you want. Also, this is something that only is relevant sometimes, but if you want to run two different Pyramid apps in the same process, it makes it a little bit easier to do that. Avoiding globals is not the only thing that makes Pyramid views testable. If you look at this view, you see it just returns a dictionary. That is generally true of Pyramid views. 
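To make the view registration pattern just described a little more concrete, here is a minimal sketch of two Pyramid views sharing a name but distinguished by the request_method predicate, registered via config.scan(). The names (WorkParty, the template path, the return values) are illustrative assumptions, not the actual WTA code.

```python
# Minimal sketch of Pyramid view registration with a request_method predicate.
from pyramid.config import Configurator
from pyramid.view import view_config


@view_config(context='myapp.resources.WorkParty', name='register',
             renderer='templates/register.pt')
def register_form(context, request):
    # GET: render the registration form for this work party.
    return {'work_party': context}


@view_config(context='myapp.resources.WorkParty', name='register',
             request_method='POST', renderer='json')
def register_submit(context, request):
    # POST: the request_method predicate routes form submissions here.
    return {'status': 'registered'}


def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.include('pyramid_chameleon')  # for the .pt renderer
    # The decorators above only record metadata; this scan of the current
    # package performs the actual registrations at startup.
    config.scan()
    return config.make_wsgi_app()
```

Because the views just return dictionaries and the renderer turns them into a response, a unit test can call register_submit() directly and assert on the returned dict without parsing any HTML or JSON.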
That means that in your test, you just run a view and you get back a dictionary and it's pretty easy to check and make sure it has the right thing in it. As opposed to the sort of views we're used to in Plone, where it actually returns a rendered response and then you would have to parse that if you wanted to make assertions about it. There's a renderer that's mentioned in the view config. The dict that we return from the view is going to get passed to the renderer. It could be a JSON renderer. It could be a template. It could be something else. That will take care of actually creating the response from what the view returns. A couple of other boxes that Pyramid ticks: it runs on Python 3 and has really good documentation. Then there's this set of things: I like Pyramid because it's a framework that's good at creating frameworks. Why was that important for this project? Like I said, we're using Salesforce as the data store. We aren't using something like a SQL database or the ZODB, where there are established patterns for how you do this in Pyramid. We sort of need to build some infrastructure ourselves. A few ways that Pyramid made that pretty nice for us. Request properties. We're going to be using the Salesforce client all over the place in different views. It would be nice if we could just say request.salesforce and get it, as opposed to needing to import something from somewhere. Pyramid makes this really easy. In your configuration, we create our Salesforce client. We have a method that returns it. Then we just say add_request_method and call it salesforce. That will take care of making sure that every time we access salesforce on the request, it's a lazy sort of thing. The first time you get it, it'll do whatever. It'll call the method and get it. Then it'll cache it. If you use it again in the same request, you'll get the same object. That's handy. Pyramid also has tweens. A tween is sort of like middleware that lives within your application. It's going to be processing every request as it comes in. It can do something with it if it wants to. We use this for doing single sign-on. When you try to log in to VMS, if you're not logged in already, it'll redirect you over to the Plone site to the login form. Then you go through the normal Plone login process. Plone will see if you came from the VMS app. If so, it will redirect you back and it'll put a little token, a JSON web token, in the query string. We have this tween, which gets called for every request. It just says, if the SSO token is in the request's GET variables, then we do what we need to to handle it and set up a session here in Pyramid. Otherwise, we just return handler, which is the next thing in the chain of tweens that leads down to the actual application and it'll continue processing like normal. Another frameworky thing we had to do was because we were serving all these JSON views, we wanted to make sure that if we had an error, we would actually return JSON if the browser had requested that. With Pyramid like with Zope, you can register exception views. If the application hits an exception, it'll look to see is there a view registered for this type of exception. We registered just a generic one for any exception, but we put that predicate on there for whether the browser is accepting application/json. This is going to be used in that case only. If you're loading an HTML page and we hit an exception, then we render the HTML. If you're getting JSON through Ajax, then it will hit this.
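A sketch of the two "frameworky" pieces described above — a lazy request.salesforce property and a single-sign-on tween. The class, settings keys, token name, and dotted paths are assumptions for illustration, not WTA's real configuration.

```python
# Sketch: lazy request property plus an SSO tween in Pyramid.
class SalesforceClient(object):
    """Stand-in for the real Salesforce REST client (assumption)."""
    def __init__(self, username, password):
        self.username = username
        self.password = password


def salesforce(request):
    settings = request.registry.settings
    return SalesforceClient(settings['sf.username'], settings['sf.password'])


def sso_tween_factory(handler, registry):
    def sso_tween(request):
        token = request.GET.get('sso_token')
        if token is not None:
            # Validate the JWT issued by the Plone login form and set up
            # a session for this user (details omitted in this sketch).
            pass
        # Hand off to the next tween, and eventually to the app itself.
        return handler(request)
    return sso_tween


def includeme(config):
    # reify=True makes request.salesforce lazy and cached per request;
    # the attribute name defaults to the callable's name.
    config.add_request_method(salesforce, reify=True)
    # add_tween takes the factory as a dotted name; adjust the path to
    # wherever the factory actually lives in your package.
    config.add_tween('myapp.tweens.sso_tween_factory')
```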
It'll format it as a dictionary and return that, rendered as JSON. Could somebody go get me a glass of water or something? Thanks, Kim. Or Sally. Pyramid has, as I've talked about a little bit, the configuration system. That makes it possible to have packages that provide some preexisting functionality that you can include in your application. These are some of the ones that we used. Pyramid Chameleon is the Chameleon template system, which we used for some of the templates, the places where we weren't using React. Pyramid Layout was created by Chris Rossi, who's been in the Plone community some. It provides a main template thing, so you can use the same layout for your template, but just insert your actual content into a portion of it. Pyramid CacheBust is helpful for including CSS and making sure that whenever the file changes on disk, we'll serve it with a unique identifier so that it breaks out of any caching that you've got. Thanks, Sally. And Mailer is useful for sending mail out of your Pyramid site and making sure that it is tied in with the transaction system, so mail only goes out if the transaction succeeds, that sort of thing. There was this really nice thing that happened because we used Pyramid for this. And this is really less of a property of Pyramid in general and more something that was true of the particular application we were building. But because we weren't using a database really at all, we were just, you know, using this as an intermediary to call Salesforce, the memory footprint was pretty low. That meant it was pretty easy to run a whole bunch of threads in our, what do you call it, WSGI server. And this was important because when WTA opens registration for work parties, it happens on a particular day. And there are people who are waiting to register for the very popular volunteer vacation work parties, where they get to go out and go camping and volunteer for a week. So they get a lot of requests within about 10 minutes right when that opens up. So this made it a lot easier to deal with that. If we had built this in Plone, you know, the first half dozen requests would have come in and then it would be sitting there waiting for a couple seconds to get a response from Salesforce and meanwhile nobody else can get in. They just have to be queued up. And this way we could have a bunch of threads going, handling those. There were a few things that I don't think are perfect about Pyramid. Overall, I've really enjoyed working with it. Some of those things are, it doesn't automatically provide CSRF protection like we have in Zope now. So you have to sort of remember to turn on the CSRF check on each view. I think Donald Stufft has done some work on this as part of the warehouse stuff he's working on for PyPI. I can't remember if it's been merged yet. But maybe it'll be working better soon, or at least have better facilities for doing that if you want to. The exception views that I mentioned, they are themselves handled by a tween. That means that if you have your own tween that sits outside of that and it hits an exception, it's not going to go through the normal machinery and you're going to get, like, an ugly exception from whatever your WSGI server is. So just something to watch out for when you're ordering your tweens. And then just as a more general comment on Pyramid, it's, I mean, it is a some-assembly-required sort of system. You're going to have to make some choices about how you want to use it and what pieces you want to put together.
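For reference, here is a minimal sketch of the kind of generic JSON exception view with an accept predicate that was just described. The payload shape and logging are assumptions; it would be picked up by the same config.scan() shown earlier.

```python
# Sketch: exception view that answers with JSON only when the client
# asked for application/json; HTML requests fall through to the normal
# error handling.
import logging

from pyramid.view import view_config

log = logging.getLogger(__name__)


@view_config(context=Exception, renderer='json', accept='application/json')
def json_error(exc, request):
    log.exception('Unhandled error')
    request.response.status_int = 500
    return {'error': 'Internal server error'}
```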
That's a strength if you want to use particular pieces and it's a weakness if you just want something that works. One thing that in particular I found was annoying to have to think about was setting up logging and making sure that logs were going to a reasonable place. I'm going to talk about testing Pyramid a little bit because I really enjoyed the flexibility that it gave us. And we did a number of different kinds of testing. We used the pytest framework, which people seem to either love or hate. I like it. It's a bit magical, but it keeps your tests clean and concise. pytest makes it really easy to define fixtures. A fixture is just a function that returns something. But then you can see the second one takes the parameter work_party and pytest is automatically going to say, oh, there's a work_party fixture. I'm going to call that and inject it in there as the basis for your work party test. So it sort of provides similar functionality to what test layers do when we're testing in Zope and Plone, but it's a little bit more granular. You can include multiple different fixtures for a particular test. The other magic thing it does is around assertions. So normally when you do an assertion in Python and it fails, it just says AssertionError and doesn't tell you anything useful. And pytest makes sure that it'll actually say, oh, that thing was not equal to that thing, so you can debug your test. Because we were doing all this talking to Salesforce, we wanted to be able to test things without actually talking to a live Salesforce instance. So we did a lot of creating fixtures using Factory Boy. And Factory Boy lets you define a factory for your fixture, and then anytime we need to get a work party for a test, we just call this thing, and it's got a sequence there that will, you know, increment something every time it gives us one. It's got a thing that generates a random date and time between two endpoints, and it's got a subfactory that, you know, refers to some other factory. So it just is a useful toolkit for building things for testing. We did some testing of functionality at the HTTP level, because this is sort of an API server. WebTest is the thing to use for this. It gives you a wrapper around the WSGI app that just is sort of like HTTP, so you call get on it, and then you can get your response, parse it as JSON, and make assertions about that. We also did some testing in the browser, so this is similar to what you get with Robot Framework and Plone. We used pytest-bdd and behaving. pytest-bdd is the thing that lets you write a test in this style where you write sort of a specification of, you know, when I do this, this should happen. And behaving is providing the particular, excuse me, the particular commands to interact with the browser using Selenium. I think behaving was created by Yorgos, who's an old Plone guy who works with Geir on Crypho these days. The final sort of testing we did was some load testing to make sure we could handle all that traffic when registration opens up. We did that using a tool called locust.io, which is just an open source Python package. It lets you write some Python code that says, you know, go load these HTTP endpoints, basically. And then when you run it, it will fire up a bunch of workers, it will do that in parallel, and then it will give you some reports of how it did, how many errors there were, that sort of thing.
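A small sketch of the pytest-plus-Factory-Boy style of fixture described above. The field names and values are invented for illustration; the real project built Salesforce-shaped records.

```python
# Sketch: a Factory Boy factory used as a pytest fixture.
import datetime

import factory
import factory.fuzzy
import pytest


class WorkPartyFactory(factory.DictFactory):
    Id = factory.Sequence(lambda n: 'WP-%04d' % n)
    Name = 'Mount Si trail maintenance'
    Start_Date__c = factory.fuzzy.FuzzyDateTime(
        datetime.datetime(2016, 6, 1, tzinfo=datetime.timezone.utc),
        datetime.datetime(2016, 9, 1, tzinfo=datetime.timezone.utc))


@pytest.fixture
def work_party():
    return WorkPartyFactory()


def test_work_party_has_id(work_party):
    # pytest finds the work_party fixture by name and injects it here.
    assert work_party['Id'].startswith('WP-')
```

And a sketch of a locust load test in the same spirit; note this uses the current locust API (HttpUser), which is newer than what existed at the time of the talk, and the URLs and timings are made up.

```python
# Sketch: locust user that hammers the registration endpoints.
from locust import HttpUser, between, task


class RegistrationUser(HttpUser):
    wait_time = between(1, 5)  # seconds between simulated actions

    @task
    def browse_work_parties(self):
        self.client.get('/workparties')

    @task
    def register(self):
        self.client.post('/workparties/register',
                         json={'first_name': 'Test', 'last_name': 'Hiker'})
```

Run with something like `locust -f locustfile.py --host=https://vms.example.org` (hypothetical host) and watch the request and error rates in the locust UI.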
So we actually found a problem on the Salesforce side with this where we weren't locking something properly, and it was possible to fill something up past the level that was supposed to be, you know, the capacity before you start adding people to the wait list. We found that with this, and were able to fix that in Salesforce before we launched. All right. Shifting gears, let's talk about the front end. Talk about React. React is a system for structuring your front end code into components. It encourages using a syntax called JSX, which lets you mix HTML-like markup in with your JavaScript code. So it looks like this. Here we've got a button component. It's got a render function that's going to render an HTML button. It uses this.props.label, so that's, you know, a property that was passed in. It's got an onClick handler that's going to call that click function down there. And then down at the bottom here, we're actually attaching this into the page and saying, let's create a button with the label "click me" and replace the body with it. So this is what React looks like. Now, the first reaction to this is usually something like this, depending on whether you are a developer or a designer. People don't like mixing the two. In some ways, I think this is like white space in Python. You'll get over it. You'll learn how to do it. And then it'll be nice to have the markup close to your code in some cases. There are also cases where you do have separate concerns. You want to have a template that can be overridden by somebody else. And it would be sort of nice if React had better support for that. Another downside to using JSX is that there's added complexity to compiling it into actual JavaScript. That I don't think is such a big deal. You're probably going to be doing some sort of compiling of your JavaScript anyway these days if you want to use ES6 features and so forth. So it's just one more thing to add in there. Another downside is because JSX was originally written, you know, to compile to JavaScript, it has some subtle differences from real HTML. Class is a keyword in JavaScript, so they couldn't use it. So whenever you want to write class= on something in your markup for React, you have to say className= instead. That is a practical annoyance as well as a theoretical one because it means you can't just take some HTML and copy it in to use it in React. You have to go through and search and replace. I've actually done a little bit of experimentation with creating a template engine that renders the Jinja2 syntax directly into a virtual DOM, React-style, so that you can get React-style performance. Hoping to do some more work on that beyond just the proof of concept. People get excited about how React can help with performance. And if you start using React, you'll start hearing all these things about the virtual DOM and Immutable.js and one-way data flow and Redux and you sort of throw up your hands and say, oh, what have I stepped into? I don't think you really need to worry about a lot of this stuff in a lot of cases. The main thing to know about React and performance is that when React renders these components, it constructs some just plain old JavaScript objects that represent the structure of what should be rendered into the page.
It then compares them to the actual document object model, what's in your browser, figures out where the differences are and then efficiently updates it, as opposed to a classic template rendering system where your template renders to text, you then parse that text and create or update the DOM as you go. So that alone helps with performance a lot. And there's all these other things you can do to keep React from even needing to call the render function to figure out what the virtual DOM is that it's supposed to render. But you're really only going to need that for a site where it's updating frequently or rendering a large number of things. For something like a content-centric site where you render the page once and then maybe, again, once somebody submits a form, it's not really going to matter that much. So you can start to dabble in React without learning all about these things. Another thing that people get excited about with React is isomorphic rendering, which is saying you can use the same templates on the client side and the server side. And this is true, you can do this even in Python, which you wouldn't necessarily think, because you can have your Python render something to JSON and then you can pipe that to another process which is running Node and runs React and spits out HTML and you can serve that. I have done that. Well, really, Lawrence did it and I helped on that system. This is important if you're doing a site that's going to have public-facing content and you want to make sure it can be indexed, because these days Google will try to render content even if your website is rendered as a single-page app using JavaScript, but it's not always perfect and there are other search engines out there that aren't going to do that. But for a private site, like much of the WTA site, that's not really worth the effort. What we ended up doing was the main view of a work party, we didn't use React for it. We just wrote that as a server-side template, and then the things that were more interactive, like the registration form that didn't need to be indexed, we did with React. So there's all these things that people get excited about. The thing that I'm excited about with React is that it's declarative. There's a clear representation of what the state is of the data in your application and when you update that state, it will take care of automatically and efficiently updating where that state is reflected in the rendered page. That's really nice. For me, that's a revelation almost as big as when I learned how to use jQuery for the first time. Speaking of jQuery, this is a counter example of what React helps me avoid. So I find myself doing this all the time, or at least I used to. So this is like, you've got this form and whenever this checkbox is checked or unchecked, you want to show or hide something in the form. So this is how you do it in jQuery. You grab onto selectors for each of those things. You write this function which is going to look at the state of the checkbox. It's going to toggle the class on that section. Then you need to wire that up so it's going to get called whenever anything changes in the form, and then we also need to call it initially just to make sure that the state of the form is right when the thing loads. With React, all we would do is we would have whatever thing contains this form have some state which is whether or not that thing is open. And the button would update that state and toggle it on or off.
And that state, when it's toggled on or off, would trigger automatically re-rendering anything that depends on that state. So you have to think a lot less about, when I change this, what is it going to affect? A lot more of that happens automatically and fairly quickly, even when it needs to re-render a lot of things. Now of course React isn't the only framework that supports this sort of data binding. Angular is going to do it. Probably a lot of things do these days. If you've got something that you're using that you like, I would argue there isn't such a big need to look at React. If you aren't using something like this, I think React would be a good thing to look at or to consider because of the work it's done on performance. We came up with a couple of patterns for using React within Pyramid. And I think I might actually skip a couple of these slides because I'm running short on time. And I want to talk about how we integrated the Pyramid thing with the Plone thing. So if you want to know a bit more about dealing with React, with the client side stuff talking to the back end stuff, you can talk to me afterwards. So the things related to integrating with Plone were: we have shared navigation. So this is the nav bar at the top of the site. That's all folders and pages that are set up in Plone. But it also appears at the top of the VMS system. And the way that's done is we have a REST endpoint, well, a JSON endpoint in Plone itself that spits out a JSON representation of what the menu items are. And then on the side of the Pyramid app, we just load that JSON and then render it in a React component that happens to generate the same markup as is generated on the Plone side. I'm not sure this is, you know, we could have probably just pulled in the entire chunk of markup that's used in Plone. This is interesting, I think, partly because it reminds me of some of the work that's being done on the Plone JavaScript client that people were presenting on yesterday and the idea of turning Plone into a system that's decoupled where, you know, the navigation JSON component is built in. And in fact, I think some of that exists in plone.restapi already. If it had existed and I had known about it when we built this, it would have been useful. There are things you need to think about because these systems were designed to run on separate domains. You have to think about cross-origin resource sharing whenever something is being loaded over Ajax from a different domain. There are restrictions that the browser places on what you can access. I'm not going to go into detail on these. These are the various headers and things that you need to read about and be aware of. These days I would look at plone.rest, which is the new library for registering REST endpoints for Plone. It's going to help you specify some of these things when you create an endpoint in Plone. And then I talked a little bit about, oh, this got really small, I'm sorry, I talked a little bit about the single sign-on for login. We're sending you over to Plone. We've customized the Plone login form in a couple of ways. Number one, if you're already logged in to Plone, at that point it will just redirect you back to Pyramid immediately, so you end up back there logged in. If you're not logged into Plone yet, we let you log in and then we determine if you came from VMS.
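As an aside on the shared navigation described a moment ago: loading the menu JSON from Plone into the Pyramid app can be a small cached fetch, something like the sketch below. The endpoint name, URL, and cache time are invented for illustration; this is not the actual WTA code.

```python
import time

import requests

# Hypothetical name for the Plone view that returns the menu items as JSON.
PLONE_NAV_URL = "https://www.example.org/@@navigation-json"

_nav_cache = {"expires": 0.0, "items": []}


def get_shared_navigation(max_age=300, timeout=2):
    """Return the menu items published by the Plone site.

    Results are cached for a few minutes so that every VMS page view does
    not trigger a request back to Plone.
    """
    now = time.time()
    if now < _nav_cache["expires"]:
        return _nav_cache["items"]
    try:
        response = requests.get(PLONE_NAV_URL, timeout=timeout)
        response.raise_for_status()
        _nav_cache["items"] = response.json()
        _nav_cache["expires"] = now + max_age
    except requests.RequestException:
        # Keep serving whatever we had; an empty menu beats a broken page.
        pass
    return _nav_cache["items"]
```

A Pyramid view (or, as in the talk, a React component fed from it) can then render those items using the same markup the Plone theme produces.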
And if you did come from VMS, we create this JWT token, which is basically some JSON that describes who the user is, and it's signed with a secret so that when Pyramid gets that, it can verify the signature, know that it came from Plone and say, oh, I should create my own session for this user so they can be logged in to VMS. And then we also handle going the other direction. If you click log out in Pyramid, we actually take you to the Plone log out screen so that you will be logged out from Plone also. But the Plone logged-out screen also includes a view from Pyramid that will destroy the Pyramid session. So we end up destroying the two different sessions. I believe we have a couple minutes for questions. Is that right? Yeah. If you have questions, you can reach me here. Don't forget to fill out the talk feedback from the Six Feet Up app. So thank you.
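The token handoff itself can be sketched in a few lines with a JWT library. This is only a sketch assuming PyJWT; the claim names, lifetime, and secret handling are illustrative, not the project's actual implementation.

```python
import time

import jwt  # PyJWT

# The same secret has to be configured on both the Plone and Pyramid sides.
SHARED_SECRET = "replace-with-a-real-secret"


def issue_sso_token(user_id, fullname):
    """Plone side: build a short-lived signed token describing the user."""
    claims = {
        "sub": user_id,
        "fullname": fullname,
        "exp": int(time.time()) + 60,  # only valid for a minute
    }
    return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")


def verify_sso_token(token):
    """Pyramid side: verify the signature before creating a local session."""
    try:
        return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None
```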
|
The Volunteer Management System (VMS) is a tool that Jazkarta built for Washington Trails Association to manage volunteer sign-ups for helping maintain hiking trails in the state of Washington. It allows staff to schedule work parties, volunteers to find and register for them, crew leaders to access information about their crew, and land managers to report on work that was done in their region. This talk is aimed at developers and will discuss the technical architecture of the project. Topics will include: - My opinions about Pyramid, the so-called unopinionated framework. I'll share why Pyramid rocks for this type of project and some of the choices we made. - The major benefit of using ReactJS for frontend development. (Hint: it's not the reasons you've heard.) - Techniques for making a Pyramid app work alongside a Plone website as one seamless website from the user's perspective.
|
10.5446/55305 (DOI)
|
So today I'm going to talk about another way to deploy Plone, to deploy it on another platform. We saw Docker today, we saw Rancher today, which runs Docker, and probably many others, and we will talk about a couple more. And today I'm going to talk about OpenStack and how to deploy Plone in OpenStack. I did a training yesterday. I see a couple of folks from this training. Hi guys. And I will basically show how it works, just demo how the plumbing is going on, what is the user experience of deploying Plone on top of OpenStack. But before that I want to know, do you know what cloud is? What does cloud mean? How many people are using cloud in their daily work? Fair enough. But I will still skim through a couple of slides about clouds. So basically cloud allows you to have your computer services on demand. Quickly provision them, quickly deprovision them. And OpenStack is one of the providers which you can use for your public and private cloud. And we have a couple of public clouds on top of OpenStack. One of the biggest was HP, at some point in time. And actually many, many companies from the Fortune 500 are using OpenStack as a private cloud. OpenStack consists of a couple of projects which actually provide some capabilities, like compute, identity, networking, block storage. Nova provides you virtual machines, Neutron provides you software-defined networks connecting these virtual machines together. And OpenStack also has a couple more projects adding more capabilities, extending the ability to just spawn virtual machines and cloud resources to slightly more. And today I'm going to talk about Murano, one of the projects which provides some orchestration capabilities, but it's mostly focused on the application catalog experience. So Murano is the application catalog for OpenStack. What do you imagine when I say application catalog? Something like that, right? We have many of them for your phone, for your tablet, for your laptop. And this is one of the first and biggest application catalogs so far, right? So OpenStack has pretty much the same experience. You can have your application on top of the OpenStack cloud with pretty much the same experience as your phone. Just click download and it will start running. And it obviously has two parts. One is the common community catalog located here, which provides you a way to publish your applications to the global catalog and download these applications from this catalog to your private cloud, for example. But it also has a private part. The public part, the community app catalog, contains a number of already existing, already written applications, pretty much the same as Docker Hub. And we also have Plone, which we added recently. The one which is published right now is just a single-node installer, but we are going to have a scalable Plone installation in the OpenStack catalog, which will provide the ability to scale your Plone installation on demand. Why do you need the catalog, right? There are two, three sides to that. First of all, it's a way to quickly onboard some existing application to the cloud, how to give your developers a tool to run applications on the cloud. If you're looking at the cloud as bare metal, like virtual machines on demand, it's not enough. Your developers will spend some time setting up the application on top of VMs. They need to take care of the application running on top of the VM and so on and so forth. And having an application catalog gives them the ability to quickly upload an application and give other users a way to deploy this application out of the catalog.
So talking about Murano, Murano has a couple of capabilities which I want to highlight. It supports several operating systems, specifically both Windows and Linux as a base system for your application. It gives you the ability to manage the complete lifecycle of the application, including scaling, installing, removing, monitoring and so on and so forth. It's also integrated with configuration management tools like Ansible, Chef, and Puppet, and I will talk a little bit later about the difference between orchestration and configuration management. And it has pluggable app-definition languages. So right now we support only two languages to define your application, but it's extendable. If you have your simple shell scripts, which you always use to provision virtual machines, you can plug them into Murano and have your shell script turned into an application and see it in a nice catalog with icons, a description, and a self-service portal for your users basically. So orchestration is slightly different from configuration management. When I'm talking about orchestration, it's about automation on the cloud level, the ability to orchestrate spinning up resources, installing something on top of that, like starting installation. And configuration management is mostly about how to configure these already existing resources. So Chef, Puppet, they can do configuration management on a large scale, but they are mostly focused on how to configure something on the VM itself, rather than spin up 10 VMs, connect them to the network, attach storage to them, and so on and so forth. So let's try it out. Today we'll go to the live demo. So here I have my deployment of OpenStack. It's a small cluster, it's like only four nodes, and this is how it looks, the login page to Horizon, the OpenStack dashboard, the web UI for managing your cloud. So when you're logging into the OpenStack dashboard, you have a couple of panels, including a panel which allows you to manage virtual machines. Yeah, it's back. Yep. It's back. So we can go here, see applications. This is how our application catalog can look. And if you click browse, you'll see what you would probably expect, just a page with applications. Let's take a look at my demo environment where I have a couple more applications than Plone. I have here Apache Web Server, MySQL database, Zabbix for monitoring, WordPress, and Plone to choose from to deploy. And once you see your application, you can search for your application, click quick deploy to just start deployment immediately. You see a nice wizard which will guide you through deployment of this application. Just assign a floating IP in order to be able to connect to this application later. Click next and see a page where we can select the flavor of the virtual machine which will be used for this Plone cluster. Let's start with small. Small is like two gigabytes of RAM. Let's select the Ubuntu base image and click next to actually almost start deployment. So, right after a second, we already have our environment configured. An environment is a logical entity which combines a couple of applications together if they connect and have dependencies. In this case, I have only one. So if you take a look at the topology, you will see something like this. Even though we have one application, it has two nodes because we deploy a multi-node configuration.
This application which I was talking about earlier, which is not yet deployed and which is not yet available in the community catalog, but which I have here — it's still in review for the community catalog — is a multi-node deployment of Plone. There is HA for the front-end servers but the database is only on one node. It can always be extended, obviously, because it all totally depends on the application rather than on the application catalog. So here we see our topology of the environment. If we go back and click deploy this environment, deployment will actually start. So deployment of Plone obviously takes some time. It starts with creating a number of VMs, then installing Plone on top of these VMs. So to not let you wait, guys, I have an already deployed installation of Plone. Same thing right here but just already deployed. So we can go and see how the environment looks when it's already deployed. So we see here that the last operation was finishing the configuration of Plone, no errors, which is nice. We can go to the latest deployment log and see step by step what happened to deploy the Plone. So we start with provisioning VMs. We spin up two VMs, connect them to the network, add storage and everything. Then we move to the installation phase, and the installation phase starts with installing software simultaneously on both nodes. On the first node we're installing the front-end server and on the Plone DB node we're installing the database. Installation is completed and we see what Plone is listening on, and we see the URL where Plone is listening. So we go here and we'll see just the regular create-a-new-Plone-site page. How to import applications from the application catalog to your local instance is also quite easy. We can go to the packages, manage packages, where you see a panel showing which applications are available in our catalog. Then we go to the community catalog, which has not only Murano applications but also Heat templates — Heat is a project which is a copy of CloudFormation from AWS — and also just virtual machine images, which you can technically also use as an application, right, if it has something pre-installed. Going to the community application catalog, selecting Murano packages, we can search for Plone, find the Plone server package, copy the package name, go back to Murano, click import package, select repository, paste the name, click next, and from that second Murano will try to download all the dependencies of the application, all the needed virtual machine images, everything needed to spin up this application on top of the OpenStack cloud. We can edit description, name, select a category in which this application will be available, not that it matters much which one. And that's it. Package parameters successfully updated, the application is here and we have two Plones here. One is the single-node installation and another one is the multi-node installation. If you're interested in how to actually use Murano, what capabilities it has, how to write scalable applications and so on and so forth, we have a number of screencasts, short ones from one to five minutes tops, published on the wiki page; the link is below. You can always send an email to the development team, which is quite responsive. An IRC channel is also available where developers are hanging out, including me, and there are links where you can read more documentation and so on and so forth. Thank you folks. Thanks for your questions and feedback, which you can provide with the link below.
So the difference — not like a really huge difference, but the difference between Docker and container management software like Kubernetes and so on and so forth — is that this spins up virtual machines, and a virtual machine is a computer, your laptop is a computer, so whatever you develop locally can be deployed remotely. And in terms of how specifically, with Murano, to deploy locally and how to deploy remotely, it's not really possible because it assumes spinning up some resources. So you can technically deploy OpenStack locally on your laptop. There are a couple of projects like DevStack for development environments where you can install a whole cloud on your own laptop and then spin up VMs inside of your laptop. And in this way you will have two matching environments: locally on your laptop with limited compute resources, and your actual OpenStack cloud which is bigger and allows you to spin up a number of VMs. Is there a way to manage code deploys? Do you have to create a new OpenStack application when you change the configuration of your Plone site? It depends how you're designing your application. Murano encourages you to have a self-contained application, where this application doesn't depend on anything else except itself. But you can always go to the environments where you have your software remotely and the application itself just pulls this software and then installs it on the VM, and it can pull from, for example, Git. And then you can have a CI/CD pipeline which delivers, for example, software through several branches, and the application uses a specific branch to deploy. Okay, so that's a separate level of configuration in that application. Yeah, it's like two levels where you can go. And if you wanted to, say, run multiple Plone ZEO clients, that would be something you'd also have a separate configuration for, or something like that? Not really. I can have, for example, in this node, two instances deployed. But let me try to go back to my environment and I will show how easily you can scale this. So if you go here and take a look at the already deployed environment — yeah, sorry, the VPN has failed. This always happens. So you can get to — I don't want to interrupt you while it's doing that — you can actually get to, say you want to load your own Python content type, you know, and then you actually go in, stick the code somewhere like in a source directory and run buildout, get it picked up? Not sure how to do that. So here you have scale in and scale out. When we click scale out, it will start adding a new instance. Okay, so those are new features. Yeah, it will spin up a new VM, install another front-end server there and everything. But can you currently configure how many Plone instances run on a single VM, or is that — I mean? You can. You probably don't remember, but on the first step, when we created the application — let's take a look at it one more time — yeah, you can actually specify how many initial nodes you should have. I just chose one because of the number of resources. Okay. Probably some reasonable way to decide on the sizing of it. So the update is automated through buildout or something similar to that? Yeah. And that's something that I was talking about before. Yeah. Have you been able to set that up? No, I do it in the AWS context with Chef. Yeah. Well, thank you very much. Thank you folks.
|
A demo of automated deployments of Plone on OpenStack. Learn how to get repeatable production deployments of Plone CMS available to the users in your OpenStack Cloud and let them provision Plone for themselves.
|
10.5446/55306 (DOI)
|
So yeah, my question is: is Plone a good choice for large B2C websites? And yeah, first I'll throw in a quick introduction and an agenda for my presentation. So first I want to introduce myself, my company and my customer, or our customer. Then I want to go into the requirements, so I want to quickly summarize what we had to do. And this also leads directly into the implementation, so what did we have to do for the general requirements and the technical requirements, and how could we create a good solution for it. And at the end I want to summarize it and, yeah, I want you to help me answer the question: was it a good idea to use Plone? You find this agenda on the left side and so we'll see where we are. So I'm Lukas Gutierre, I'm a software developer and project manager, and I'm working for Interactive GmbH. We're an online agency in Cologne, and this is in Germany. We're using open source CMS systems, for example Plone of course, and Magento for shop systems, and we're also doing online marketing and web design. Our customer was Suzuki. So Suzuki is a leading worldwide automotive and motorbike manufacturer and in this case I'm showing you the motorbike site. On the 5th of October, so a few days ago, the page went online, and there was the INTERMOT in Cologne, so it's a big trade fair where they show the new motorbikes, they show the new stuff, and of course also on the new site. And this is this website. So this one was implemented with Plone 5 and now I want to go into detail about what we had to do to make this happen. So, is Plone a good choice for large B2C websites? Who uses Plone? There is a list, if you look at plone.org, of a lot of governments, NGOs — so there are some names — but the big players, they are missing. One part of the thing is that there are a lot of intranet systems, so you really don't know who is really using Plone, and maybe it has something to do with brand awareness or something else. But you can't really tell the value of what they are. So I want to talk about the quick gains for companies using Plone. So what do we do for customizations, what add-ons from Plone are already there to make this page happen, and how is the security working for us. And here at the end is the question to answer. So the general requirement was to get a strict layout for the whole page. And it's like in every project, but here it comes with specifications. So you get a lot of specifications; this is good because you have everything written down that needs to get done on this page. But it's also a bit difficult because you want to get a good solution for your customer and you need to get the technical part and the specifications together to get a good solution, and sometimes this is difficult. So for example there is multi-site management, so you have different roles of users and want to get this into several sites. You have front-end editing, so you can edit the page dynamically; it feels like you are just on one page, you see the result directly, and other things. And there is personalized content, still a big subject, and I think that's one of the most important things that's coming. Yeah, the example is: you are on the page and you directly see, I am from Cologne, I see the map of Cologne, and it just knows it and it feels like I am home. And of course there are a lot of other things. Technical requirements, so it gets a bit more technical. There is a tile system I want to go into later on. For example you need working copy, inline editing and so on. There's a content proxy.
You need RSS content in Plone, so for example you can use the search engine on it. There's image cropping, SEO optimizations, PDF generation, inline editing, so you see there's a lot to do and I want to jump into some parts of it. For example the tile system: what we did, we created our own tile system. I think you will ask yourself, why didn't they use plone.app.mosaic? As we started with the project, plone.app.mosaic was in its early stages, so it really wasn't that developed, and we had a lot of special workflows that we had to implement. And there are some custom creation processes for tiles and a special layout. So that's why we created our own tile system. So to give you a quick idea, we have a content type, tiles, it's a container and there you have rows in it and in these rows are tiles. But I will show it to you after that. And you have a special workflow, you can set it on one position, on one base position. You need an API for the front-end editing and you need modals to edit stuff that isn't editable through inline editing. So this is what our tiles editor looks like on the Suzuki page and there are three rows. So you see there on the bottom of the side there are actions, you can remove it, you can move it. So these are the three rows. It's like an image and text and here is a headline, and in these rows there are several tiles, for example this is a row with three text tiles. So it's kind of built like Mosaic, just that you get stricter tiles. So you have like a free text tile and it already builds you three texts. So we can handle a bit more what we give the customer. And if you click on, for example, an edit button, it opens the modal. As I said, if you have to edit something more, you can choose an image there, you can set a link or set an alt and title tag. For the PDF generation, we had two choices between WeasyPrint and LaTeX, which are Python modules. LaTeX you mostly know from mathematics and so on — I think the Plone documentation book is made with LaTeX. It's a really good thing. We used it in other projects but it's really large and so you need a lot of time to use it. And WeasyPrint is a module where you can write your code in HTML and CSS, so most know that, and it generates your PDF. It's really good and for us it works well because we had a simple layout and just put data in it. The inline editing was also very easy for us because Plone has TinyMCE 4 and with a two-liner you can just implement it, and there is the free text tile. You just double-click and you can edit this with the tools there. The RSS content: so if you want RSS in Plone you can put the feed in, but you don't have it as content. And this was a problem for us because we have RSS in sliders, in overviews, so in a lot of positions, and you want to use the search to find them. And so we needed to index them somehow, so we created our own content type with Dexterity for it. And there were a lot of APIs, so you get the data from here and there. For example the motorbikes — they have a special database for it, and they support the Python module Sats, which was really helpful for us and I just recommend looking into it. It's a good thing. Here comes the other part, it's image cropping. So there is a Plone product for image cropping but it's not migrated for Plone 5. So it was difficult because we needed a cropping tool, and then we thought about it: can we migrate it or do we do something else, and it was very difficult to migrate it. And we looked into the code and saw what they did and what we could do.
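To illustrate the WeasyPrint approach just mentioned (the cropping story continues below), generating a PDF really is just HTML plus CSS. The markup here is a made-up stand-in for the real server-side template the project would fill with motorbike data.

```python
from weasyprint import HTML

# Stand-in markup; in the real project this would come from a template
# filled with the product data.
document = """
<html>
  <head>
    <style>
      body { font-family: sans-serif; }
      h1 { color: #003399; }
    </style>
  </head>
  <body>
    <h1>Motorbike data sheet</h1>
    <p>Specifications go here.</p>
  </body>
</html>
"""

# Render the HTML string straight to a PDF file.
HTML(string=document).write_pdf("datasheet.pdf")
```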
So they used the cropper as a jQuery application and we built it the same way, just more simply, so you don't have all the functionality, but it's a good first step to using cropping in Plone 5 and maybe this is a thing we can put into the community. The next thing is the SEO optimizer. It's a thing from Quintagroup, and it's also a product which is not migrated yet, and this is one of our most-liked products — we use it on every page — because you can edit the metadata and it's very good for SEO optimization of course, and we had to do something different there too, create something different, and we did it with the rules, the rules XML from Diazo. So we put our metadata somewhere in the content and it just gets replaced on the same page. So it's really a small thing but you can do a lot with it, so we would love to get this migrated too. And the third thing is PloneFormGen. It's based on Archetypes, so it's not for Plone 5, but from what I've heard now in the last keynote, there is collective.easyform, so we should definitely look into that. Now we use PloneFormGen, but collective.easyform is also Dexterity, so there's a good solution for that. But these are the main things we want on our page, because they give us a good amount of editable stuff and they're just add-ons for Plone, so you can just install them and they work. And then there's plone.app.testing, so you have everything to test, like white box, black box, acceptance tests, which you need to get a secure feeling about your page, and it's all in plone.app.testing, so it's all that we need. It was very good. Also the documentation: there's Sphinx, so it's automated. You can create an RST file and it just generates the whole documentation, so this was really helpful for us. So in general, when I list up the pros and cons: in Plone 5 now we have a good-looking interface in the backend and an easy integration with Diazo. We have an interactive editing backend, so there's all the JavaScript implemented, so you can do everything on one page. Of course the ZODB is an object-oriented database which is very secure, and the security is also on the application level, for example the workflows — somebody can just type in a URL and the security still works. It's on the lowest level. There's security for you so nobody can do things with no permission. It's infinitely extendable with a ZEO cluster, so the page can grow as much as it wants. You can add more ZEO clients and do it with a load balancer. It helps us a lot to get every big site running. And of course there's plone.app.testing, plone.app.caching, Sphinx and so on, which helps us a lot. On the cons side, as I mentioned, there is just a small amount of extensions that are already migrated for Plone 5 — like I said, the SEO optimizer, the cropping tool or PloneFormGen, but as I said, there's collective.easyform we have to check out. Now I want to throw the question into the round: what do you think, was it a good choice to use Plone? You just have to stand up and say something. Yeah? What were your traffic numbers? Were your content editors happy using the features of Plone to put in all the information about the motorbikes? How was the project received by the client? The project was received very well, and you can do a lot with, for example, Varnish or something to get good performance. So this was a good part. So it doesn't, yeah, it doesn't — I don't know how to say that — it doesn't fall down on speed. This was not a problem for us. Did you start the project — you did it in October?
Yeah, right. This was a really short project, but there are other Suzuki sites we did earlier, and the tile system goes further back with other pages — it started in Plone 4 for us. So this motorbike site was like, I think, four months ago. There's a lot of experience or connection with the community, or knowing what's going on, or having worked a long time already with Plone. Yeah? So for the feed generation there are some very good modules for that, EAPF for example. The feed stuff — I know it's still Archetypes, but we created it eight years ago, Products.feedfeeder, that does it, but it's Archetypes, not Dexterity, so that might work. Okay. Image cropping was made available for Plone 5 in March 2016. Okay. So there's a lot of things that I think, okay. Well, of course it's not criticism. I'm looking back at myself like, wait a minute, you're doing all this stuff when I know that a large part of these modules have been created for Plone 5, so that's something we really have to fix, because maybe it could have made it easier for you to have part of this functionality already given to you by using the upgraded module, if you had known it existed. Yeah, okay. Yeah, maybe that's the thing. Yeah. So I'm curious — I don't know what else the website does other than serve pictures and brochures of the product, for example — but is there an online quoting system? Is there sort of a contact? Like if somebody's interested in a particular motorbike, do they interact with the site more to get more information? Do you do ordering? Can somebody buy a motorbike through the site? So you can order — in the future there will be functionality to order, not to buy a motorbike, but you can order a test drive. So there's a form where you search for your motorbike and it shows you, okay, 10 kilometers from here it's in this shop and you can try it out for like one week, and you fill out the form and then, yeah, it goes on from there. So did they add all the product data themselves, kind of manually then? Like as a content type, how was that handled? Sorry, please. They went in and added all the product data, like for each bike. So there's an API — they have a database, it's called the PDB, and that's where all the motorbike data came from. So we used this Python module Sats and created this as a Plone content type, but automatically. So we have a cron job which runs nightly and creates the new motorbikes. Okay, and thank you very much.
|
What are the quick wins for companies using Plone for consumer oriented websites and applications? Who uses Plone? The list is quite impressive: universities all over the world, government agencies, NGOs. But where are all the Fortune 500 companies, the big players? Is Plone not made for consumer websites? Our talk highlights a case study from a leading worldwide automotive and motor bike manufacturer using Plone in a B2C context. We will walk you through one of our projects, mainly from a technical perspective: What are the quick gains for companies using Plone for consumer oriented websites and applications? Based on our customer’s main requirements we will discuss what technologies, interfaces and add-on products we used. What are common customization necessities and what are the development challenges in large-scale projects? Was Plone ultimately a good choice?
|
10.5446/55310 (DOI)
|
Is this thing supposed to be on? Can you guys hear me out there? Okay. I'll fix that. Microphone on? Hello? Okay. Ready? Okay. Awesome. All right. Can you guys hear me? Hey, it works. Okay. Sweet. So, if you're not here for Plone vs. Drupal, you're in the wrong room. So, I guess, yeah, we had a good crowd today. Awesome. I have to drink a lot of water during the presentation. My throat has... I've lost my voice. That's just bad for presenting, typically. Yeah. So, don't forget, as a guest of the Plone vs. Drupal talk, we have goodies for you guys. We've got some M&Ms. I'm running out. You're running out. So, raise your hand if you don't have any and you would like one, because we will run out of those. That would be kind of funny. Okay. Awesome. That's better. Now I can see my slides. I guess we're ready to roll. We'll go ahead and get this thing kicked off then. So, I've had a really interesting journey recently. I spent the last three months every Friday afternoon drinking beer and looking at Drupal with this fine man. So, this is... If you don't recognize me, this is the person I'm talking about here. This is Doug Vann. He and I have basically sat down and decided we wanted to do a civil comparison of Plone vs. Drupal. We really wanted to understand where we were similar and where we were different. And the reason I was so excited is because Doug actually lives like 15 minutes from my house and he's been doing Drupal since 2007. He's actually one of the kind of premier Drupal trainers. He goes all over the world training folks about Drupal. He presents at all the DrupalCons. He's basically the Calvin equivalent in the Drupal world, doing the same kind of things I go and do all the time. So, it was almost too perfect that we found a person who lives so close who could talk about Drupal so well. So, yeah, after much beer drinking and much talking and going back and forth, we would finally settle in and actually get a little bit of work done and talk about various areas of Drupal itself. Oh, and Doug, his first computer was a Commodore 64 in 1983 and he taught himself BASIC. So, he's not a computer science guru. He's a self-taught programmer, kind of the same as me. I am not a computer science person, but I taught myself programming and computers so that I could go out and make cool websites. And Doug falls right in line with almost the exact same thing as me. So, what we did is we sat down and we had a huge list of what we thought were kind of the main areas we probably should try and hit. So, I'm going to try and go over all these with you guys today. We'll see how much we get through in the 40 minutes we have allotted. It seems I can go a little over since we started a little late. But that's basically the methodology. We would sit down and then we'd kind of start talking. I'd figure out if it fit in one of these sections and I'd start writing in this big Google Doc that we did. And actually, all the material that I talk about today — I mean, the slides will be posted up online, the video should go up online — but we've actually put together a mini Plone vs. Drupal website inside the sixfeetup.com site. So, all these sections are listed on that site there. All the screenshots that are in the presentation are going to be listed up there as well. So, if you actually want to refer back to kind of the source information for where all this came from, it's actually all up on our website. Because it's been quite an interesting journey.
And also, the last comparison that we did of Plone vs. Drupal was back from 2010 and we were comparing Drupal 6.19 to Plone 4.0. So, if that tells you anything about the age, we decided we'd kind of blow the dust off this process and go do it again and see how things have changed, if they've gotten better or worse, or how we compare. So, we'll go ahead and kick it off with — obviously, if we're going to compare them, we'd better talk about how we're going to install and deploy our instances. This is mostly going to be about the download and install experience as a new developer coming to that product. So, we both have a similar download experience. I think with each of them you're three clicks away. You know, obviously, on Plone.org, it takes about three clicks to get here and you're downloading a tarball and installing that into your system. Yeah, basically — one of the really nice things about Plone, if you're new to Plone and never been in the community, there aren't a lot of dependencies you have to get installed into your system, especially if you're like on a Mac. We've taken care of this pretty well with the unified installer experience of just run install.sh. It gets all the dependencies and compiles Python. It does all the little, you know, bits for you. And then you start the instance and point your browser at it. There isn't too much more to kind of get going. Again, kind of the benefits here of Plone: very few dependencies are needed to get started. There's no requirement really to run Apache or have a database server or any other kind of thing because we're really relying on the Zope application server and the embedded database, you know, the ZODB, to get started. So as a newcomer, there are not a lot of moving pieces to get going. Now the downside is, it is a very large download. So start your download, go get a coffee because it's going to take a little while. And then running the buildout that the install.sh kicks off takes a little bit of time. So again, you may have to go get your croissant and come back because it takes a while to get going. Now the Drupal experience is very similar. About three clicks away, you are downloading the tarball, a considerably smaller tarball, and you are basically ready to start the installation process for Drupal. This is where it gets a little different. You basically need to have some stuff already on your computer, whether you're using MAMP, WAMP, or like the Acquia Dev Desktop or something, or you've installed Apache on your own with your own mod_php. You have to have a MySQL server already running and set up, you have to create the database yourself before you actually get going with Drupal. So there's a little more to do. It's documented very well. They do have great documentation on getting started, but you are going to have to know some things ahead of time before you can actually get started with Drupal. So theirs is: you extract into your web root, set up your database, head to the browser, and then you actually run their install wizard through the web. You may — yeah, actually, please, please ask questions as we go because there's lots of stuff and it will be difficult for you to remember which section you want to ask a question about. Yeah. What's that? Yeah. Yeah, and I think it would be nice if we could get the Heroku button put up there. It does work really well. I was quite surprised. It installs. Yeah, because we don't have. Yeah. Right. Right. Oh, I agree.
And that's actually one of my kind of findings, that Drupal is very, very approachable because they have put things like this in there, you know, try a hosted demo. I mean, it's like one-click installers and you're good to go. So yeah, you don't have to download and install, but I was trying to do it from more of a developer standpoint. If I wanted to actually work on Drupal and be productive, I've got some stuff I've got to do on my laptop before I can do that. Now, it's interesting with Drupal as opposed to Plone. So you just saw kind of the core, what is called the core Drupal download button. But Drupal also has these things called distributions. I don't know if anybody here from Drupal is familiar with the concept of distributions. So what they've got are these kind of pre-built sets of add-ons maintained by somebody so that you could do, for example, a social portal — an Open Social portal. This is a distribution of Drupal with basically core plus X number of add-ons. They also put in, which is really nice, some sample content to kind of give you a feel for how the site should work. And we don't have that. We really kind of miss out on that. I mean, we put in news, events, and members in a kind of main nav to start with with Plone, but it doesn't give you a feel for, like, example kind of content, how user profiles should really look. And this gives you a full kind of in-context experience. And there's a bunch of these. Now, not all of them are compatible with Drupal 8. So I will be talking about Drupal 8 specifically, unless I say otherwise. Drupal 8 has — let's see if I've got it in here — this is another one. This is like a sports league, so if you're running a team for your kids or whatever, they've got sites where you can put up sports league stuff. Some of the more popular Drupal distributions are not compatible yet with Drupal 8. For example, they have one called Open Atrium, which is like an intranet-in-a-box type solution. They have one called Commerce Kickstart, which is also very popular for doing e-commerce online with Drupal. But those aren't compatible with Drupal 8 yet. And we'll talk about compatibility in a bit. But yeah, this is kind of that installation process. So once you've downloaded and installed, you still have to go through kind of a next, next, next, finish installer where it asks you some questions about your database name and password and things like that to get going. But once you are installed, the upgrade process is kind of nice. It does tell you if you're out of date. These are some things that, if I put these on my wish list of what Plone could do, I really wish Plone had this built into the control panel where it would check for updates and tell you when you're out of date or when you need a hot fix or a patch. So because of security track records, I think they've built this in to keep people really aware of whether they're out of date or getting out of date with their current Drupal versions. Now Plone doesn't really have distributions like that. But we do have a couple of groups that have gone off and built some interesting things on top of Plone. I don't know, I didn't know how to kind of classify these, but I know the Wildcard group is doing Castle CMS, which looks really interesting to me. They're kind of bundling opinionated technologies together to give you a really nice experience. I don't think Cyn.in has migrated to Plone 5 and I don't know how relevant they still are, but that was an intranet, like a social intranet portal.
And then the Quaive project is based off of Plone 5 and also has an open source version of it that you can download and try out and install. So I know if you're in this room, you won't be seeing the Quaive demo, but I highly recommend you guys check that out because they've done a really good job of addressing usability issues and making that guided experience of an intranet and knowledge management tool super easy. So we're going to talk a little bit about the out-of-the-box experience. It's just kind of general features you get without having to add any add-ons into the system and also what kind of content types and things are available as you start. So your very first experience with Drupal, firing it up, you log in, they have the edit bar across the top, kind of like our Plone toolbar. This admin button here, it can actually be pushed over to the side, kind of like ours — we can have it on the top or the left side. Drupal 8 focused on a lot of usability fixes. So they recognize — I think they've got a lot of people in their community, kind of tremendously more than maybe the Plone community does — but they focus a lot on usability, UI, UX. Drupal 8 finally comes out of the box with a WYSIWYG editor installed. It used to be that if you wanted to use what-you-see-is-what-you-get editing through the web, you had to install an add-on module, which — we'll talk about add-on modules later — could be a double-edged sword, because if you get stuck at a certain version of an add-on module and you can't upgrade, you really kind of get hosed. So it's nice to have that built into their core features now. Another big usability win is the icons; apparently they were a big deal for the Drupal community to put there in their toolbar. They said they got a big win in testing. And they have a package installer. So directly inside of Drupal, you can actually go install add-ons or modules from Drupal.org into your system without having to go to the file system. There's actually, I think I've got a screenshot of it in here later, but you basically just paste in the URL of the thing you want or upload a zip file that includes the module you want to install. It installs it, activates it, and you're good to go. So they really kind of simplified that process for people who are new and getting used to installing add-ons into their product. Let's see. Yes? How much of the usability work came from Acquia? I think a lot of it is the community. I mean, we talked a little bit about Acquia and their role in the community. Obviously, they're a big force in their community because they kind of have a central group that has a lot of money and investment. But their focus is strictly on, most of the time, strictly on hosting. So Acquia is the group that the founder of Drupal works for. And they try to basically focus on providing an enterprise-grade, super bulletproof hosting platform to put your Drupal site into. They try not to compete with the Drupal integrators and developers who are doing day-to-day work on Drupal sites. But the community has broken up into teams kind of like we are. You know, they have a security team and a core team and this and that. So I think they're mostly community-driven. At least that's the impression I got through this process. So what I thought was interesting is, in a basic Drupal install, you have two content types available to you out of the box, which seem fairly similar to me as well. You have an article and a basic page.
Now Drupal does include a book content type, but it's not active, and a forum content type that is not active. They used to ship with a blog and a poll, but in Drupal 8 they removed those from the core product. They really do guide people toward custom content type creation through the web versus add-on modules or standard content types that are already built in and ready to go for you. So again, that is interesting, that basically you've got two content types as an out-of-the-box experience. I thought that was kind of limiting. Now what they also have introduced in core of Drupal 8 is a thing that used to be a contrib package, like an add-on for Drupal, called Views. And you could think of Views as basically the equivalent of our collections or smart folders. And they do a lot with that, kind of like we would do if Mosaic was, you know, primetime and everyone was using Mosaic now, or using tiles and blocks and things — they use Views to make basically collections of content show up anywhere inside the site and make layouts for different kinds of pages. So that's their technology for that, it's called Views. Now they also have users, files and comments, so kind of standard stuff like we have as far as user management — they exist as entities inside the site that you can refer to, but they're not full-fledged content types. Now Plone, when you install it out of the box — we've also done a great focus on usability and redesigning from Plone 4 to Plone 5. I don't have to go too deep into this because I hope you people know what this thing looks like. We've been talking about it a lot, but what I really wanted to show off was, I still like the fact that we have a great set of built-in content types that you can use right out of the box. I think this helps guide new users toward managing content right away. The old argument of the product versus the framework: Plone is nice out of the box because you can use it right away. Someone can install it and start managing content. They don't have to do a lot of configuration, so they don't have to set up their own content types. Drupal appears to be more of a, you install it and then you'd better start doing some configuration to it first before you really want to start using it. You want to set up your information architecture, you want to set up your content types and things like that. We have a great set of out-of-the-box content types. We also have users and groups management, discussion items — those do exist as well. Now, when you get into products and modules — this, Plone 5, I don't think we went forward in this direction with products and modules. Right now, I even kind of dressed this up by pointing to the new beta of PyPI. But basically, we're sending new people directly to PyPI without much guidance about how to find add-ons. It's very hard to find a Plone 5 theme in PyPI. You basically have to know how to work the checkboxes, and even then, the results are iffy because you're relying on people having filled out the classifiers correctly in their setup.py. I don't think this is a good situation for new people coming to the community because it's hard to navigate. I mean, we've kind of got some suggested add-ons on the main page for our site, for Plone.org. But I think we can do a better job here. Now, of the add-ons that are in PyPI — yes? Oh, good. I'm going to be excited when this is what PyPI looks like when you go to it. The current one — I think 1998 called, they want their site back. They do a great job.
I mean, they're working really hard and keeping that thing very, very stable for us. But yeah, I'm looking forward to this new one — this is Pyramid-based, the new PyPI — which is pretty awesome. But the Plone add-ons inside PyPI, there's about 2,800 of them listed. But once you filter down just for Plone 5, you can see right there, we've got 232. Probably half of those are Plone itself. So there's not a lot of add-on packages that are compatible with Plone 5 yet, which is a little bit disappointing. So not a lot of it ready to go yet. Now Drupal, on the other hand — and we're talking about Drupal 8 — so Drupal has over 35,000 modules available in their download and extend, kind of their add-ons section. Now the issue is — no, I was going to say this is definitely, again, a double-edged sword. I think because Drupal has typically shipped with less core functionality out of the box, they expected most people to build modules to extend it to get to the level of a Plone. I mean, probably in Drupal 7 compared to Plone 4, you had to add in about 35 or more modules into Drupal just to get to kind of feature parity with what Plone gives you out of the box. And once you've added those 35 modules in, who's to say they're going to stay maintained and upgraded, and there's a lot of lock-in that can happen. Now when you do filter down by only Drupal 8, you're down to about 1,700 packages. So they have the same — you're going to see a lot of parallels between our communities. We started nearly the same time. We released Plone 5 and Drupal 8 at nearly exactly the same time. And the adoption rates have been about the same kind of path. I'll show that later; I've got a graph showing kind of the adoption of Drupal 8. It's been slow. And the developers have been slow to update their add-ons for Drupal 8 as well, which is, like I said, a problem, I think. Yeah, so there's not really 35,000 modules available. There's far, far, far fewer than that. But here is that one cool feature I think we should add into Plone, which is this: if you want to install a new module, you just come in here and paste in the URL from the Drupal.org site. It goes and grabs it. I would hope they're doing some kind of code signing, signature checking, hash stuff. Oh, boy. Well, we should do it like that. Because it is a cool feature for newbies. If someone's coming to the site and they want to be like, oh, I really want to try PloneFormGen. Now, how do I get it? Oh, well, you're supposed to add it into your buildout or your setup.py and you go rerun this thing and then that's how you get this add-on. This is way, way nicer from a new person's perspective, not understanding the whole ecosystem of how you add modules and add-ons into your site. Which PloneFormGen is a great add-on. I always recommend it highly. And they've addressed this a bit with the Drupal 8 stuff. I'll talk about that later when we get to the security slides. But the content editing story between the two sites: Drupal 8 uses CKEditor, which is available as an add-on for Plone. But that's now the default out-of-the-box editing experience. And obviously, Plone is using TinyMCE 4, which is quite nice. Now, the big difference is how they lay out their content. So we'll talk a bit about — you know, okay, so yeah, here's Plone with our nice, you know, TinyMCE 4. I'm pretty excited we got that finally upgraded. And now I'll talk a little bit about editing content.
One of the nice things about editing content — and this is probably our super secret sauce right here — is the fact that our workflows are so extendable and so tied into security. There's just a lot of benefit to being able to track the state of items, have multiple states and multiple kinds of transitions, and all the things we can do with our workflow system just kick the snot out of what you can do in Drupal. So in Drupal, here's CKEditor. You see it's got kind of a very stripped-down toolbar there. You can actually change what kind of toolbar you get right from the edit box. If you're more of a power user of CKEditor, you can get more buttons available to you, or you can go to plain text. And all the button sets are editable through their control panel, kind of like ours are for TinyMCE. So again, very, very similar. Now, brace yourselves for their whopping workflow setup here: this is pretty much the workflow. Save and publish, or save as unpublished. It's on or off. There's not really a concept of moving things through states in Drupal. So that's kind of the biggest area of departure and difference here: their workflow system is on or off. So it's great for simple use cases where people aren't trying to do, say, vacation leave forms through their intranet, where they want to have approval processes going on. They're not really trying to model business processes here. They're just basically saying either the page is visible or the page is not visible. And we'll talk a little more about that in the security section. But they do have some workflow add-ons. They've not been maintained, unfortunately. The last update to the Workflow add-on was in 2014. There's a Rules add-on that apparently replaces it, but I think it's still in beta for Drupal 8 at this moment. So there are not really that many great options even to get to a more robust state-machine-type workflow there. So when you want to customize your product and you want to go hire some people to actually do some development for you, what is that going to cost, typically, in each community? So I asked around, and we kind of looked at this: the typical going rates for independent contractors in the Plone community are anywhere between $50 and $100 an hour, and the typical consulting companies charge up to around $185 an hour. That's kind of the going rate for consulting work. In Drupal, there doesn't seem to be much of a distinction between freelancers and shops. There are a couple of big shops, but really it's about the same pricing: about $50 to $200 an hour for Drupal resources. Some of these pretty high-profile sites have got some dollars — that's why they kind of got that top end. But there's a huge amount at the low end, like $50-an-hour Drupal developers that are out there. Now, when it comes to customizing Plone, you've basically got buildout. You want add-on packages for your site if you want to make a custom theme or content type. We typically make a custom policy package, which is the place where you may put your workflow policies. You may save out GenericSetup exports of your settings, things like that — that's what you'd put into these kinds of packages. What's nice with custom development now in Plone 5 — and this was in Plone 4.2 as well — is the Dexterity custom content types, being able to do this stuff through the web. This gets us much more in line with the Drupal 8 or Drupal 7 experience of creating their content types through the web as well.
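To make the workflow contrast above a bit more concrete, here is a minimal sketch of driving a multi-state Plone workflow from code with plone.api. The transition names assume the stock simple_publication_workflow, so treat the specifics as illustrative.

```python
# Runs inside a Plone site (e.g. from a browser view or a setup script).
from plone import api

portal = api.portal.get()
doc = api.content.create(
    container=portal, type="Document", title="Vacation request")

print(api.content.get_state(obj=doc))       # typically 'private'

# Walk the item through review states instead of a simple on/off switch.
api.content.transition(obj=doc, transition="submit")   # private -> pending
api.content.transition(obj=doc, transition="publish")  # pending -> published

print(api.content.get_state(obj=doc))       # 'published'
```

Each state carries its own permission settings, which is what ties the workflow so tightly to security.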
So those people are going to be a lot more comfortable now coming over to Plone and being able to, basically, click, click, click, build some content types and be productive inside of Plone. And what's also nice, we can round-trip — they have the same feature where they can basically create their content types through the web and then export them out as YAML to the file system. I love the fact that we can take these and export them out as XML and actually use them inside of our content types package and then redistribute that as an add-on if you find you've made something that's generically interesting to other groups. Now, customizing Drupal 8 has actually gotten a little more sophisticated — or complicated, depending on who you ask in the Drupal community. There's actually been a fork of Drupal at Drupal 7 called Backdrop, because Drupal 8 is a major departure from what was the typical way of developing core Drupal and making new Drupal add-ons. They moved to this new Symfony, basically a web application framework. They ripped out the whole bottom end and replaced it with, basically, a new component architecture system. As I was talking to my friend Doug about Symfony and customizing Drupal — he is not a fan. He's like, you have to have a computer science degree anymore to be able to customize things. You have to dig down into like five different directories and edit ten different files to be able to add something to Drupal. And I go, you sound like you're talking about Plone. You would be very comfortable as a Plone developer looking at the Symfony framework. Basically, what they've done is mimicked a lot of things we gained in Plone 2.5. So when we incorporated Zope 3 into Plone, we started getting the component architecture stuff — adapters and views and all those kinds of nice things that we got in Plone 2.5. They just got that in Drupal 8. They can export their settings now out to the file system, kind of like GenericSetup. So again, I think we would be fairly comfortable with this new set of tools, but I think it's been hampering some of the developer adoption of Drupal 8 because they did such a radical change on the back-end framework. Now, when you're customizing Drupal, they do have a lot of different ways to do it. This is one area where they've got a lot of flexibility. There are some cool command-line tools such as Drush, the Drupal shell tool, which has this thing called a Drush make file. So you can do the templating and boilerplate code kind of like we do with mr.bob or what we used to do with Templer or ZopeSkel. Same thing with Drupal Console. And they finally in PHP have a way of basically pip-installing and pinning versions, and that's called Composer. So if you start doing any Drupal, you want to make sure you get familiar with Composer, because that allows you to pin versions so you don't get, let's just say, a bad experience when you go to run an upgrade and all of a sudden you've got new versions of packages. But they do still rely on a heavy through-the-web experience. The typical integrator or Drupal developer is going to start through the web. They're going to create their information architecture and their content types there first and then export them out to the file system so they can move them into their various other environments. Here's their custom content type tool — basically, cool like our Dexterity, you can create your own types. I've created a... what is this, a recipe. So this is a recipe content type.
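A quick aside before the recipe walkthrough continues: since "component architecture, adapters and views" can sound abstract, here is a tiny generic sketch of the adapter pattern with zope.interface and zope.component. The interfaces and classes are invented purely for illustration, and registration is normally done in ZCML rather than in code.

```python
from zope.interface import Interface, implementer
from zope.component import adapter, getGlobalSiteManager, getAdapter


class ITitled(Interface):
    """Something that has a title attribute."""


class ISummary(Interface):
    """Adapter interface: provide a one-line summary."""

    def summarize():
        """Return a short text summary."""


@implementer(ISummary)
@adapter(ITitled)
class TitleSummary(object):
    def __init__(self, context):
        self.context = context

    def summarize(self):
        return "Summary of: %s" % self.context.title


# Registration is normally done in ZCML; doing it in code for the sketch:
getGlobalSiteManager().registerAdapter(TitleSummary)


@implementer(ITitled)
class Page(object):
    title = "Hello"


print(getAdapter(Page(), ISummary).summarize())
```

The point of the pattern is that behavior gets attached from the outside, which is what both Plone and now Drupal 8 lean on for extensibility.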
You just basically click add field, add field, add field. They've got different kinds of fields — for example, text in a long format with a summary, or standard text. But it's basically a whole set of different fields, like we have for Dexterity, for building the content types. And then it gets exported out as YAML to the file system. Oh, another nice thing — we'll talk about this a bit when we get into hosting. Getting started with Drupal is actually made pretty easy if you're using one of the new hosting providers that are available in the Drupal community. One of the example ones I'll talk about is called Pantheon. You basically go sign up for an account, tell them you want a new Drupal 8 instance, it spins up a dev, testing and production instance for you simultaneously and gives you a git repository that you just clone back to your machine. And as you make changes and push, you pick which environment you're going to migrate or push that code to. So they've got kind of best practices baked in with a system like that. So if I was going to start a project with Drupal, I would use one of these services, because it really helps guide you through the development process of customizing Drupal and making sure you're doing it in a well-tracked way. We'll talk about theming now, unless you guys have any questions about customizations. So in Plone, this is an area where I'm really happy: I think Diazo has really improved our customization and theming story tremendously for people who are new to Plone. No longer do you have to learn — I had to write down the list because it was so big — ZPT, ZCML, Python code, Python scripts, component architecture, adapters, browser views... there was this almost endless list of Z-words you had to learn to be able to do theming in Plone, previously to Plone 5 and plone.app.theming in Plone 4. So I think we've made a great stride here: you can sit a designer or a web engineer down, and if they know HTML, CSS and JavaScript — and, where is Annette? Oh, if you can be an XSLT ninja like Annette, oh my gosh, you can do amazing things literally without having to leave your browser window. So I was really impressed with the presentation this morning about what you guys have been doing with Diazo rules. And it doesn't seem that unapproachable. That's probably the biggest technology you've got to learn, the Diazo rules — not too big a deal. So here's a screenshot of our plone.app.theming. It gives you the ability, through the web, to create your rules, create your templates, do all this customization. Oh, yeah — and the fact that we now have Mockup inside Plone 5 means it's really easy to bring in fancy-pants JavaScript libraries and do cool stuff with React or Moment or whatever the things are you want to try out. It's way, way easier to do that now than it was before. Drupal theming is not like what we have now in Plone 5. It's more like what we had in Plone 4 and Plone 3. You basically have this concept of main base themes, and typically when you're making a theme, you make a sub-theme and just override the pieces from the base theme that you want to change. It's interesting that the templating language they use now in Drupal 8 has changed. It used to be PHP templates, so you actually wrote real PHP code in your templates, which was a source of a lot of the security vulnerabilities that came out with Drupal prior to 8.
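To tie that recipe example back to the Plone side before moving on to theming: once a Dexterity type built through the web is exported, its schema can live in a package as XML or as Python. A rough sketch of a file-system version is below — the field names are my own illustration, not taken from the talk.

```python
# Schema for a hypothetical Recipe content type, usable behind a Dexterity FTI.
from plone.supermodel import model
from zope import schema


class IRecipe(model.Schema):
    """A recipe, roughly mirroring the through-the-web example."""

    summary = schema.Text(
        title=u"Summary",
        description=u"Short teaser shown in listings",
        required=False,
    )

    ingredients = schema.List(
        title=u"Ingredients",
        value_type=schema.TextLine(title=u"Ingredient"),
        required=False,
    )

    instructions = schema.Text(
        title=u"Instructions",
        required=True,
    )
```

The Drupal equivalent ends up as exported YAML config; the Plone equivalent ends up as a schema like this plus GenericSetup wiring, and either can be shipped as a reusable add-on.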
Twig is a templating language that we will all be very familiar with, because it was originally written by our very own Armin Ronacher. So it looks just like Jinja2 or Django templates, it's very nice, and this has actually led to a tremendous improvement in the security of Drupal 8. So again, as Plone developers — or Python developers in general — we would be pretty at home with the Twig templating system. And it's very similar to how skinning used to work in Plone, where you would override, say, a base view or a document view, whatever kind of thing you wanted to customize. You do the same kind of thing with Drupal 8: you basically override, in a sub-theme, the bits and pieces you want in order to customize your look and feel. So it's still the same thing: HTML, JavaScript, CSS. And Twig comes as part of the Symfony 2 framework that they moved to with Drupal 8. Yeah. No — so that's another big difference: they don't have a through-the-web editing experience for themes in Drupal. That's another big win, I think, on the Plone side, the through-the-web templating. Right now, if you want to customize and modify these themes or create your own sub-themes, you do have to do it from the file system. Yeah. Yeah. On top of this, yeah. Yeah, so exactly. For people making their own modules for Drupal, they provide what are called hooks. If you want to add something into a menu someplace, you make your own hook and write your own code in there, and it injects it into whatever menu you point it at. But like you said, one developer's menu hook is going to look different than another developer's menu hook, and it may be difficult to... Yeah. Right. That's where the views come in. So they have layouts with these views, they put blocks wherever they want to put them, and those are also hooks, so the module developers will make their own blocks, and they have different code styles for all those as well. When you install Drupal — and actually when you're using Drupal — this is another area where we differ quite a bit: there's typically going to be a default theme, and that's the front end that most web visitors are going to see for your site. But they also install — and you kind of can't see it down here — an administration theme. So you typically may have one single base theme and two sub-themes: one strictly for editing, the content contributor or content editor experience, and one for the web visitors who are coming into the site. So you really don't get that full in-context editing like we do with Plone, where, as I'm editing the page, I still see a lot of the page around it as well. So that's interesting: they've got this administration theme and default theme, back and forth. So you'll flip back and forth between these versions of the site. This is the administration theme, and you can see the "back to site" button up there in the left-hand corner; that gets you back into the retail version of the site itself. So this is kind of just showing the theme registry. They use YAML to set up a kind of settings file — that's how you hook things up inside of your modules when you create an add-on for Drupal. And typically your standard theme package is going to have a templates directory. Actually, in Drupal 8 you have to put your templates in the templates directory or it won't find them correctly; it used to be you could name it whatever you wanted.
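Since Twig keeps coming up: its syntax is essentially the Jinja2/Django style, so for anyone on the Python side who hasn't seen it, a tiny Jinja2 sketch gives the flavor. This is Jinja2, not literal Twig, and the variable names are made up.

```python
from jinja2 import Template

template = Template(u"""
<h1>{{ page.title }}</h1>
<ul>
{% for item in page.items %}
  <li>{{ item|upper }}</li>
{% endfor %}
</ul>
""")

print(template.render(page={"title": "Hello", "items": ["one", "two"]}))
```

The important point from the security angle is that, unlike the old PHP templates, a template like this cannot run arbitrary host-language code.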
But they have the YAML files that describe the add-on, the module, for your theme, and then all the config and CSS go through in there. Now this is the Views part of this, I believe. So you set up your information architecture. Yeah. Like — oh, you mean through the content editing experience, right? Okay. Yeah, that's something I didn't mean to dive into too deeply. Oh right, because they give you a little pencil when you kind of hover over sections of the site. Now, we turned that off in Plone, because it was a pain in the butt. But yeah, they do — so when it gets down to blocks, you'll see here there's, kind of like we have viewlets — is everyone here familiar with the viewlet system inside of Plone? There are all those little spots throughout the pages where you can inject your own snippets of code or parts of the view. They do theirs through the web with this section called blocks. And so this is where you can actually say, I want to put the site branding into various areas, the main navigation, and you configure which pieces go where. It's basically the equivalent of us configuring our viewlets — you know, throwing the nav into the header viewlet, et cetera. That's their equivalent, and they do it all through the web. Which we kind of can too, with @@manage-viewlets, but we don't typically show that to normal human beings because it's a little tricky. Now, upgrading and migrations. This again is an area where we differ, but Drupal 8 is working on this. One nice thing about Plone is you could take a Plone 1.0 site and you could migrate it step by step by step all the way up to Plone 5 and actually have a usable site afterwards. Plone comes with a great set of documentation at every release for all the things you typically need to do between versions if you want to make sure your add-on code is ready for the next version of Plone. Now, if you want to migrate content in and out of Plone, we've got, through the community — not built in, but through the community — things like transmogrifier pipelines, for example, so you can actually export and then re-import, kind of dump and reload, if you want to start fresh with your site. Depending on the age of the site and the number of custom add-ons you've put into your site, I will recommend the dump-and-reload version of it versus trying to walk it all the way through from Plone 1 to Plone 5. But if you haven't had a lot of add-ons, or it's a very clean site, it's very, very feasible to go right from whatever version of Plone straight up through to Plone 5. Now again, add-on products will be the real sticking point for you. If you're on add-on products that haven't updated their code for Plone 5, you're stuck until they do, or you remove them and put some kind of alternate technology in there. A big advantage of Plone over Drupal, given our kind of large, robust feature set, is that I typically think we don't have as many add-ons or modules installed into our sites, which makes them a little easier to migrate. On the Drupal side — so up until Drupal 8, each version of Drupal was basically a fork of the previous Drupal that started from scratch, almost scorched earth. So migrating from one version of Drupal to the next version of Drupal was typically not a priority for them. They kind of expected you to dump your content out, set up the newest version of Drupal and redo everything from scratch if you're going to actually migrate to a new version of Drupal.
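Since upgrades keep coming up: in Plone, the per-release changes an add-on or policy package needs are usually packaged as upgrade steps. A hedged sketch of what one looks like is below — the profile name is invented, and the ZCML registration that points at this function is only described here, not shown.

```python
# upgrades.py in a hypothetical policy package.
import logging

logger = logging.getLogger("example.policy")

PROFILE_ID = "profile-example.policy:default"   # made-up profile id


def upgrade_1000_to_1001(setup_tool):
    """Re-apply the parts of our profile that changed in this release.

    setup_tool is portal_setup (the GenericSetup tool); the upgrade step is
    wired to this function via a genericsetup:upgradeStep ZCML directive.
    """
    setup_tool.runImportStepFromProfile(PROFILE_ID, "typeinfo")
    setup_tool.runImportStepFromProfile(PROFILE_ID, "workflow")
    logger.info("example.policy upgraded to profile version 1001")
```

Content migrations (transmogrifier and friends) are a separate concern; upgrade steps like this handle the configuration side between releases.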
It's changing with Drupal 8. They are stabilizing their APIs and trying to make them more backward compatible, so if you're going from 8 to 8.1 to 8.2 or to 9, the idea is that it would be possible to actually migrate your content in place. But that's not been historically the case for much of Drupal. And then you have the same problem with add-on products, but at a much greater scale, because you probably have a lot more modules installed in your Drupal site. Yeah, they also have the ability for you to do migrations in and out of Drupal. So people would typically do what's called a Drupal-to-Drupal migration when they're going from, say, version 5 to 6, 6 to 7, et cetera, which dumps it out to the file system and then you reload it back in as part of the Drupal migration. So, hosting for the two kinds of technologies. Plone hosting can be done on lots of different kinds of cloud services. What I typically find with Plone hosting, though, is that it's a lot of do-it-yourself. You need to set up the machines, install the dependencies. There are hardly any push-button ways of installing and hosting Plone out there, other than, say, the Heroku button. But has anyone run the Heroku one at scale with any kind of large site with large traffic? Yeah, I haven't either, so I'd be interested to know how well that works, because that's a really compelling story for getting people started with Plone, for sure. But if you have even the most modest site, I would recommend people run ZEO using two cores on a VM someplace, with maybe about a gigabyte of RAM — it all varies based on the number of add-ons you've got in your site and the amount of content you've got in your site, but as a general rule of thumb, I don't typically go much smaller than that. I've got some pricing in here as well. So for example, something on DigitalOcean will cost you about $20 a month. On EC2 I'd recommend not going a lot smaller than a t2.medium for reasonable sites; that's about $37 a month. I was surprised Azure was so expensive. Is anyone hosting on Azure currently? That's probably why. It's a little pricey. And then obviously I have to, you know, plug my own services a little bit here, but Six Feet Up does offer kind of entry-level hosting of Plone sites at $50 a month as well. Yeah, actually I didn't put Google Compute in here. You know why? Because no one's using it. Yeah. No, I actually had a coffee with a guy from Gartner and we were talking about the various cloud platforms, and basically the enterprise is scared to death of Google Compute because of the way Google can willy-nilly kill off any platform. So at any moment you could be the victim of Google's choices. And then, like I said, the Heroku button I think is quite interesting. For free I think you can get up to about 200 pages of content in your site before you need to start paying for the Postgres dyno, because I think you get like 10,000 rows for free in the Postgres database offered there. Similar situation? Yeah. Oh, wow. True, yeah, if you've just got kind of a small brochureware site, that will work. Like I said, this has been a three-month process; that price probably got put in there a month and a half ago. And for Drupal it's pretty similar as far as the size, the kind of horsepower, you need to host a reasonably sized Drupal site with comparable traffic and amount of content.
The difference here, though, is that you can have a DIY, do-it-yourself Drupal deployment, but there are so many great services out there, like Pantheon and Platform.sh, where they just set up multiple instances for you. They actually have push-button operations: I want to bring my production data back to my test instance, I want to migrate these code changes up to the new instance. A lot of them will even go so far as, if your code is with them in their repository and there's a new version of Drupal out, some are already set up to basically splat the Drupal update automatically onto your instance, and you just need to go push a button to approve that you want that released. So they really ease the experience of deploying Drupal. You can host with Acquia.com, but if you have to ask, it's probably too expensive for you. So it is quite pricey, from what I know. Now, another nice thing these others — Pantheon and Acquia and Platform.sh — give you is that a lot of them also include services like New Relic as part of the plan. For $25 a month, you'll get Drupal hosting with New Relic stats. If anyone's using New Relic, it's a pricey service on its own, and it gives you a wealth of information about what's going on behind the scenes inside your site. So that's also pretty cool. So, web performance: both systems have similar techniques for acceleration. I mean, if you're already familiar with using Varnish and HAProxy and memcached for auth and sessions, the two systems look almost identical when you go look at their performance pages about how to actually accelerate pages. Plone, though, out of the box has some advantages. With our plone.app.caching control panel, we have the ability through the web to adjust some settings for how long you want to cache something. Basically, you can customize the caching experience through the web. Plone also allows us to purge — at least a very minimalistic purge request for the page and maybe the container it's in. The Drupal out-of-the-box experience doesn't have that. What's also nice with Plone is that, with a very small extension, you can customize that purging behavior. So if you want to make sure the front page gets purged because that item happens to be showing on the front page as well, you can do that really easily with Plone. Drupal 8 doesn't have the purge capability out of the box. There's an add-on that you would have to put into your site to get purging working correctly. Now, one thing that is really cool in Drupal 8, that we don't have, is the new BigPipe — Facebook's BigPipe. Have you guys heard of BigPipe yet? That is a really cool technology. So if you've gone to — who here is on Facebook? Yeah, okay, I'm going to just raise my hand. If you go to Facebook and you've ever noticed that when you first launch the page, you see gray placeholders that look like the posts on your wall, and then all of a sudden they kind of filter in with the real ones — that's BigPipe. Basically they consolidate the non-personalized version of your page into a very small number of requests, get that to you quickly, and then they stream in the dynamic bits behind the scenes. Drupal 8.1 ships with this out of the box, which, again, is a pretty fantastic performance feature and something maybe we should even look into in the Plone community, because it helps with — like I was talking yesterday about Google AMP, yeah, exactly.
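Back on the purging point from a moment ago: the hedged sketch below is not the plone.app.caching or plone.cachepurging API — it just shows what a cache purge amounts to at the HTTP level, assuming a Varnish (or similar) front end configured to accept PURGE requests from this host.

```python
import requests

CACHE_FRONT = "http://localhost:6081"   # assumed Varnish address


def purge(*paths):
    """Ask the caching proxy to drop its cached copy of the given paths."""
    for path in paths:
        resp = requests.request("PURGE", CACHE_FRONT + path, timeout=5)
        print(path, resp.status_code)


# e.g. when a news item changes, also refresh the front page that lists it
purge("/news/some-item", "/")
```

Plone's purging machinery does essentially this for you on content changes; the "small extension" mentioned above is about teaching it which extra paths to include.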
Same kind of thing to help accelerate, and I think that's going to be really important for mobile devices on varying degrees of nice connectivity. So that's one area where I think Drupal has added some really nice performance capabilities. I'm sure this is the slide everyone's been waiting to hear about, right? It's security. Each community, Plone and Drupal, has a dedicated security team. How many people are on our security team? I couldn't find a number. Steve, do you know? Or Nathan? Where's Nathan on here? I guess it's like five or six people on our security team. They're very busy. They do great work. But is that secret, right? Drupal's security team is 40 dedicated people. Some of them are paid by their employers for at least maybe half time, or some percentage of their work time, to be dedicated to Drupal security. They obviously have a lot to do, as we've seen in the past. But what's nice is that Plone has only had reports of a very few sites being hacked, and those sites basically weren't patched within six months of the release date of the patch. So it's very rare that Plone sites are actually going to be exploited, especially if people are vigilant about applying their patches. So the zero-day claim we make, I think, is pretty important for people who care about security. I went to that exact page and I only saw like two people because I wasn't logged in. Okay, so it's all secret. You guys are very secretive with your security team membership. Oh, you know, I got these slides slightly out of order. So that's kind of it when it comes to security teams. One of the nice things about Zope and Plone is that we were built on Zope. I think that's a big benefit for us when it comes to security. We've basically been able to leverage that built-in roles and permissions system, and we've augmented it with our own users and groups, so we have a really fine-grained way of customizing who can see what, when and where. And when you combine that with our awesome workflow tools, you really get this tight experience when it comes to ensuring people can't see things when they're not supposed to see them, or making sure pages get published — the whole experience is really nice. And I like that, out of the box, we have the built-in roles of contributor, editor, reader, reviewer, site manager and manager. When we look at what Drupal does, which we will in a second, you'll see it's a very different experience. We also have very easy checkboxes here, and what these do — for most people who don't know, these security-settings checkboxes in the through-the-web Plone experience actually manipulate this, which is something I tell people never to touch, because this is not meant for any human to ever touch. This page is long — if you guys have seen this, this is the Security tab at the root of your Plone site. And again, if anyone ever learns anything from me, it should be to never touch this screen under any circumstances. Let Plone do it through the Plone UI, because that's what it's doing for you here. But this is also part of our power — the fact that we get down to such a granular level with what people can see and do. And you can think of all these permissions in Plone as little locks or little gates that unlock or show functionality throughout the site. Drupal doesn't have nearly as robust a system.
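For a concrete taste of that roles-and-permissions layer, here is a small plone.api sketch. The usernames and folder are invented, but the calls themselves are documented plone.api ones.

```python
from plone import api

portal = api.portal.get()
folder = api.content.create(container=portal, type="Folder", title="Reports")

# Give one user the Reviewer role only inside this folder (a local role),
# which is what the Sharing tab does behind the scenes.
api.user.create(email="alice@example.com", username="alice",
                password="s3cret-password")
api.user.grant_roles(username="alice", roles=["Reviewer"], obj=folder)

print(api.user.get_roles(username="alice", obj=folder))
```

Because workflow states map permissions to roles, granting a role like this in one place changes what that user can see and do throughout that part of the site.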
This is the equivalent of their security tab. This is out of the box. You can see that there are three roles, two of which are basically virtual roles like we have: authenticated and anonymous. If you want to do more than this, it's up to you to design your security scheme by adding in your own roles and then, again, checking the boxes — it's a long page, not as long as ours, but it's a long page of managing checkboxes. That's how they do their roles and security inside of the Drupal system. So again, I think it's an area where we really excel, because we're built on top of a system that thought about this first. I think Zope was made with security in mind. Yeah — so now they do, in Drupal 8: all the Drupal 8 settings are exportable into YAML, just like we have a GenericSetup exporter. They have a similar thing here as well. So they can move those from environment to environment, for example. Yeah. Yeah. Right. Now the individual piece of content is either published or not published. Not that I could see out of the box. We didn't dive too terribly deep into it, but we have got an amazing system when it comes to local roles. If you haven't checked out dynamic local roles with borg.localrole, that is amazing — the fact that we can compute roles based on any fact or data inside the system, or even external to the system. Yeah. Right. Right. Right. There it is going to be based on the type of content: certain users may have, like you said, access to create this kind of content versus that kind of content, but not where they put the content inside the site. That is handled by their Views system. Right. This is about the only other security setting I could dig up in the Drupal site — kind of the registration and cancellation settings. So that was kind of the equivalent of our security tab inside of Plone. Now, when it comes to security track records, I really only wanted to compare Plone 5 and Drupal 8. They are not too far off from each other. When comparing exploits, numbers really don't matter too much in security, because it's about track records and about severity. There are so many different factors to take into consideration. So I wouldn't look at these numbers and be like, oh, well, Drupal 8 is way less secure than Plone. They have got a very vigilant security team. They've changed up their framework quite a bit to address a lot of these kinds of security issues. And they're right on par with, I think, the improvements that are showing here in the numbers. The reason these numbers are so low is that there is such low adoption of Drupal 8. This graph here is showing Drupal core usage. And so you see, here is 2016. Here is January 2016. This line here at the very bottom is Drupal 8. Yeah. That's Drupal 8. The big ones up here are Drupal 7 and 6. Yeah, 7 and 6. That's total. Oh, that's total, yeah. But this is Drupal 7 right here. So Drupal 7 is still hugely popular out there. Now, when I say hugely popular, it's really not that popular. If you look down here, this is WordPress usage. WordPress accounts for about 26.4% of all CMS sites out there. Drupal is about 2.2%. So in terms of scale, Drupal is dwarfed by WordPress, much the same way we're dwarfed by Drupal. So you can see we're kind of in the long tail of CMS usage out there. So the reason we've not seen a lot of security issues with Drupal 8 is definitely, I think, because of the adoption rates for Drupal 8. The numbers here — the total number there is 1.2 million sites reporting back for Drupal.
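Back to the dynamic local roles mentioned a moment ago: borg.localrole lets you plug in adapters that compute local roles on the fly. The sketch below is written from memory, so treat the import path and method names as assumptions to verify against the package; the business rule itself (a department head getting Reviewer in their department's folder) is invented.

```python
from zope.component import adapter
from zope.interface import implementer, Interface

# Assumption: borg.localrole exposes ILocalRoleProvider with
# getRoles(principal_id) and getAllRoles() -- check the package to confirm.
from borg.localrole.interfaces import ILocalRoleProvider


class IDepartmentFolder(Interface):
    """Marker for folders that know their department head."""


@implementer(ILocalRoleProvider)
@adapter(IDepartmentFolder)
class DepartmentHeadRoles(object):
    def __init__(self, context):
        self.context = context

    def getRoles(self, principal_id):
        # Invented rule: the configured department head gets Reviewer here.
        if principal_id == getattr(self.context, "department_head", None):
            return ("Reviewer",)
        return ()

    def getAllRoles(self):
        head = getattr(self.context, "department_head", None)
        if head:
            yield (head, ("Reviewer",))
```

The adapter would then be registered in ZCML; since the security machinery consults these providers on every check, they need to stay cheap.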
As of last week, there were 125,000 Drupal 8 sites out there. There were only 41,000 on January 1st. So they're getting adoption, but it's still not taking off like crazy. I wish we had this for Plone. We don't have any way of — we don't report back home about our versions. I think we should, because it would be interesting to have those same kinds of numbers for us. All right, so intellectual property and community. I love this, because we are an awesome community. We've been around since 2001, and Plone 5 got released in September. We created our foundation in 2004. So I think we were ahead of the game when it came to protecting our product, our IP and our community. And, you know, 895 contributors have put code into Plone at some point along the way, which I think is also awesome. But then I heard about the Drupal numbers. They've had over 3,000 committers to Drupal. This slide says 150 — I got that off of Ohloh, and it's not accurate. They actually have a Drupal commit-tracking site out there that shows you, for each version of Drupal, how many people committed, what their names were, and what they committed. It's quite interesting. Very similar history: the first Drupal release was in 2001, Drupal 8 was released in 2015, and they started the Drupal Association in 2009. Now, the Drupal Association isn't like our Plone Foundation, necessarily. They are mostly there to raise money and run the events, like the DrupalCons and things like that. So they do a lot of promotion. But they are not the IP holder or trademark holder for Drupal. That is still solely controlled by Dries. And this is the copyright statement right from the code. I don't know how this scales when you've got 3,000 contributors, if they all own their original work that's inside the code. So it seems to be run on a good-neighbor policy so far. They've not run into any kind of problems. This part scares me right here, because if they want to change their license, what do they do? How do they go about it? It would just be an insurmountable task if they ever wanted to do something like changing their license, for example. Drupal 8 has also spawned a whole new fork of Drupal off of Drupal 7, called Backdrop. So there's a new CMS out there for the people who are staunchly "I like doing things the Drupal 7 way." Actually, Doug, the guy who helped me with this presentation, is a big fan of Backdrop as well. He kind of falls in that line. You saw the graph — there's a ton of Drupal 7 sites out there still, and people want that to continue. So they're maintaining, enhancing, and backporting some of the features of Drupal 8 into Backdrop when they can. But just be aware that if you are doing Drupal, there is a fork out there that is gaining some traction and popularity. Oh, as far as community, I forgot to mention — so for Plone Conference, here we are, this is us. But we also have lots of sprints, symposiums, lots of events where we get together. By comparison, DrupalCon this year in New Orleans had over 3,000 people. Their regional events are getting about 1,000 people per event right now. But they started their first con in 2005 with 50 to 100 people. So a kind of similar start to us, similar size. They have DrupalCon, the DrupalCamps, lots of meetups around the globe. So there's a big, huge community of people doing Drupal out there. Really nice, really powerful thing.
So in summary, I think we're seeing a lot of convergence of features between Drupal and Plone. A lot of the through-the-web content type creation that Drupal has had for quite some time, we're finally getting comfortable with, and it's nice. Drupal 8 has moved to a component-style architecture, so we would be very comfortable if we had to do work on a Drupal site. We would know how to go look up all the crazy abstractions, thanks to Martin Aspeli. Plone has always had collections; Drupal has now added Views to their core. So if you're familiar with how collections work, Views would be really, really familiar to you if you went over to the Drupal side of things. And again, for Plone, if we can get Mosaic — I think we talked about that for the roadmap — having Mosaic put into core Plone would, I think, be a big boost for us in terms of feature parity with the Drupal folks, because that would give us the ability to customize layouts and blocks on the pages. I will just say it: we absolutely kill it when it comes to workflows. That's one of our biggest strengths, and also the through-the-web theming with plone.app.theming — that also gives us a leg up over the Drupal community. The security outlook for Drupal 8 is definitely looking much more optimistic than it used to, as they get more adoption and more people actually using it and road-testing it against higher-profile sites. Unfortunately, Plone hosting is still DIY. The Drupal hosting options were really attractive. They've always kind of had push-button installers and various dashboards, but I think the fact that they've got these enterprise-grade deployment options highlights a weak point for Plone, because we don't have something that is comparable. But at the end of the day, if you had to choose between the two, it really comes down to your needs. If you are doing a large intranet collaboration site, with various kinds of security and workflow, you need to have something like Plone. Okay. For Drupal, it's Workbench. But again, I was trying to compare core-to-core features. So there's a lot of things you can add into Drupal. Yeah, it's not core. They definitely don't have that. But if you actually don't have a very technical staff and you need to onboard people very quickly to get spun up on a CMS, Drupal is actually very easy to get going with. I think they've made a great amount of effort in their documentation, the install process, the getting-going process, the whole through-the-web process, and then these hosting providers giving you that ease of use between the environments. It's very easy to get a fairly non-technical staff spun up and starting to use Drupal. I don't know what their experience will be later when they're trying to do more complex things, but I think the initial ease of use has definitely been a benefit for Drupal. So I want to thank you all for hanging with me. I know I went a little over, sorry, but don't forget to do your survey so I can get feedback so we can improve this presentation and take it other places. Oh, shoot. It should be not that. If you've used it earlier today, you should have it in your cache. So thank you all for coming.
|
An up-to-date and educated comparison of the latest versions of the two popular enterprise-grade CMSes from the perspective of Plone and Drupal experts. The talk will examine topics such as deployment, out-of-the-box experience, adding products/modules, adding/editing content, customization, theming, multisite management, upgrading and migrating, hosting, performance, security and more.
|
10.5446/55311 (DOI)
|
Welcome to this short demo talk of Quaive. Who of you has heard about Quaive? And who has seen Quaive in action before? Yeah, so this is the fourth conference where I'm talking about this project. So I'm trying to do it a bit differently, and we have just a short slot, and this is mainly focused on demoing the functionality of Quaive. So I'm not going very deep into the history of how it came about or the business model or whatever — just showing you what Quaive is. So let's play that. So obviously you need to log in. I will start with a short overview of the whole system. Very prominent in the system is the activity stream. So this is a dashboard, but you can also switch to a task-centric dashboard. We also have workspaces, which also have a stream, but they combine a stream with a document management system, and all the documents have rich previews and you can also comment on documents. We have an adaptive case management system where you can model processes — ongoing and support processes — as a sequence, as a collection of files moving through a sequence of process steps. We have a collection of apps which keeps expanding. For example, we have a messaging app now where you can directly communicate one-on-one with your colleagues. There's a library which is used to show documents to all of the company, and we have a very rich search experience where you can facet down on document type. You can filter down on tag. You can see how fast it's working — it's Solr-based. We have specialized result views for people, for images, for files, with very large previews so you can visually identify the file you want to see. You can see your collaborators very prominently and you can access their profiles — their rich profiles with activity streams, all the basic info, who is following them, who they are following, the documents they created, the workspaces that they are part of. You can see all of that, and you can follow Alice — and I've already bookmarked Alice in this example. Let's see where I am now. So if we go a bit deeper into the components that I just showed — some of the components — the stream is a very important element of what we're doing, and that's a very rich element, visually rich, where we show a lot of stuff. There is a global stream, which is what we're looking at here, and this is the input element where you can share updates with your collaborators. You can attach files to that, images; you can also tag it with concepts and mention people, and that shows up in the mention popup for the people you targeted. So you don't see the popup, the attachment selection, here, but there it is — so there's a preview generated from that file, and I'm tagging it with a topic, and I want to attract Alice's attention to this, so I'm going to mention Alice Lindstrom. Posting that injects it into the stream, and you can also like it — I'm not even showing that in the demo. If we then go to a workspace — the workspaces have a stream of their own, and this shows up in the global stream only if you have access to this workspace.
So all these updates are shared in a secure way, and you can access them on the top-level stream or on the workspace stream, and we even have document-level streams. So if I go to this specific document, I can see the rich preview here, and I need to scroll beyond that to get to the commenting system below, and there I can leave a comment on this document. So we have a conversation on a document, which bubbles up to being shown in the workspace the document is contained in, which then bubbles up to the global stream if you have access to that workspace and to the document. So it's all security-aware, but very highly integrated and a very fluid user experience. So if I go back to the workspace, I can see this update in this workspace, and I can also tag it with certain tags. I can also go to the tag stream for the tag, and there are all the updates for that tag, and I can follow the tag. And then everything I'm following — all the people I'm following, all the tags I'm following — shows up in a specialized, filtered view of the activity stream. So by default you have the global activity stream, but you can switch to a filtered activity stream, so you can narrow down on your own interests and be much more specifically informed about what your interests are. So that was communication. The foundation for our secure collaboration is workspaces, which are a combination of people and content within a trust zone. So we conceive of a workspace as a very secure area where, as a user, you know who is in there and who has access, and you don't have to think about security then. You just know who the members are — you can see the membership — and you know all these members have the same privileges as you do, except for the ones who are flagged as admins. So you can see the group of people who are collaborating in this trust zone, and then we can also go a bit deeper into the security policy, which I can inspect — and in this case I'm logged in as a workspace admin, so I can also change it. So the security policy is organized along three dimensions. The first one is external visibility, which is: is this workspace visible at all to non-members? If it's a secret workspace, it's not even visible. In this case it's private, which means the existence is not a secret, but you need to be a member to be able to go in. I'm going faster than the video here. And then there are also open workspaces, which means anybody can actually go into the workspace even if they're not a member. The second dimension is the join policy, which is: who determines who becomes a member of this workspace? The most conservative setting is admin-managed — only admins can pull new people into this workspace. Then the in-between setting is team-managed, so anybody who is already a member of the workspace can invite other people into the workspace — you can build your own team there. And then there's an even more open variant, which is self-managed, which means you can sign yourself up for a workspace; you can add yourself to a workspace. And finally there's the participant policy, which controls, once you are a member of the workspace, what your rights are on content within this workspace. And that again goes from a very conservative constraint on the left to very open on the right, and we have a lot of combinations there. In this case, for example, a produce workspace — or let me start with a consume workspace, which is a read-only workspace. A produce workspace means that you can start adding documents, but you can't publish them.
In a publish workspace you are allowed to self-publish your own documents, but you can't touch other people's documents. And then if you go one step further, to moderate, it means everybody can edit and publish everybody else's documents, which is a great way — if you have a very high-trust team, you can just collaborate on documents together. And then there's also a guest setting. So I just changed the setting here, and that's all instantaneous, it's Ajax-driven, so that happens in the back end. So I can add a new workspace, and given all these variants of security settings, we package some variants down into templates. So you can select a locked-down, secure workspace and it makes a consistent choice of all the settings I just showed you, or an open community, or a private workspace. And this then goes into all the details, but I think I just covered all of that. So this is a private, secure workspace, which is — you know, everything is to the left, it's highly constrained, so it's not visible, only admins can pull new people in, and once you're in you can produce stuff, but, again, an admin has to publish that before anybody else can see it. So that's suitable for a very conservative setting or a high-security setting. Then the next variant is a private workspace, so there you need to be a member to get into the workspace, but once you are in, it's team-managed — you can pull in new members — and the participant policy there is moderate. Finally, there's the open community, which is just accessible to everybody; you can then sign yourself up as a member, which immediately also gives you the right to publish documents within that open community. So that's more suitable for a community of interest: if you have a large network or a large intranet, people can just congregate on a specific topic — or maybe you're just organizing a party or whatever — with everybody collaborating there wiki-style. And you can see how we cover all these variants — these are like 72 variants of security settings — in a very intuitive three-slider model, so you can configure any workspace in the system to adapt to your security needs for that specific group of people. We also provide adaptive case management, which is a specialized variant of a workspace that, instead of the security model I just outlined, has a workflow security model. What you see here are milestones — the big blocks are milestones, which are security zones — and within the milestones we have to-dos. The milestones are security-scoped, so every boundary between milestones is a security boundary where the security settings change in a way that is configured beforehand by us as system admins, and a case is a collection of files which progress together through this series of states. So you typically use that for decision support and decision making, to have a bit of structure on a decision-making process without over-constraining it — which is why it's called adaptive case management, because as members of this case you're free to add and remove the to-dos at will, so the only hard framework is the security zones, the milestones themselves. And I just closed the stakeholder feedback to-do, which then enables the close-milestone button. So I, as a team member, just handed over control to the team admin, because now I don't have any buttons there anymore — everything has become read-only, because I said, as a team member, we are done, and now the team leader can inspect whether that's actually true. So he goes — we will show that later.
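Going back to the three-slider security model for a moment: the reason a handful of simple choices covers so many situations is just combinatorics. A toy sketch is below — the option names are paraphrased from the talk, and the real product has more participant-policy values, which is how it gets to the 72 variants mentioned.

```python
from itertools import product

external_visibility = ["secret", "private", "open"]
join_policy = ["admin", "team", "self"]
participant_policy = ["consume", "produce", "publish", "moderate"]

variants = list(product(external_visibility, join_policy, participant_policy))
print(len(variants))   # 3 * 3 * 4 = 36 with just these illustrative values
print(variants[0])     # ('secret', 'admin', 'consume')
```

The workspace templates ("secure", "private", "open community") are then just named presets that pick one consistent tuple out of this space.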
So you can affect what those — like "new", "prepare", "complete" — you can affect what those say? That's all configuration, that's all configuration, right. Yeah, that's all done with GenericSetup. So now the team has effectively given the work they've prepared back to the team leader. So let me now log in as the team leader, so I can see the status of this case and the way it's progressed. I can inspect in the activity stream what my team has been doing in the past week or the past few days. I can look at all the documents they produced, and I can then check the to-dos, whether they match with the documents that were produced, and when I'm satisfied I can say: I did my collegial check and I'm closing this milestone. And then the organization we built this for has this double-check model, where it then gets handed over for a second check to the legal department and financial, so you can escalate decision making that way. And if you're an admin, we also have this case management app, which gives you an overview of the progression of all the ongoing cases. So even if you have hundreds of cases ongoing, you can filter them down on the right on various fields, you can scroll them, and you have a very visual representation here of the state and progress of every case — and you can obviously flag here when a case is over the line or not progressing at all. So let me stop the video there, because we don't have that much time. The way that we're building this is different from the way Plone is normally built. We are using a prototype, a Patternslib-based prototype — for the people who attended Wolfgang's talk. This is a responsive design, so what you're seeing here is not Plone-based. This is just static-file based, but it does contain all the Ajax interactions that we have. So everything that has to do with JavaScript, HTML and CSS is in this prototype, and we just take it from the prototype when we build the live Plone system. So I can type anything I want here. The response is hard-coded. Boom. But then all the Ajax interactions and everything that needs to be done in the front end is already there, and we can just lift that and make it live. So we are going to connect that with actual database actions and actual workflow controls and whatever, but the whole richness of the system is here — and actually, stuff that we haven't built in Plone yet, or are building in Plone, like the news magazine, is already here in this prototype. So that's a comprehensive definition of all that we're doing. You can see that all the stuff that we're doing is here. And this has very clear consequences for the way that we approach development, because, as you can see, this is really a product model. It's not that you can just throw in extra collective add-ons. For example, one of the things that we do is we don't do any z3c.form at all. So we don't believe in generated pages. All our pages are handcrafted by our designer in the prototype, and all the markup in all of Quaive has been vetted that way and handcrafted that way. So we don't do schema-driven design or schema-driven development. Let me see if I can escape from this — and there it is. So there are some consequences to that that are not immediately obvious if you are coming from the background of normal Plone development, because our stack superficially looks very similar — this is just normal Plone development.
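For reference, "normal Plone development" here ends up as browser views that serve the markup lifted from the prototype. The sketch below is a generic illustration, not Quaive's actual code: the class, template path and data are invented.

```python
from Products.Five.browser import BrowserView
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile


class StreamView(BrowserView):
    """Serves markup derived from the static prototype, now with live data."""

    index = ViewPageTemplateFile("templates/stream.pt")  # prototype-derived markup

    def updates(self):
        # In the prototype this list is hard-coded; in the live system it
        # would be looked up from real content (catalog queries, etc.).
        return [
            {"author": "alice", "text": "Posted the Q3 report"},
            {"author": "bob", "text": "Commented on the budget"},
        ]

    def __call__(self):
        return self.index()
```

The prototype fixes the markup and interactions up front; the view's job is only to feed it real data.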
We create content types — Dexterity content types — we have browser views on those, we have adapters, we have themes on that. So you'd think that all of your normal development practices would apply, but that's actually not true. And you can only see that if you think of this in terms of multiple projects ongoing. So the traditional Plone development model is that you have the base of Plone, which you reuse in all these Plone projects. You throw in some collective eggs and then you start building your custom project on top of that. Now, you classically have a types egg, you have a theme, you have a policy package that configures that, and that's where your focus is. So you translate your customer's requirements, and you typically translate them first into content types. Then you slap a Diazo theme on it to make it look a bit less ugly. Yeah, sorry, that's often the reality, and that's what happens. But you can't apply that to Quaive, and I've seen people get burned by that, because in Quaive we have a completely different philosophy. We do custom work for customers — that's part of our business model — but we do it in a different way. We take their requirements, we analyze their requirements, and we come up with generic solutions that extend the generic product, and we prototype that in the prototype. That's what the prototype is for. So we first generate a design specification, and with that, even before we've done any Plone development, we can go to the client and say: did we understand what you were looking for? What's your problem? And then we build it into the generic product. So we keep enhancing the generic product, but this is identical for all the customers. The only thing that is different between customers is a very thin layer of configuration. So for any new requirements that we need to satisfy for customers, we come up with a generic solution that we can then tweak via registry settings — switch some things off, switch some things on, maybe add in a few custom types that we can switch on. And then that all goes into a single code base. So there is a customization policy, but it's constrained by the fact that we need to have this centralized vision of where the project is going. And that obviously has implications for the business model, because we can give out Quaive as open source — and we do that. Not the latest version, but you can download Quaive; a lot of the work we did is available as open source currently. But it's a bit of a fear of mine that if people just take that and then start applying old-school Plone practices on top of it, they will not generate satisfied customers. The right way to do it is to come and work with us and find a way to move all of the extra stuff that you need into Quaive, build it into the product, and enhance the product as a whole. That's what I wanted to share. I think I'm also out of time by now. Any questions, any discussion? Yeah? What's the interface between Quaive and Plone, and the rest? No, no, this is a full set of add-ons. So we have something like 20 add-ons, 20 eggs, that are currently sitting on top of Plone. So that's just all the normal Plone stuff. We do a lot of plone.api. We have our own API, but we also interface directly with internals in Plone and do lots of stuff there. So that's the Plone side of the stack. And then we have a separate stack, which is just our front-end prototype development. And then we translate what happens in the prototype.
We translate that into browser views, and we use the prototype HTML as the basis for the Diazo theme, and also the CSS — and the JavaScript is all compiled there and then loaded here. So we manage our front end from there. So you still use dynamically generated HTML? It's all the same. The call flows are exactly the same as you're used to from any Plone project. It's just that, conceptually, the way you approach it is different. But the technology stack is identical. And actually, if you've seen Wolfgang's talk — the whole Mockup approach, which you have now in Plone 5, was actually created by Cornelius, our designer, who came up with this prototype approach. It's actually the same approach there, where you have something outside of Plone which can do the JavaScript interactions, and then you're able to activate that within Plone. It looks like it's really a strong framework in here. Is it really possible to build a complete ERP system this way? I don't believe in that scenario, because I think one of the things that we're doing, which is different from Plone, is that we're very opinionated about the type of user scenarios that we want to tackle. So we have a very specific goal of providing an intuitive web interface for social collaboration. And in my analysis, ERP, which is more like process management and goods management — that's a different paradigm. And if you start mixing these paradigms, it will detract from the quality of the social collaboration interface, I'm afraid. We could integrate with an ERP, but that's a different thing than you're suggesting. I was just going to say, it looks like you've been really busy in the past year, because there's a lot of stuff in there now. Well, I actually didn't show you the latest, because my laptop is behind. Some of the stuff I didn't show you: we now have editing in the stream. We have revamped our security model, so that workspaces are membrane teams, membrane groups. So for anybody who's using Faculty/Staff Directory — we have something like a replica, the complete functionality of that, inside. We're working on the news magazine. And the calendaring system is almost finished now. So we'll have a layered calendaring system where every team has its own calendar, but you can subscribe to all your teams' calendars and aggregate them into your own personal calendar. If we have time? No. But we'll do it at a later conference. So there's a lot of ongoing development. Yeah. Yeah. Thank you. Thank you.
|
A demo of Plone's intranet toolbox and its design-first methodology. An overview of the Quaive digital workplace platform. In addition to a feature demo, Guido will showcase our design-first methodology. This enables key innovations in our technology strategy and business model, and explains why Quaive is not a "normal" Plone add-on but instead a full product on top of Plone, with its own rules of engagement.
|
10.5446/55315 (DOI)
|
Okay, good morning everyone. The long line at the entrance to the center complicated things a bit, so we're off schedule, so I'm going to try to compensate a bit for that. Well, so I'll just get going. My name is Carlos de la Guardia. I've been working with Plone, and before that with Zope, since 1999, so I've seen all the evolution; if you were at the keynote from Eric, that whole memory lane at the end sums things up. Yeah, we've been through a lot. And in the process we've learned that Plone is not always what you want to use for all the things that you want to do. When we were starting with Plone it was like, hey, let's use Plone for everything, and if we have a business application, yeah, we're going to get Plone into that, and let's use Plone for almost static sites. It doesn't matter that requests will take like 20 times longer to complete. So over the years we've been learning about that, and also the world of Python web development has improved considerably. So right now we have several powerful web frameworks and several ways of doing things that we have learned over the years, and that's what I'm going to talk about. Like I said, Plone is great, Plone is a great CMS and you can get lots of things done with it, but sometimes it's just not what you want. For starters, Plone is a CMS, a content management system, and if there's no CM in your S then probably Plone is not exactly what you're looking for. Content management is what it's all about. It has lots of features, but those features are geared toward making a good CMS. So sometimes we have customers that say, yeah, okay, I want Plone, I want a Plone site, but you know, that feature, I don't need it, I need that, I don't need that, and you end up spending half the time in the project just turning features off and moving them out of the way. So lots of features can be good when those features support what you're trying to do. So sometimes it's just not the right fit. The customer is looking for something different, so they would go, yeah, that's very good and I like it, but how about we could do this and this and that and all this, and this and that doesn't have anything to do with actual Plone. So you can do that, and we've done it, but sometimes you just add a lot of stuff into a system that is not built for that. And of course there are many frameworks that are popular these days, and sometimes customers will just say, hey, why don't we do this just like, for example, Django does? Well, this is not Django, this is Plone and we do it this and this and this way. No, no, I want Plone, I like Plone, but how about a relational database? Okay, you don't want Plone. So sometimes when you have a project, and this is something that happened to a lot of Plone companies at the beginning, you want to say for every project that comes your way, yeah, we can do it with Plone, and that's not good. It's not good for you as a developer because you end up doing something that we call fighting the framework, which is: there are these features that are not useful for you and you have to program around them or just plain move them out and fork stuff so that they don't get in the customer's way. And that's not good for you as a developer; you will take more time doing things, you will get more complex development, you will end up costing more to the customer.
It's not good for the customer because even if you get the thing that you want at the end of the day out the door, you will probably have substandard features, because you mangled Plone to do something that it was not built to do, and you will probably get bad usability because of the same, and it's also bad for Plone because it gives Plone a bad reputation. The customer will go, oh yeah, we had this project with Plone and it didn't work and it was a nightmare, and people who listen to them talking about that won't know that it wasn't really a CMS project and it was a developer trying to shoehorn something into a small space; they will only know that Plone is not good. So that's why I think it's important to discuss these things. There are some common situations that you run into when you're building Plone sites as a company: customers can come to you, or if you're doing it inside a university or a company, departments or groups inside the university can come to you and say we need this and we need that. One common request that we get is: we have this static site and it's perfect, but we want to move it, corporate wants us to move everything into Plone, or we need to move this into Plone. That's one case. Another case is: we want Plone, but there's this department that needs to access some information from Plone. We don't want them to have administration rights or anything in Plone, but we want them to have these views into the site, so that's something that doesn't necessarily require Plone. We can work around that. And there's also the case where you want a completely different application. Even if you have a working Plone site, you want something that works with the Plone site but is not the Plone site, and you will want something else to work in conjunction with Plone so that you get, for example, the same users, the same indexes, the same database, stuff like that. So that's another use case where you have to analyze your options carefully. The other one is that you need a completely different thing, and it's good from the beginning to be sure whether Plone is or is not what you want, so you want to really analyze things beforehand. So the first case that I mentioned, when you have to move a static website into Plone, your first question has to be: do we really need to do this? Can we just put it in Apache or something, in a directory, and have people access it through your front end? You don't really have to worry about people confusing URLs, and you don't have to give them the URL. You can put a link directly from your main page. So that's a possibility. Sometimes, though, you don't just want to show that, you want to keep the information from that static website in your own, for example, in the buildout that you have, and you want to take control of it. That's also a valid use case. In that case, you can just use this trick of adding it as a resource, a directory resource inside your site in your Plone configuration, and then access it with a URL that you can also mangle from the front end, and you will get seamless access into the site without having to do all the work of moving everything into Plone. Sometimes you just have to do that, copy and paste stuff into Plone, though like I say, if you can avoid that, it's best. But for small sites that we've moved sometimes, well, you can create a document and just copy and paste stuff in there and it also can work fine.
For larger sites, migration can be troublesome, especially if, have you heard the term spaghetti code, the HTML for the old site is not well structured, to put it mildly; you will have a lot of trouble moving it over. That's why it would be really, really neat if you can just avoid that and put it up front. But there are Plone tools like Transmogrifier that allow you to do migrations in a structured way, even if you have to get into messy HTML to get there. A more interesting case is where you want a simple application to get information from Plone. You probably have these users that are important players in your organization and they want access to some information, but you don't want them looking at some information in Plone, or you don't want to go through all the trouble of training them for that. They just need some information, and you don't want them to have to go into each and every site and log in to get it. Probably you have multiple Plone sites. So sometimes it's good to create a simple application; it can be a simple web application using whatever technology you want. Obviously as Plone developers we usually like to work with Python, but as web developers we usually have to work also with JavaScript. There's almost no escape from JavaScript these days. So you can create a simple application with JavaScript that uses, for example, the Plone REST API, which is a pretty nice product that allows you to talk to your Plone sites and get information using requests, and that could help you get information out of your sites and create, for example, a simple dashboard where you can just talk to Plone, even using JavaScript purely on the client side, get some information from Plone and display it on a nice web page. So that's a nice way of getting things done. Sometimes the information you want is not really accessible through the Plone REST API, because you want, for example, a specific search, or you want to combine information from several sites. In that case, something that we have done is getting into the Plone site and adding some views that are specifically for a JSON client. For example, you create a couple of Plone views that do exactly what you want and you use the APIs to get access to those views, and you can get pretty customized dashboards that way without having to get into more elaborate Plone development. If you can avoid it, please don't use web scraping. That's a nightmare for you and there's really no sense in it; we have tools that allow you to get information. Like I said, to create that application you can use lightweight things. We like Python, but you could use anything if you really want to, like JavaScript, which is good. And usually these days, Python and React.js or Angular are pretty good matches for getting your site going. The more complex case is when you have a Plone site, and it's a complex Plone site, and you want to integrate with other technologies. That's a super common case where you have things like Salesforce, or you have a front-end shopping cart, or you have a database or a common user store somewhere else that you need to integrate with. And so in this case, you would need to look much deeper into the requirements. It might be possible to do some of these things with Plone, but it might be a lot easier and faster to get it done with other tools.
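To make the dashboard idea above a bit more concrete, here is a minimal sketch of pulling content out of a Plone site through plone.restapi with the Python requests library; the site URL, credentials and query are placeholders, not anything from the talk.

```python
# A minimal sketch of talking to plone.restapi from a small client application.
# The site URL, the credentials and the query are placeholders.
import requests

PLONE_SITE = "https://intranet.example.org/Plone"  # hypothetical site URL


def recent_news(limit=5):
    """Return (title, url) pairs for the most recently published News Items."""
    response = requests.get(
        PLONE_SITE + "/@search",
        params={
            "portal_type": "News Item",
            "sort_on": "effective",
            "sort_order": "descending",
            "sort_limit": limit,
        },
        headers={"Accept": "application/json"},  # ask plone.restapi for JSON
        auth=("dashboard-user", "secret"),        # or leave out for public content
    )
    response.raise_for_status()
    return [(item["title"], item["@id"]) for item in response.json()["items"]]


if __name__ == "__main__":
    for title, url in recent_news():
        print(title, "->", url)
```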
So it's very important, before you begin to do something like that, to really get your requirements straight, and if you're working with another integrator, for example if you're working with Salesforce, you need to get your Salesforce people and your Plone people and everyone in the same place and try to get a plan for what features you will need from each system. If there are other pieces, like for example an ancient library system that you need to integrate into what you are doing, you have to do your best to get the people that handle those systems into the same room and try to get a plan for how to talk to them, or at least find out how to talk to the different systems so that you are able to get that going. In this case, using the REST API or something like that might not be the best bet, because sometimes you need a lot of information from Plone and you have to coordinate across different systems. So it would be better in this case to do something different. I mean, if you are going to go to Plone every time you need something, it can get pretty expensive. So what we usually do is synchronize information from Plone into some other system using Celery, for example, or other queuing systems. Also Memcached can be a good thing; if you move some information into that, you will get faster responses. So usually you can have a process going at night to synchronize information. It can be daily or, for some kinds of information, it can be weekly. It depends on the requirements. But that's a good strategy. One other thing that can be useful is to use a shared catalog among different applications or sites, for example using Elasticsearch or Solr. Those are very good tools. They are fast, built for that, and they allow you to do things that the Plone catalog doesn't, for example search suggestions or other nice search features. There's a great case study of this sort of situation at this conference. Today at 2:50, David Glick will be giving a talk about integrating Pyramid and Plone and lots of other things together into a large project. You would do well to go and look at that if you're interested in this subject, because that's a really nice use case and it goes into depth that I cannot go into at this point. The other case is when you just need a completely different application. Like I said at the beginning, you're not forced to use Plone for everything. We won't think less of you. We won't feel sad that you chose not to use Plone for a specific application, because we know that a happy customer that is doing what is best for the business is better than forcing everyone to use Plone for everything. There are many options. One way of getting a handle on what exactly you need, that we like to use, is decisions: how many decisions do you want your web framework or your web application to make for you before you even start? There are full stack things like Django that already offer lots of functionality out of the box, but they already picked an ORM, they already picked the template technologies, they already picked several tools that are part of the arsenal for Django. So if you're going to use Django, you're going to have to work with that. So it's good if, before you get started, you know whether you want to go that way. There are other frameworks that do almost nothing, just give you some views, templating, access to a database, and that's almost it, like Flask for example.
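As a rough illustration of the shared catalog idea mentioned above, here is a small sketch using the elasticsearch Python client; the index name and fields are invented for the example, and in practice the Plone side would typically be fed by something like collective.elasticsearch rather than by hand.

```python
# A rough sketch of a shared catalog: several applications push documents into
# one Elasticsearch index and query it with full-text search.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])


def index_document(uid, title, text, source_site):
    """Add or update one document in the shared index."""
    es.index(index="shared-catalog", id=uid, body={
        "title": title,
        "text": text,
        "site": source_site,  # which application the document came from
    })


def search(query, source_site=None, size=10):
    """Full-text search across every application, optionally limited to one site."""
    must = [{"match": {"text": query}}]
    if source_site:
        must.append({"term": {"site": source_site}})
    result = es.search(index="shared-catalog",
                       body={"query": {"bool": {"must": must}}, "size": size})
    return [hit["_source"]["title"] for hit in result["hits"]["hits"]]
```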
You have to weigh your options carefully and decide how much you want done for you, how much you can take that's already ready for what you need. And then what we like to do is find three or four alternatives for a given thing, create a table and compare each one along the axes that are most important to you in a project. So you create that, you go to the customer and you lay out the different options, and try to make the best pick. If you have people who like that, or a development department, it's good if they try stuff out now and then. For example, for Python, we found that it's really, really good if you can send people to PyCon or a conference like that, because they get a huge view of what's available and what people are doing, and that helps. Even if you're using just Plone, having a connection into the larger world of Python can be very helpful to decide exactly where to look or what to look for when you're presented with a case where you have to do something else. So make a short list and try things out; that is useful. When you decide that you have to use something else, what you use depends on the type of project. Like I said before, Django is a very powerful thing that's ready to roll and can get you rolling right away. Pyramid is another framework that is very powerful, but it's much more flexible than Django. Flask, I put a question mark there because in our experience it's a good framework to get started quickly with something, but if you want to grow, you almost always have to just throw it all away and start differently. Even Flask's own documentation states that when you want to do these larger things, forget about the things that we explained in chapter one and let's go with this. So that's why I wouldn't recommend going with Flask if you believe your project can grow. For one-off applications, Flask is very good, but you know that sometimes one-off applications tend to last much longer than intended, so in our projects we sometimes encounter one-off applications that have been running for five, ten years. You have to be careful with that. There are many options, and this is where I make a shameless plug. That's a book from O'Reilly, but it's free, so you can get it right away if you go to that URL, and I wrote it, and it was published this year, so it's recent information about the different Python web frameworks available. It covers most of them in little detail, like 40 something, 40 frameworks, but it goes into detail on the most important ones according to popularity of downloads on the Python sites, which are, for example, Django, Flask and Pyramid, among others. So if you want to take a look at what's available there, that's a good place to start, if I may say so myself, sorry. This is the book that you get; it's a PDF, so you can download it and take a look right away. I tried to come up with some common criteria to look at the frameworks, and then there's a list of all the modern frameworks that are there with different criteria, including a relative popularity rank that I gave, so you get a sense of how many people are using each one. More stars mean more users, and whether it's compatible with different versions of Python and all of that, so as you can see there's lots of information there. And then it goes into detail with Django, giving you code samples and stuff about what's good in it, when you could use it and more. There's also Flask and Pyramid, and there's also Tornado, for example, for asynchronous work, and others.
So at least it will let you know how the field is doing these days. And going back: having said that, we believe Pyramid is a good fit for Plone developers for several reasons. One of those is that it's really part of the family. The Pyramid developers were Zope and Plone developers, so when they developed Pyramid they took along some of the stuff that they liked most about Zope and about Plone, and they also took along some tools. So for example the page templates that you use in Plone, the same ones can be used in Pyramid. So that's a good thing from the start. And you have similar concepts: the way we talk about things in Plone, the mechanism that we use for traversal, the context that we use when we're programming a view, the concepts that are part of the Zope component architecture that Plone is based on; all of those things can be used in Pyramid. So if you are going to work on both things simultaneously, sometimes it's good to have common ground, and Pyramid gives you that. Pyramid is also super flexible, so you can mix and match several technologies pretty easily, and it's a great framework for gluing things together. So when you have those integration projects where you need to talk to different pieces of software here and there, Pyramid is a very good choice, we think. One thing where it excels is as a backend for a React.js or Angular application. Pyramid has great support for having different views that render JSON automatically, for example, and it is a very, very nice fit for JavaScript-heavy applications and especially frontends. Like I said, if you go to the case study at 2:50 from David, you will take a look at this and other interesting things about Pyramid. And well, that's it for me. I wanted to get the conference schedule back on track and I think this does it. So please help us improve; there's a Ruby application that you can use to give your opinion about the conference. And well, my name is Carlos. That's my email and my Twitter. And if you have any questions, now is the time for that. Thank you. Yes, Kim? So we haven't really started anything in Plone recently that we don't feel is really, really a nice fit for Plone. For a number of years we've been following this move-it-to-something-else-if-possible philosophy, because we found that it works really well for us. So I wouldn't have a completely new case that I can think of. I don't know if you can think of something. Yeah, yeah, because if you really are careful in what you want to do. What we have had recently is several cases where we decided to go with something else right from the beginning, or at least present the customer with some options. I do have a case where we had a customer that wanted to go with something else, and we suggested that he consider Plone and he refused. And we thought that Plone would be a good fit. So we ended up doing that with something else. So that went in the other direction. Yeah. You're right, it's probably not surprising that since we are a Plone-focused company, and Django for example is so popular, sometimes we end up on the other side of the coin. People want to use Django and we probably don't want to use it for several reasons, but sometimes the customer just wants to use it. So there's no way around that. Yeah, especially for the very first case I presented, it would be super easy to get something going with a headless server and a JavaScript or Python lightweight frontend. I think that's a possibility.
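To illustrate the earlier point about Pyramid views that render JSON automatically, here is a minimal single-file sketch; the route and the data are invented for the example.

```python
# A minimal Pyramid application: the built-in 'json' renderer serializes
# whatever the view returns, so the same view code can serve a JS frontend.
from wsgiref.simple_server import make_server
from pyramid.config import Configurator


def api_projects(request):
    # The 'json' renderer turns this dictionary into a JSON response.
    return {"projects": [{"id": 1, "name": "Intranet"},
                         {"id": 2, "name": "Public site"}]}


if __name__ == "__main__":
    config = Configurator()
    config.add_route("api_projects", "/api/projects")
    config.add_view(api_projects, route_name="api_projects", renderer="json")
    app = config.make_wsgi_app()
    make_server("0.0.0.0", 6543, app).serve_forever()
```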
And since Plone, like you say, can be headless and has a set schema for content, there's nothing stopping you from using Pyramid, for example, to work with that as well. So I think they can work together, and it would be something else to consider when starting a small application. Yes? That's a very good question: Flask versus Bottle? Okay. Well, Bottle is intended to be just a one-file thing that you can easily move somewhere else and get your development going. In general, a Bottle application will not even have modules; you just have a single file and try to get everything done from there. Even the Bottle developers say on the website that if you need something more complex, it's probably better to use something else. Flask could be a good option for starting something. Like I said, Flask is a very good framework and people like it very much, but it's really not built for that. It has some problems with large deployments because of the way it's constructed. It has globals and it has some other things that get in the way of proper testing or proper development when you expand your application. So that's also something that, like I said before, is in their own book. The Flask book tells you: you learned to do things this way in the first six chapters, but from here on we're going to do it this way because of performance. Pyramid, for example, is always the same no matter what. You can start with a small thing and you can grow it into a large thing, and there's no impact in that sense. Between Flask and Bottle, I would say Flask is the better choice right now because it's more popular, lots of people use it, it has lots of extensions that you can use, and it's very, very good for small applications. So if that's what you want to do, a small application, for example a lightweight thing to connect to Plone, that's a very good way to go, and Flask is a really nice choice in that regard. I would go with that over Bottle. Thank you very much.
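As a sketch of that last point, here is the kind of small Flask application that just pulls a bit of content out of one or more Plone sites over plone.restapi; the site URLs are placeholders and error handling is left out.

```python
# A tiny Flask app that aggregates news from a couple of Plone sites through
# plone.restapi. Site URLs are placeholders; no error handling for brevity.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

SITES = ["https://plone-one.example.org", "https://plone-two.example.org"]


@app.route("/news")
def combined_news():
    items = []
    for site in SITES:
        response = requests.get(
            site + "/@search",
            params={"portal_type": "News Item", "sort_on": "effective"},
            headers={"Accept": "application/json"},
        )
        items.extend(
            {"title": item["title"], "url": item["@id"], "source": site}
            for item in response.json().get("items", [])
        )
    return jsonify(items)


if __name__ == "__main__":
    app.run(port=5000)
```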
|
What to use for web applications without a content management component? Carlos gives us a tour of the Python web framework landscape. He wrote the book on the subject.
|
10.5446/55317 (DOI)
|
Okay, I think we're going to get started. But maybe I would start with a little story. Jens Klein and I were talking about castles a little bit. They just had the castle sprint like a couple of months ago, at this sort of castle that's been built on and added to over the years. Maybe a little bit more Plone-ish type of castle, not so symmetrical like this. And there are different types of castles that we're going to talk about. But I've never actually been to a castle. And yet here I am, going to talk to you about building a castle of sorts. Anyways, Wildcard. Well, why don't I start with a little bit about myself. I work at Wildcard Corp. Been doing Plone, Python, web solutions, highly secure web solutions for federal agencies, for many years now. And I've been working on Plone slightly longer than I've been working at Wildcard. I'm a core contributor. I lead the security team. I've been on the UI team and framework teams in the past. And so this is a little, yeah. And the accordion, okay, that's during the Arnhem anniversary sprint. Ramon plays it, so he was trying to teach me a little bit, at which I failed. So what is Castle CMS? I guess what we've been saying is that it's a distribution of Plone, something that Wildcard has been working on for the past six months, I guess. And it's a package of things that we've wanted to do with Plone. So it sort of packages up everything that we've done in the past. And it's one nice thing that we can deploy for our customers and sell and market to our customers. It's for large, complex websites. It's not maybe for the small site. It's for lots of content editors and sites where you have a lot of content to manage. And it's built on top of Plone. It's just our customizations of Plone, and then you get Castle. And I'll talk about what some of those things are, how we did it and what the future is. And so I want to get through a lot of the features, but I first want to set things up here so we can get you to understand mostly what is provided with Castle. It is not a fork. It won't ever be a fork of Plone. We'll never survive if Castle is ever a fork. We have to work within the community. And so I'll talk more about future goals, what's going to happen with it and how we're going to, hopefully, use it within the community to continue to innovate with Plone. And within Plone, you've got to think, there are a lot of different people from all over the world. We all have different opinions of how we want Plone to do certain things. And so Castle is a very opinionated version of Plone. A lot of things we do, maybe Hector would really be angry about, and Andreas Jung, and so it's a way for Wildcard to do the really opinionated things that we want to do with Plone. And maybe if we get enough people convinced that this is the way to do things, we can incorporate that back into Plone. Somewhere to innovate, to try new ideas, to try things that maybe are a little bit controversial. Things that we want to do and that maybe aren't always so easy to package into add-ons for Plone. And if you have all these various add-ons, you have to be able to integrate the add-ons together. And sometimes that process of having a huge add-on ecosystem is really hard to manage for someone who wants to deploy these advanced features on many, many different sites as one package.
More than that, performance is really important to us because we have large sites that can get hit by DDoS attacks and things like that. So we have integrations that really help with performance. Those include Cloudflare; this is really the only CDN we support, but we have invalidation support and support for configuration through the Cloudflare API. We have Elasticsearch support out of the box, so all of your content is indexed by Elasticsearch, and when you go to the search form, it all goes through Elasticsearch. In addition to just regular indexing and searching, we have an asynchronous server implementation of the search API. So you can actually have all your search requests go through a different server than Plone's, so you can handle a lot more. Because these are going to be uncached requests, you need to be able to get a lot more performance out of your search form than out of any other parts of your site. We also heavily leverage Redis. This is for application level caching. Redis is nice because if you cache things at the application level, you can share the cache between multiple threads. And we also use it for things that you would maybe normally write to the ZODB, but that are a bit more volatile; we use Redis instead of the ZODB to store those things, like sessions and information on what users are doing on the site. We also leverage ZRS for more performance, so you can replicate to multiple data centers maybe, or just so you have multiple copies of the ZEO server so you can handle more requests. And then, how did we integrate all this? Well, we leveraged ZCA adapters and we kind of overrode a lot of the things in Plone that we needed to. The Zope component architecture is really nice for most of those things, so it was easy to do some of it; and also z3c.unconfigure, what was it again, I forgot the name of it for a moment, the one where you can un-configure ZCML: z3c.unconfigure. And then just a lot of React and HTML and JavaScript giving you a UI. So an opinionated UI of how things should be, how you should use Plone. And then when we couldn't use ZCA, we used monkey patching. That's with collective.monkeypatcher and just manually patching. So this is an example of patching search results so they don't pay attention to trashed content, because we have recycling bin support. Security is really important for us too. Some of the things we implemented for security are: integrated two-factor auth; we have this thing called an application shield, so if you turn it on, you can't access any of the content on the site until you've logged in first; lockout support, so if you enter your password wrong too many times, it will lock you out; stripping metadata from uploaded files automatically; and root user restrictions, so you can only log in with a root Zope user at the root of Zope. It doesn't allow you to log in as, let's say, the root admin user in the Plone site itself. So it's just a little bit more secure, so the root users of Zope are unable to log in directly to a site. And so, I'm trying to set the stage, and then there's going to be a lot of demos and I'll try to explain things through demos. I think people like demos and seeing things, and it looks fun and stuff.
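As a rough reconstruction of the kind of patch shown on that slide (not the actual Castle code), here is what hiding recycle-bin content from catalog searches with a manual monkey patch might look like; the `trashed` index name is an assumption, and in Castle this sort of thing is wired up with collective.monkeypatcher or applied manually, as mentioned above.

```python
# A rough sketch: wrap the stock catalog search so content flagged as trashed
# by a recycle-bin feature never shows up in normal searches.
# The `trashed` index is assumed to exist for this illustration.
from Products.CMFPlone.CatalogTool import CatalogTool

_original_searchResults = CatalogTool.searchResults


def searchResults(self, REQUEST=None, **kw):
    # Unless the caller explicitly asks for trashed content, filter it out.
    if "trashed" not in kw:
        kw["trashed"] = False
    return _original_searchResults(self, REQUEST, **kw)


CatalogTool.searchResults = searchResults
```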
This is our toolbar. We moved notifications into their own area, and we have user actions, site actions, and then everything content related is on the left. And we have no second level drop downs on the left. Everything happens in an overlay if you do context type actions on the left. Adding content is a little different. You get an overlay and you can see interactively what your URL and your folder structure are going to be. So it's a lot easier for users to know what's going on when they're creating content, where it's going to be, and it will automatically create folders for you if they don't exist. And you can have it public by default. It has the workflow state right there for you. Another nice thing about it: you can customize the ID, and everything is automatically generated by default. And you can create and edit immediately, or you can create multiple content items and then edit them later. So it keeps it, and I can create another page, or I can go and view and edit it. And then when you're editing, everything is Mosaic. We don't have these default pages, we don't have the display menu. Everything is Mosaic for our users. So they start editing a page immediately, they're in Mosaic. Uploading is similar, but you have an upload tab. It's important for our type of clients that when they upload things, they're forced to enter some metadata, and which metadata is forced is configurable. You can set constraints on what things they need to enter. They also have focal point support; you can't really see it super well here, but if you select a location on the image, you can say where the focal point of the image is. So when you display the image in certain places on your site, it will auto crop to that focal point depending on the width and height dimensions you're displaying the image at. I'll show more of that in a bit here. The workflow menu: we have a built-in quality check that does some 508 compliance checks and making sure you have a nice title and description. And it's all in one menu, so hopefully it encourages the users to use the commenting when they're actually changing workflow states. Like I said, everything is Mosaic. This means we have no display menu, no default pages, because that's really confusing for users, having to create a folder and set a default page for that folder. We have no collections, because collections just become listings on pages. And it's just your typical Mosaic content creating experience here. We updated the content selection widget as well. You'll see it more in some of the other examples. So you have a nicer browser for browsing content on your site. Video tiles: all videos are automatically converted to web compatible formats when they're uploaded, and that's done asynchronously so it doesn't affect the processing of the site. The videos are all responsive, so they use different sizes for different mobile devices. And it also has support for YouTube, which I think it will show here. Yes, yes. It does most of them pretty well. It has trouble with, I haven't figured out how to convert them better, but it has trouble with MOVs, making them look nice when you convert them. Sorry, had to do the New England clam chowder thing. I think it's the white one. Only some people would probably know that joke. And we have mapping with all of our content. So a mapping tile, and also all content can have mapping coordinates on it.
So you can just select items on your site, or you can add your own markers directly and they'll show up on the map, and you can add descriptions and click on them. How many of you are familiar with Green Bay? My hometown. So that's where Kim lives, Kim Nguyen. Oshkosh. Oshkosh. Yeah, you get the point, right? Focal point images are really cool. They allow your users to not have to deal with cropping all the different sizes and everything; otherwise what they'll do is upload multiple copies of the same image just to have it displayed differently in different locations. You just click on where in the image you want the focal point and then it will focus on that area. Our particular tile that we use for it gives you different display types: the display is portrait or square or landscape. And so you can see now it's focusing on the tail, but if I go back to the beak, then it'll focus there. Yeah, yeah. And it's doing it client side. I mean, you're setting the focal point and storing it in the database, but it's doing it with JavaScript to place the image where it needs to be. And what you can also do with this, and what Castle does, is load initial smaller images for mobile devices and such, and if you have a larger screen you can scale up depending on what size you need to fit the dimensions. Anyways, yeah. We also have some social media tiles. This is just sort of tweets, timelines, Facebook posts or Facebook pages, and I think we have Pinterest, and it's maybe easy stuff, but it's nice to have it integrated out of the box. For all of the tiles, the JavaScript behind them is integrated with patterns. So everything is a pattern inside those tiles, more or less, and the way it updates while you're dragging or dropping them is that it's re-initializing the HTML pattern inside that. Alright, we also have a preview mode where you can quickly see the different sizes. Searching I kind of talked about already. Using Elasticsearch, it's really fast, you can scale really well, and you can get rid of the full text indexes in Plone itself. We also support search result pinning and Google Analytics integration, to the point where it will scan Google Analytics data to see which pages are the most popular and bump them up in the rankings. And we also bump content that has better rankings because it has more of a social media impact. So we have a lot of different configurations to be able to read social media data like shares, like tweets and Facebook shares and LinkedIn, things like that. And we also have suggestions, and to be able to do a lot of this stuff we have a plug-in for Elasticsearch to do the custom rankings. So it's a Java Elasticsearch plug-in. And I already mentioned the high-performance search server: you can have a search form point at a different search server that does not go through Plone. And that will only support anonymous content. It has prepackaged queries to Elasticsearch that only return anonymous content, so it isn't used when you're logged into the site; it's only for anonymous users. We have session management, and this is done with Redis, like I had mentioned. So you can see all the logged in users on your site and you can log them out or end their sessions. A really nice thing for some of our clients is the auditing. You can see all the people who have done things on your site and what they did.
And so you can sort of, you know, point fingers and say you did this. It has different types of auditing that it tracks. This is leveraging Elasticsearch as well, so it's pretty fast. And you can export the results into a CSV if you want, statistics. Yeah. So it's like logged in, logged out, content edited, lots of different things, whatever we thought was interesting to track. We have, I think, a pretty nice feature: you can log in as another user. So you can debug permission issues or whatever, see what's going on with the user. If they can't do something, you can actually try it yourself. Some of the other things that we have: I talked a little bit about the Google Analytics integration, with scanning Google Analytics data and having that affect your search results. We also have screens to give you your analytics data and query it within the CMS; you don't have to go to Google. And S3, some really nice S3 integrations where you can automatically push large files off to S3 so they're not in your database. And we also have archiving of content in S3, so you can have a bucket in S3 that will have archived content pushed there, and then Plone will know where it is and will just redirect to S3 instead of serving it from your site, so you can keep your sites a little bit more lean with old content that you don't feel like updating. For SMS, for two-factor auth we have SMS support, and that's using Twilio, and it's also there so you can do some registration steps to confirm user accounts with SMS. And then, like I said, social media: it will scan certain social media platforms to get data on how popular content is and everything. This is an example analytics screen. I didn't want to show you the whole thing because it's a client's. But there are a bunch of different dimensions that you can pull down and aggregate different things with, and you can do real-time data or historical data or check out social media data. Some other things we do: we have a recycling bin, I think I touched on that. We have Celery integration. If you don't know what Celery is, it's being able to do asynchronous tasks so they're not blocking the UI. So any large copy operations, or video conversion, or pushing large files over to AWS, we put all of that into asynchronous tasks. We override regular Plone copy and paste operations and things like that. If we see you're copying more than 100 things, well, then we'll put this into an asynchronous task instead of just performing it right away. Alias management for doing all your redirects, and business metadata, that's the JSON-LD data, some businesses need that. So what does this mean? This goes into sort of implementation details, because everything is Mosaic and I didn't talk much about the details of what that means with layouts and Mosaic layouts. That's a whole other talk. I didn't want to get into the nitty gritty stuff here so much, but we don't even really use Diazo. You could, but we don't, because it's really nice the way layouts work when everything is a tile. And in Castle you don't need portlets, you don't need default pages. So it makes things a little more simple, because everything on the page that's rendered is just a tile. The content area is a bunch of tiles, all the stuff around it is a bunch of tiles. We replace portlets with tiles. We have little tile group managers to manage those.
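Circling back to the asynchronous copy operations just mentioned, here is a hedged sketch, with all names invented, of that pattern: if an operation touches too many objects, hand it off to a Celery task instead of doing it in the request, and notify the user afterwards.

```python
# A sketch of the "queue it if it's big" pattern: not the real Castle code.
from celery import Celery

app = Celery("castle_tasks", broker="redis://localhost:6379/0")

LARGE_COPY_THRESHOLD = 100


def perform_copy(source_path, target_path):
    """Placeholder for the real copy logic (ZODB transaction and so on)."""
    print("copying %s -> %s" % (source_path, target_path))


@app.task
def copy_tree(source_path, target_path, user_email):
    perform_copy(source_path, target_path)
    # The real task would send a notification mail here once the work is done.
    print("notify %s: your copy operation has finished" % user_email)


def copy_requested(source_path, target_path, object_count, user_email):
    """Called from the UI: queue the work if the folder is large."""
    if object_count > LARGE_COPY_THRESHOLD:
        copy_tree.delay(source_path, target_path, user_email)
        return "queued"
    perform_copy(source_path, target_path)
    return "done"
```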
So it essentially feels like portlets, but it's still just the same tile infrastructure. These are things we could think about for Plone itself as well. I don't know if anybody would be happy with getting rid of Diazo. But maybe they'd realize they don't need it eventually if you have this sort of different approach. You still have the theming editor, you still edit your themes in the same editor and everything, but you don't need the rules to transform the content as much. All right, so what's in store for the future then? We're going to open source it. We just can't just throw the code out there; it would just make people mad, because they won't be able to install it, because everything that we have to install is our own private stuff, like Ansible scripts and everything, and we can't open source that stuff. So we need to get to a point where we can feel good about the documentation and the tooling around installing it, to be able to release it. Maybe we could provide a Docker image that people could test it out on. But if anybody wants to test it, we can make a test instance for you to play with as well. Some things that we know are on the roadmap: I have a client that really wants to have integrated chat. We're looking into integrating Rocket.Chat into it, trying. Kind of hard, but ask Sam about it, Sam Schwartz. We still need Mosaic to be better in some circumstances. It's pretty good, and the editors are really good at doing some pretty complicated page designs with it, but it's quirky and it's hard to do well. So it still needs some work; we'd like to continue to try to improve that. We'd love to have built-in A/B testing. This is another very difficult thing to do well, especially with caching, especially if you wanted to A/B test the homepage. Caching is hard. I can see branching on specific content on the site, but A/B testing the homepage is very difficult. We want to continue to refine the UI, of course; I'm not a designer, so we're still working on it. And then keep working on richer tiles for it. Maybe long-term, plone.server, make this all Angular. I don't know, scary stuff, right? That was kind of what I wanted to talk about. And now I want to open it up for questions to see what you guys think of it. If you have any questions about details, this is your opportunity, and keep annoying me about when stuff is going to be open-sourced, because maybe that will help me to actually do it. Okay. Sean? Yeah. We still do that. We do that as well. Oh, he's asking about portlets with inheritance, how you can have one portlet in one section and have it be on all the sub-items in that section. We do that as well. The code is like 100 lines. It's not that complicated to do. You just look up the parents and figure out what to display in context and stuff. We might have a few constraints, because it might not be as great as the multi-adapters of portlet managers and all that stuff. Yeah. You can do that. It's fine. Does, yeah. Have you ever thought of generating static HTML for anonymous? Yes. He asked if I ever thought about generating static HTML and just publishing that on S3 instead of this. So your site would just be served from S3 only, and every single time that a page is edited, you would push to S3. It's hard, right?
Because you don't know, if you add a folder here, where that's being pulled in on hundreds of other spots on your site. So you almost have to be continuously pushing data. It's an interesting idea, though. Another idea related to that is, if you have a CDN where you can push data to, you could just directly push it to your CDN and have that be your static storage. You first. Yeah. Maybe; see, the thing is, it's pretty tightly coupled right now. So I might split it out, maybe, and that might help get to the full open sourcing part of it. Yeah. Oh, go ahead, I guess. Maybe out or using. No, I mean we kind of ripped wildcard.media out, but it's similar to what that does. Yeah, it's not using that. Yeah. So he's asking what we're doing for video. There's a wildcard.media package. We took a lot of what that was doing and put it into Castle, but it's not the wildcard.media package. Something different. Oh, did you have a question? Right. Yes, there are a lot of things I need. We need React in core, and then I even rewrote the query string widget in React as well, because the existing one kind of sucks, it was just really buggy, you know, so it's not so buggy anymore, and I did some things with TinyMCE as well. So yeah, we need React in core and we need to sort of bring some of these things back. So I guess I would just write to the Redis cache if I needed any progress reporting. Otherwise, after it's finished, we send emails out to the user telling them their task is finished. So I should have had a video for that. What happens in the UI is, if you copy a folder that has 500 things in it, it'll say: hey, this is a really large folder. We put it in a task. We'll let you know when it's done. And then they get an email. David? Yes. Yes, we're using React. The question is, I saw you were using Angular 2, but we're using React right now. React is really good at creating small components, and Angular is really good at creating a whole application. So for our use case, we were plugging in bits here and there and really just small components. A lot of our React code is actually patterns that initialize a React component. So that's why we used React. React is really good. Angular is really good. If you're going to create a whole CMS app that's in JavaScript, Angular is a better fit. I think. Maybe I'm wrong. We can fight. But so that's just my comment, I guess. Anything else? No? He's asking about the asynchronous integration into Plone, whether that's too much code or whether we could integrate that into Plone core. I would like to figure out a good way to integrate it into core that is pluggable, so it's not opinionated like mine is. So anything that happened there would be in all those spots where I monkey patch or whatever to make it happen. Those would need to be replaced in Plone core with generic code where you could just hook in and say, hey, I am able to perform an asynchronous task in this case, and then be able to do that. So that's something that would need to be developed; it's not there right now. So I kind of lied. The question was collections. I got rid of them. You're right. The use case that we have for getting rid of collections: we get rid of them most of the time, but we have a feeds folder where you can actually add a collection type to it. That's actually a feed, because that's the one thing we had where you actually wanted a unique URL, as if you had a feed URL.
Because I was like, otherwise, how do you do that? So I mean, you can still have it. For most of the UI, we have it hidden. We have Selenium tests. We didn't even, yeah, I don't think we got rid of all the robot tests. We just do plain Selenium, and then we have a CI that will test the whole stack. Yeah. It's horrible to debug. Yeah, I know. Right. Thank you. It's just a story. He's asking about using Celery, is there any reason why. Looking back at it now, I probably wish I hadn't, I guess. I don't think it would be hard to switch it out with anything else. Celery is fine, but it's probably more than I want. We need something simpler. Yeah. Yeah, I agree. That's probably something that we can change eventually. David? You might be right. There are some people, though, who, if they try something out and all this is promised and they can't freaking install it or something, they'll be like, oh, this is stupid, this is a lie or something. Yeah. Is there a question over here? No? Not at all. Sorry. We've done it, but it was like a specific client thing. It was all custom scripts. Yeah. Really hard. Really hard. Mikko? I haven't done any testing. He's asking how heavy it is, what the resource requirements are. I haven't tested for any minimal system requirements or anything like that. But yeah, you want decent servers to be able to do this. Can you test some more? We don't run it on Travis. I don't know. It runs in a Vagrant. Slowly. Yes, I do. I do it all in a Vagrant machine. That's how we do it. One Vagrant machine. One Vagrant machine. Right. Well, any other questions you can ask later afterwards as well. Thank you for your time. Thanks for coming.
|
An opinionated distribution of Plone. A talk focused on the decisions and innovations of the Castle CMS Plone distribution. Special attention will be given to Mosaic, tiles, Redis, Elasticsearch and UI/UX.
|
10.5446/55318 (DOI)
|
Yeah, so welcome everyone. I am Prakhar, and today I will show you the work I did in Google Summer of Code this time, during the summer. It was mostly improving easyform and making it compatible with Plone 5. So I will just describe the work that I have done. First I will talk about what forms are in Plone. Earlier we had PloneFormGen; it is quite stable, it is quite functional and it has a lot of functionality. Then we have easyform, which is mostly a Dexterity content type, while PloneFormGen is mostly Archetypes. So yeah, these are the forms in Plone, mainly basic forms. So it is always a fight over porting, Archetypes or Dexterity, which do we choose. Now in Plone 5, since the last 3 or 4 releases, we are basically working on Dexterity only, so we are deprecating Archetypes content types. So yeah, there is a package named collective.easyform that provides forms with Dexterity content types, though it has fewer functionalities and is less stable than PFG. So where earlier on websites we used PFG for creating forms, for Dexterity we use easyform. So yeah, the main question is why we have worked on that project if we already have easyform, it is for Dexterity, and it is sort of a clone of PFG from Archetypes to Dexterity. The problem is, there are a lot of problems with easyform; we could not use it directly in the state it was in when we started working on it five months ago. It was not ready to be used on Plone sites, especially Plone 5; there were a lot of problems. So yeah, this is one of them: test cases were failing at the initial stage. There are a lot of flaws in the functionality of easyform, and there are also a lot of problems with, let us say for example, mailers and all. They break sometimes, they are not working properly, and they are not behaving like what we have expected from them. Yeah, so basically I did not create a project from scratch; the project was already there, and I have done a lot of enhancements to it. These enhancements include basically improving test cases for easyform, and also improving a lot of functionality, like how the form should look on the website and how the functionality should be laid out so that users can edit forms and use those features. It was not optimized, and it is not fully optimized right now either, but yeah, we have tried to improve those things from what they were earlier. Yeah, so the first thing I worked on was improving that mailer thing. What I faced when I started installing collective.easyform was that the mailer was breaking, and it was showing some Zope errors and all, that you cannot proceed with this thing. So there were a few changes in the code that I did, and now the mailer is working. Also, while we improved the test cases, during improving those test cases we detected a lot of errors in the code base, like: it should not be like this. Let us take as an example the email IDs, the emails for the mailers. If you do not provide an email for that mailer, it should pick the default email address from the site, right? So that was not there. So if anyone forgets to configure the mailer settings and directly tries to create a form with this email ID and all, it was showing an error like: you have not configured the email IDs.
So it should not be like that; it should take the email IDs and configuration of the users, and if they are not defined in the mailer, it should take the default configuration from the website. So yeah, that is how we decided it should work. So while I was improving the test cases, Steve, he was my mentor for the summer, started making the code more readable, more understandable. The code was scattered everywhere in the repository, so he started combining the code, making it more modular and easier for me to start working on it. So first he started shifting all the interfaces to one place, so that I could know what type of functionality the code provides and what we can improve. And also, when I was improving test cases, I detected a lot of functionality flaws, like: it should not be done like this, because in PloneFormGen we do things one way but here we are doing it another way. So should we stick to the current behavior, or should we do it as it is in PloneFormGen? And the main thing is, we do not want to make drastic changes, because users are already using PloneFormGen, right? We do not want users to face a lot of changes; they would be disturbed and they would not like to use easyform. So we do not want to push too many changes onto the users. So we decided to change a few of the functionalities accordingly. So I will talk about that testing thing. So yeah, the first thing, when I started talking with Steve about the project: it was already there and it was a huge repository. It was quite tough for me to get an understanding of the project, what it wants to do, what each function is doing and how it works. So Steve told me: go to the test cases, read the test cases of each module and try to understand what they are doing there, and you will get an understanding of the whole project. So I think it is really important for any project to have test cases, and it really helps, not just to keep the idea of what you thought at the first, where your project should be driven to till the end, but also if someone comes and wants to know about your project and how the functionality works, especially with functions. Though you have a README or docs for that, still, if you want to know what a specific function is doing and what it is doing in the repository, the test cases really help a lot. So these things really helped me a lot, and also what I feel is that test cases really help in keeping track of the project. It was a 3 month project, so there are chances that we thought of something and later came back to that discussion after a month. There are chances that we have missed some points during that whole month, because we were working on some other thing. So test cases really help us a lot, because what we do is we start writing test cases for the things we have decided, like: we should go through this, we should implement this thing. So first we implemented test cases for that, and then we started developing against them. So even if we delay the development a little bit, we know what we have to achieve at the end, because we have test cases and we have to develop things according to the test cases. So these things really help during the project. So when I was working on those test cases there were a lot of improvements in the design. Like, let us say, you can see these fields and actions; I cannot show you.
So these fields and actions are hanging around in the sidebar randomly, without any icons and all; these are for easyform. This is not a good design. This is not there in PloneFormGen: you do not see any of the extra fields in the sidebar hanging around without any icons, and it does not look good. So when we were writing test cases, and browser test cases for the behavior, we decided to move these things to the actions tab. So what we have done is we have shifted the fields and actions for easyform into the main actions tab, and it looks much cleaner: you just go to the easyform, you go to actions, and you can define the fields and actions there. So it is cleaner, better looking. So these are the things that we detected through test cases and tried to improve. So that was one part of the project, and this next part was like a nightmare for me the first time, when I came to know that we have to migrate the forms. So earlier, when I was proposing the project and creating the proposal, I was under the impression that we have to migrate the forms. There are a lot of ways that we migrate Plone sites; we already have a pathway to migrate content in Plone, right? So we can use that path and we can migrate the forms. So yeah, I thought there are a lot of options, we can do anything, we can adopt any of the options and it is going to work fine. So yeah, after discussing with Steve and a lot of people on IRC, the conclusion we came to is that we will give an option to the users in the control panel. There will be a settings option for easyform; users can go there and they can choose an option, like they want to migrate the previously present forms to the new Dexterity content type. So we proceeded with that, and then the main thing comes: after that design decision we have to write the functionality, like how to migrate the content type. So I was thinking about using plone.app.contenttypes, the custom migration thing, but it is not that easy. PFG has its own content types, and the PFG forms are not flat, they are hierarchical. So we cannot just directly pass the content types and migrate them from Archetypes to Dexterity. It is sort of a tree, so we have to go through the tree and check for the fields and types which are custom content types for PloneFormGen, and then we have to migrate them to Dexterity. So yeah, I started looking at plone.app.contenttypes; the answer was no, we cannot directly use that to migrate the forms. So I started discussing with people, and Steve asked, what are the other ways we can use to migrate the forms? He suggested that we should get the idea from plone.app.contenttypes, how they migrate the fields for Plone sites. So we should start writing migrations for separate fields and content types. So here also we started with test cases, and we started migrating a few of the fields, and yeah, those test cases were working, but it is not complete; we have to work a little bit more on that, because PFG is not a small project. There are not just 5 or 10, there are a lot of content types, and we have to work on each individual content type and create a path to migrate those from Archetypes to Dexterity.
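To give an idea of what such a per-field migration could look like, here is a heavily simplified sketch; the type mapping and the way the Archetypes accessors are used here are assumptions for illustration, not the code that was actually written during the summer.

```python
# A simplified sketch: walk the field objects inside a PloneFormGen form
# folder and build equivalent zope.schema fields for a Dexterity easyform.
from zope import schema

# Map PloneFormGen (Archetypes) field portal_types to zope.schema factories.
FIELD_MAP = {
    "FormStringField": schema.TextLine,
    "FormTextField": schema.Text,
    "FormIntegerField": schema.Int,
    "FormBooleanField": schema.Bool,
}


def migrate_fields(form_folder):
    """Return {field id: zope.schema field} for one PFG form folder."""
    new_fields = {}
    for at_field in form_folder.objectValues():
        factory = FIELD_MAP.get(at_field.portal_type)
        if factory is None:
            continue  # field type not handled by this sketch
        required = getattr(at_field, "getRequired", lambda: False)()
        new_fields[at_field.getId()] = factory(
            title=at_field.Title(),
            description=at_field.Description(),
            required=bool(required),
        )
    return new_fields
```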
So we are still working on it; I will have more discussions with Steve about how to proceed with it, and I also had discussions with other people here who put some comments on it. This is a snapshot of the migration we have started writing for fields: we have started migrating the initial fields from Archetypes to Dexterity, like the normal string field from Archetypes to Dexterity and the text field from Archetypes to Dexterity. This is the first sort of migration we started working on; it needs improvement and we are still working on it, but the amount of migration we have written is working, it passes the test cases and it behaves the way we want. Still, a lot of effort is needed to migrate these things. So this was the whole project that I did for the summer. Maybe we implemented less than planned, but it was quite tough for me to get through the project and to understand what it does, because it already existed and I did not create anything from scratch. Getting to know the project, what it wants to do, and why the developers built this thing here and not that thing there, was quite tough for me at the initial stage. But people on IRC, Steve, DHE, and a few other guys like Tom really helped me a lot during the whole time period of the project. As a summary: we have improved the test cases and we have made it stable for Plone 5, so now you can install it and it is not breaking. Apart from that, we have improved some of the design as I told you, like the actions and fields, and we have also discussed a few things, such as whether clicking actions and fields should show a pop-up or redirect to another URL as a normal page. There was also no uninstall profile; we would not need separate install and uninstall profiles if we did not have any migrations, but for the migrations we discussed that we will have a default profile and an uninstall profile for the package. Whenever the package is installed it will create settings for the package in the control panel, which gives users the option to migrate their forms from Archetypes to Dexterity, and when you uninstall it, it will remove that again. We also introduced the framework for how we should start working on migrations; it is not done, but we have started working on it. What I felt is: I did this kind of Google Summer of Code project last year as well, on the safe HTML transform, and this project was on the tougher side for me compared to the earlier one, because earlier I worked on a project from scratch, I had to create my own add-on and work on that. This time I felt that first I needed to get an understanding of the code that is already there and then start working on it. People really supported me during the whole project; for the stupid questions I used to put on IRC, on email and on community.plone.org, they took the time to respond, and Steve, Demi and Tom really helped me a lot. So this is the work that I have done for Google Summer of Code, I just wanted to share it with you, and also please read this thing from Six Feet Up if you get time. Thank you.
|
A 2016 Google Summer of Code project. Prakhar will describe the work he did on collective.easyform as well as share his experiences of Google Summer of Code.
|
10.5446/55319 (DOI)
|
My name is Doug Feffer. We'll be talking about a front end build that ended up sitting on top of Plone without knowing anything about Plone. I knew nothing about Plone. Oh, this thing's not going to work, huh? Oh, it's a line of sight thing? Yeah. Maybe it just doesn't want to work. Well, okay, that's fine. Okay, I'll stay over here. Yeah, it worked when we first hooked it up. That's all right, we will survive. No, that's all right. That's a little too needy. I can do that. Oh, there we go. Okay, cool. All right, back to Plone. I don't know anything about it. Still don't, except that I looked it up the other day when this thing came up in my email box. So I've done a lot of front end only work, front end work, a lot of server side stuff, mostly custom builds; I rarely had to use a CMS with any sophistication. Occasionally I had to use WordPress for something, which was kind of unpleasant. So the world of getting deep into CMS things is kind of new to me, and I certainly never did a high-level integration effort like a big custom site would require. So what we did: I was working with a company at the time called Hard Candy Shell, a design strategy company. They do a lot of app work and big content site redesigns. They did the Slate rebranding redesign, which a lot of people had opinions about, the New Republic, a bunch of things like that. The next thing, probably what we're talking about right now, was the KCRW website. And KCRW, for those that do not know, is kind of a big West Coast music institution; a lot of live bands come through, Radiohead does a thing there and stuff like that. People like them, it's pretty legitimate in that world. But the website was kind of weak, kind of old fashioned, just homemade, like a local radio thing. It didn't really have the backbone that a nicer music platform should have. So Hard Candy Shell was contracted to kind of rebrand it, reposition it, redesign it, shuffle the content around. And they really made it, well, they did, I'm not the designer; they did their whole discovery phase thing and they were out there. So they had the new look and feel, they worked with a branding company to come up with a new actual brand, and we ended up with that one, looking like this, which was just much nicer, right? And a big emphasis on this thing was that it was a real rich media player experience. The idea was it's not like a blog, readers reading about the fact that Radiohead came. It's like: Radiohead was there, and there are all these archives, interviews and things like that. So you should be able to go back and listen to the live show, which they always have, but also dig into all the archives they've got. So that was kind of the premise of that. And what I was doing with Hard Candy Shell at the time: a lot of these design companies, their deliverables after the end of all this research process and working with designers and the brand people, their deliverables to their clients, in this case KCRW, might have been just Photoshop files. This is like four or five years ago; this was much more common. They would just give these companies Photoshop files and be like, yeah, this is what the website should look like. There's a hover effect probably, you know, figure it out. Here's a slideshow also. You can also figure that out.
And so we would drop these things off, or the designers would drop these off with the client, and these CMS people or database people, whoever their dev team was, would then be on the hook to kind of tease out the interactions from just these comps, which doesn't work. It barely worked before, and it especially doesn't work now, because you have to take responsive stuff into account. People want things to touch and slide, right? And there's just too much to convey if you just give someone a big JPEG. So what we started doing, what they started doing, was trying out, really building out, more than a prototype, really building out the whole front end shell, complete with all the hover effects, interactive effects, and responsive behavior, and kind of faking all the different interactions in the client, in the browser, so everyone could take a look at it and see it on their phones, see it on their big screen, small screen, look at it on their web TV, whatever. And you get a much better sense of how this thing is going to work. And also the people that are actually on the hook to implement it, in theory it saves them a little bit of work. The Jazkarta folks might think otherwise, but in theory it saves them a little bit of work because they don't have to actually then go pull in front end guys to figure it all out. They don't have to sit down and hash out the structure because it's already been done. And technically, you know, if I did my job okay enough, all the cross-browser stuff was mostly sorted out and everything that had to be plugged in was mostly ready to go. That's what we got; that was the gist of it. So what I was tasked with was building out this front end for this thing, and I didn't know anything about how this thing was supposed to run, how this thing was supposed to operate. Yeah, I had no idea Plone was behind it. I didn't know what was going to be behind it, but it wasn't really my problem, which is kind of a weird position to be in, because as a server side developer myself, sometimes I'm going through the front end and I'm just like, I guess this is how the data is going to work. I assume that there's going to be some kind of response that comes back when you sign up for this thing. So a lot of times, for the sake of the demonstration and the feeling, I had to fake a lot of load times and submissions. Like this newsletter sign-up button: just some jQuery. I kind of had to fake a loader and fake the kind of confirmation message that would come back. And it was just full of comments like, you know, server side folks, this is where you're actually going to want to hook into some kind of API, this is where some JSON or some XML is going to be coming back, good luck with it. And I felt a little funny about doing that, because I'm like, well, someone's got to figure that out. But again, in this case, luckily it turned out, and this is the magic of Plone apparently, it was just fine, because the website works really well, just like the demo was supposed to. So an odd thing that comes with doing these kinds of blind front end mockups is you have a lot of this kind of placeholder code where you're just like, yeah, this should really do something.
Put a loader indicator on there and just, I don't know, toss it over the fence and someone else is going to figure it out. And I guess it works out. I kind of always wondered if I was going to get jumped by the server side people, because they had to deal with all these assets and JavaScript and piles of garbage that I had to send over to them to figure out. But it seems to be okay. So thanks, guys. So that was a weird thing. I don't know, are you all front end developers or are you all Plone developers? I don't know. Anyone touch anything? Is anyone only a front end developer? No. Okay, great. That doesn't really matter. It's pretty interesting. It's a mix, right? Everyone has to touch it all. So okay. So let's talk about the workflow and what we actually, what I delivered to Jessica to integrate into Plone. So I found myself using these static site generators. I saw there was another talk actually about static site stuff that I missed. I don't know if it was there. It was yesterday. It was yesterday. Okay. Was that the tool, the system? Was it a static site generator? Yeah. Okay. There's a ton of them. I just looked it up. There are at least two listings of these things, which is great. So what is that, for those that do not know? A static site generator is just some toolkit. If you need to make a lot of content sites, for example, if you've got archives of articles and content and streaming videos and stuff like that, most of the time you don't really need a Rails server side thing or even a Drupal thing or a Plone or whatever on the back end. You can pretty much just dump it out. It's just HTML, right? That's all you really need most of the time. But the difficulty is not serving it, because it's just a bunch of HTML. The difficulty is for the person who's actually producing it: you've got to update the header because there's a promotion coming along, and you obviously don't want to go and update 3,000 HTML files. You don't need PHP. You could just do it with PHP and server side includes and stuff, but that's just weird and lame, and also then you've got to host it anyway. If it's all just static stuff, you can put it on an S3 bucket or some host and it doesn't matter. It's out of your life. It's beautiful. So that lent itself to these kinds of front end only builds, because I could have the tooling to not just be playing with plain HTML files and plain JavaScript files and CSS. I would get the tooling of a somewhat development-y environment on my end, but the deliverable was actually a stack of HTML files we could all put online and review in real life. And also I had the artifacts of the build that the server side people could then kind of leverage, which I will get into. I used one called Middleman. There were a lot of them three or four years ago; this is the one that I kind of liked. It's Ruby-based, which I like. I like Ruby. This is some of the stuff that it gives you. Basically, you can put in a bunch of HTML, a bunch of templates. So ERB, excuse the typo, ERB is a template language, a Ruby-based template language. And you can just use your includes and you can loop through stuff and there are some image helpers. It just makes it handy. It gives you a bunch of helpers. Sass, do people know what Sass is? Yeah, everyone knows what Sass is, right?
It's like a compiled CSS thing, because writing just CSS is kind of a pain and it's not that great. So Sass sits on top of CSS and compiles down to it. And Compass, if you're doing a lot of Sass stuff, is a really useful toolkit that you can use with it. It just gives you tons of utilities. It will take care of weird CSS hacks so you don't have to copy and paste the same stupid seven lines of CSS for your different float fixes or vendor prefixes; it does a lot of that stuff for you. So this all makes it pretty easy to go about building out this front end, because, again, the end result is just HTML, just HTML at the end of it, but you don't want to sit there in a big sea of HTML because that's a nightmare. So it kind of looks like this. What I ended up with as I'm building this is pretty much a page for every page, an HTML page for every page. It's not entirely HTML. So this is the layout, this is the big master template, basically the header that all the other stuff gets plugged into. And it's mostly HTML; we've got some helpers and stuff like that. We've got the partials, as I mentioned, which are just shared components that we can use all across the site. And a lot of content sites do have that, because you've got the authors' bylines on every article. If you've got seven variations of the article, you don't want to actually be copying and pasting the HTML. One, because it's a headache for you as a developer. Two, it's a jerk move to the people that are implementing it, because they've got to go fish out the same HTML a hundred times, and I don't want anyone looking at it being like, is this footer indented differently on page X than page Y because it's a different footer, or just because he forgot to tab it? So this way you can kind of mandate: just trust me, this is the same markup. You can use this everywhere we're using a footer. This is the footer. So it gives you some amount of organization that's nice. You have directories for things. People like directories. You put your fonts in one thing and your images in something else. And you can kind of build that. So this is the live site. Just for navigation purposes, we generally have a table of contents. So when we host this thing so everyone can review it, when it's client review time, we can be like, okay, everybody, go to the staging server and let's review the music episode page. Everyone can go and click on it. It's an easy way to make sure everyone's on the same page. I find it's helpful to version the directories, so the person that dialed in three days ago, who pulled it up on their Amazon Kindle or whatever they're reviewing it on, is not looking at the three-day-old version. Otherwise you have to go and be like, hey, please clear cache and reload, and that's a pain. So version these things, host them, everyone can see them, and everyone kind of gets a little taste of how the thing is actually going to work, which makes reviewing it easier. It makes feedback easier, because otherwise you've got these JPEGs and you've got a slider and you've got a carousel on there and no one really knows how it's going to work. And you go through all the effort to build this carousel out or some inane thing, and the client's like, I don't like it sliding like that, I want it to slide like this. And then you've got to go back and code it again. So this way, it's a working prototype right out of the gate.
You just are farther along that approval flow and you spend less time looping back around, because there are fewer surprises. Especially if you start at the prototype stage, there's no real room for the client to be like, I didn't know this thing was a vertical thing, I thought it was left to right, I don't know. Whatever miscommunications happen, if you've got a working prototype, it just makes it easier. So this again is an example of using the shared code thing. It's easier for me because I've got a little blog post kind of module and I don't want to be copying and pasting that code a hundred times. Plus, for the sake of the visual, the sake of the review experience, the content has to differ, because we have to show what happens if there's a blog post, if you've got five blog posts in a row, what happens if one of them has 300 characters in it, what happens if one of them has some other blurb on top of it or something like that. But you don't want to just keep copying and pasting that. So that ends up looking like this. This was a standard module, and we could show, this is what it looks like if there's an audio player, you get what this looks like, right? And it just provides that framework so you're not actually duplicating it, and then the other people that have to implement it come along and they see, oh, clearly, no question, every blog post overview is going to look like this. And granted, yeah, in the code, if there's an audio player this markup has to be emitted, but it's not on anyone to guess it. It's not a formal proof of how anything works. It's all just a suggestion, which is kind of a funny, kind of a weird thing: to be providing this markup and being like, I swear this is pretty much how it's going to work in the browser, but it's still on you to actually make it happen in the browser. Especially as it gets towards launch, things are going to change, and maybe Plone or whatever you're working with isn't really going to want you to do a certain thing. So in the best case, you get the stack of HTML and it's kind of like a suggestion. It's like, you really ought to do it like this. We know it works well and the client approved it, but it's up to you how you want to do it. So we can even click around a little bit, just for the sake of it, because why not? Can I open this? No. I can't even get out of this thing. Let's see. How do you do this? Great. So, there we go. Right. So this is what we delivered. This isn't what necessarily went live, but it's an example of how we built out almost everything as if it was going to be a real thing. So we've got our spinning music player. You have to make a lot of assumptions. This thing is technically wired up to be a player. We went through the trouble of making sure you could drag and drop it, making sure the timer is ticking, making sure there's a mute state, and how it all just works, so that as far as the interface goes, nothing is left to interpretation when it comes time to do it. If you're good, you can sneak in some Easter eggs. Hard Candy let us get away with this for a while, but, I think it's not there anymore, but I think we got the Konami code in here too. It looks like, got my spinning egg in there. Because it's like an actual Easter egg.
It wasn't my idea, but they're like, do you think we have time to put in the spinning egg? And I'm like, I think so. I think we can make that happen. So this is cool because everyone knows exactly what the music player is going to look like. You know how the progress indicator works. It's not bulletproof, because it's not a real player. And the more complex the functionality gets, the goofier the prototype gets, because I really have no idea, I don't even know if the media player can stream in this way. It might turn out just not to work at all, but it's a best case scenario. You get this kind of working functional thing. The client likes it because they understand it. They're like, yeah, this is what the media player should look like. The implementers kind of get a sense of it too. And most importantly, the designers can sweat it out to their heart's content, because as I'm sitting in the middle here, the designers are also my client, because I'm a freelancer. So Hard Candy Shell was my client. KCRW is, technically, but in my mind my clients are the designers that I have to please. So they're not like, this guy's not giving us any support, because we're trying to make this thing look good. It's got to be good, it's got to build well, so this guy's got to do it. But I also know in the back of my mind that my client is the Plone people in this case, because I can't really in good conscience build a crazy unworkable thing that is only going to work on the nightly Safari build, because that's just not fair. No one's going to be happy with that, because then the client's finally going to get it, KCRW is going to have this thing, and the designers are going to be like, well, it worked in your demo. And I'm like, well, it worked in the demo because you have a 10K screen. It doesn't work for anyone else. So you kind of have to be that sense of reality, being like, this is awesome, what you're suggesting is awesome, but the real-time blur is not going to work. It's cool, it's working in our prototype, and it's cool you found an example of the CSS online, but I'm not letting that go into production code. So you kind of have to be the bad guy too when you're in this position. So it's useful. It's useful because everyone just gets what's happening here. I highly recommend it. What else have we got here? I don't know. Stuff. We got stuff. Let's go back to the slides. There's just a little bit about the beauty of doing this kind of prototype. I know a lot of people, depending on the technical chops of the design shop, might forgo doing a lot in Photoshop or sketching or whatever they're doing at all and just jump right into prototypes in the browser, which is great, because then you're really, I mean, this isn't a novel or brilliant observation I'm going to make, other people have written books about it, the prototyping workflow. But you get a lot more out of that kind of thing, because everyone's looking at a working thing immediately. You don't go down a weird rabbit hole of someone's fantasy that only works in some strange limited resolution in Photoshop. And it's like, well, it worked in my browser. So many times I'll get a design and I'll be like, well, what happens when the browser gets wider?
They're like, oh, what, I mean, the browser gets wider? Yeah, there are just these implications and you have to fight those things. You deal with those immediately when your thing's in the browser and they can't hide from it, because a lot of times when it's just a design file they can kind of dodge the bullet a little bit. They can be like, I don't know, that's your problem, you know? And sometimes I could say, well, it's the implementer's, it's the Plone people's real problem, but I don't want to do that. So you kind of have to fight it out. You've got to get it done earlier. You see pretty easily which design decisions are actually just going to be stupid. Say a fixed header, in theory, with a certain rollout effect. It sounds really cool: oh, as you scroll, something spins or something as you scroll through all the stuff, it's really useful and awesome. Then you see it in the browser and it's just garbage, just distracting and a nuisance. You can throw it out, and you haven't sunk hours into designs and hours into multiple responsive versions of it just to find out that it's going to be an unhelpful feature. The client likes it too, because they can sit at home and look at it on the computer, which is dicey too, because you don't want the client to play with markup that's not fully vetted for too long. So if you give them the markup of a prototype, you've got to take it offline immediately after the meeting, because otherwise it's going to get emailed around and they're going to be looking at it on their Kindle or something like that, some crazy thing that's just not ready for production. So there's a lot of hand holding, I guess: yes, this is working in the browser, but just because it's in the browser, this is not what's going to be on the website. Just look at this, just look at the header. Do you like this font? Okay, good. This is the way it's going to look, but leave it alone. But you can't hit it that hard. And again, when you have the markup and the thing launches and the slideshow is just broken or the hover effects don't work or the loading indicator is off to the right in some stupid place, then I can blame the Plone people, the implementers. They can be like, look, I know the designs, the designs weren't really that big, this didn't happen, but in theory. And I'm like, look, the CSS is here, the markup is here, technically you know how this thing is supposed to work, you know on mobile we have the momentum figured out, we have that easing animation that we had to sweat out with the designers, it's working in the browser here. How come it's not working in the production thing? So that's something where no one can fake a certain amount of it or say, I didn't understand it or I didn't know it, when you've got the actual markup. I'm trying to think what else we've got. That's it. That's all I got. Are there any questions? About the Internet? But yes, sir. I feel like a bunch of my questions are probably group questions, like structurally, how close is what you guys launched with, how much did the code change? It might be useful for you to just do a two-minute... Yeah, that'd be awesome. Because it's not like... it's both of us. Yeah, yeah.
Yeah, okay. It looked great. When it launched, it was exactly that, except for things that everyone knew were going to change, like the streaming implementation; no one knew how that was going to go. Right. So what we got from that was perfect for what we needed. That's nice. There's the Middleman setup, which, you know, is ERB, Ruby templates; it's a little weird for us, but fine, it's a templating language, right? And there was a Sass setup, there was all this stuff. We just took that index.html that you saw there, the generated version, the compiled version where all this stuff gets pulled back in. Where there were low-level module elements, we pulled those out. We made that a Diazo theme. The compiled CSS that came out of Sass and Compass went into the Diazo theme. The Diazo thing sounds awesome, by the way. That was super impressive. I read one of the emails again: you can just take regular markup and make the CMS understand it? Is that what you're telling me? Yeah. More or less? Yeah, you take the markup and you have this intermediate thing that takes the CMS markup and your markup and uses rules to put the CMS content into your markup. And so that's what we did. And we continue to this day, for the KCRW site, which I do kind of maintenance on and new development for, to use that Middleman tool to build our theme as we make changes to it. So we can test out front end changes, prototype them in the static site generator, which we keep using. And we're working from that same repo; we just have a branch on it that we use for the production site, and we just continue making these changes. It generates sprites for us. It compresses all the images. It does some special retina image thing for all the theme images. Right. It just does all of this great, magical, wonderful stuff that us back end developers don't normally think about. And then that's the Diazo theme. It gets you sort of the wrapping of the site. And then for the dynamic things like this, in some cases these are tiles, in the Plone sense of tiles. There was already a piece of HTML behind this that she put data into. Right. So we just took that little modular snippet of code that was called something like show/showmodule.erb and turned it into a ZPT and made it a tile and rendered it with a content provider. And so we already had these little HTML snippets that we could turn into ZPTs and put a little class behind. And so we have all these modular things that we can place wherever we need to in the page. Sometimes they'd be portlets. But either way, we've got the HTML already for them. All the CSS is being built with this static site generator, and so we use that pretty much unmodified. What we did do is, we're using collective.cover as our sort of flexible page layout engine, because the newer stuff wasn't around at the time, and we had to add a custom grid layout engine, because it comes with its own grid layout. We wanted to use the grid layout that was implied here by the CSS and all that. We probably could have done it the other way around and modified all of his stuff and the grid that it used. But why? That's the point. Because we already sweated it all out. So the hope is that it's workable enough. And it sounds like, to bring it back to Plone, I guess, it sounds like you were able to do that. I think if I gave this to, I don't know, what's the laughing stock of CMSs from the Plone point of view? What do you all make fun of?
It's probably WordPress. That's pretty much it for everyone. I would imagine it would be unpleasant to try to bend this around it. I'm sure it's doable, people do whatever they want, but I imagine it would be unpleasant. But it sounds like you were able to; Plone was flexible enough that you made it happen. We did our best. Yeah. It worked. It worked. Most of the difficulties had nothing to do with the front end itself. It was more about the logic of getting things and fetching things, and which shows and pages from the design process fit in place where this other thing was. And just abandoning functionality that turned out to get cut. Yeah. Yeah. A lot of times when we were building it out on the design side, the designers have these big ideas that they invented when they were out for drinks with the client. And then it comes to me to build it out, and I'm like, I don't know if this is going to happen. I don't know if this is technically feasible, if they even have this data, or if they can stream it like that, or if you can hop around; I don't know if the audio servers support this. But still, again, there's a funnel of feasibility as it goes through the layers to the final implementers and what the servers are running on and stuff like that. Where is it hosted? Do they have their own ops? Yeah, we host everything on AWS, so I manage the ops. Okay. So, yeah. Anybody got another question? Great. Is any of Plone's CSS trickling through, or is it all yours? That's a good question. What's the question here, Peter? How much of Plone's CSS ends up impacting the design here? We minimize it. I think there are one or two CSS files from Plone that we call in through the resource registry. The theme itself is coming through Diazo. We don't register any of the theme's CSS; the theme CSS is one huge CSS file that gets compiled, basically. So Plone's CSS is in there, but most of it is only there for logged-in users, and the only logged-in users are the content managers. But yeah, they edit in the theme UI. So the UI you see when you visit kcrw.com, that's what they edit in, and it's got the green editor on top. So there's Plone with the music player on top. A bunch of sort of blocks and tiles and things. A lot of tiles. A lot of blocks. There are a lot of modules in this thing. Yeah. Oh yeah. They're shared. So Doug just popped this up. These are the modular elements that he developed for us that we were able to use in various places and reuse. There are like a million of them. There are so many. This site specifically has a lot of tiny content chunks that just spread out everywhere. Right. And a lot of these, like popular episodes, that's a module, but it's filled with one other module, and it shows up multiple times. I mean, it's just template reuse. It's nothing crazy. But this just helped us organize it and how it's ultimately implemented. So it sounds like having it cut up like this ahead of time at least gave some kind of guidance for how you folks might share it. Yeah, absolutely. Looking within, say, the home page and seeing that there was just one line that said, look at this module — that's a clue that we're going to need to build a reusable module for that purpose. And so, yeah, I think a good takeaway from this is building something in a static environment like that.
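As a rough illustration of that "HTML snippet turned into a tile" step described above: a minimal sketch only, assuming plone.tiles, with an invented class name and template path, and with the usual plone:tile ZCML registration left out.

# Minimal sketch, assuming plone.tiles; names and template path are made up.
from plone.tiles import Tile
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile


class ShowModuleTile(Tile):
    """Render one of the static front-end 'modules' as a reusable tile."""

    # the ZPT is essentially the module markup from the static build,
    # with TAL expressions filling in the CMS content
    template = ViewPageTemplateFile('templates/show_module.pt')

    def __call__(self):
        return self.template()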
Building it out statically first really helps you figure out what goes into pages and how to use it. Cool. Anything else? Great. You got it. Thank you.
|
When designing the new KCRW website, the design firm had to hand off the complex design to be integrated into a CMS (Plone) they knew nothing about. Doug explains the process and tools that made this transition go smoothly. KCRW is Southern California's flagship National Public Radio station. The redesign by Hard Candy Shell of this multimedia site won the 2015 Webby for best radio and podcast website.
|
10.5446/55320 (DOI)
|
So, thank you for organizing this, and there were people asking when this would be made ready for Plone 5, so I had to resolve that ticket. We do not have an official release yet, but there is a release on PyPI that people can already try; it is not final, but it is at a stage where it works if you want to test it. This version also works with Plone 4. We dropped the Archetypes dependencies and added zope.schema and z3c.form, we dropped several other old dependencies, and we reorganized the JavaScript and CSS into resource bundles. So if you want to try it, you have to pin the version here; if not, you will get the previous version, which is 9.8. So here I added a Plone 5 site and a Plone 4 site, and I added a folder, so we can see what is different compared to the previous version. In Plone 4 we start the faceted configuration from here, as in the previous version; that is still there. In Plone 5 there is a bug that appears, so let's look at the folder view: from the menu here you enable faceted navigation, then you get a new menu entry here and the faceted configuration here. In the previous version the faceted widgets are defined with Archetypes; you have to migrate them from the Archetypes schema to the zope schema. So here you define the schema with zope.schema, and here we have a z3c.form form; you just define a schema for the widget. A faceted widget schema is a class in the widget module; you import it and define the schema together with a display widget, and if you define a schema for a faceted widget you will also need that display, and the mapping explains which schema is used where, in the bundles and the faceted edit view. So now you will find all the resources in portal_css, for faceted navigation, and here we will see all the widgets registered in CSS. And there is still this bundle, so if I turn development mode off and save, you will get the bundle with all these faceted navigation widgets in CSS. In Plone 5 there is no portal_css or portal_javascript, so you will find the bundles here. What else? We are going to sprint on fixing the remaining issues tomorrow, so if you want to join us, please do, so we can have the final release this weekend. Questions, please. Yes. The layout is still the one from Plone 4, even in Plone 5, but we can work on that; you can submit an issue and we can work on it. Yes, there is an issue that it does not have a responsive design, but this can be easily fixed with the theme, I think. Anyone else? Ok, yes please. The widgets.txt is empty. Let's see, in the docs it says... Because the widgets.txt is empty. Because the documentation for widgets is per widget, so if you go to the text widget you will find its widget.txt. So I should state that in the top-level widgets.txt: look in the individual widget documentation online. I know that. Anyone else? Ok, we are on time.
I finished on time even if I started late. Thank you.
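For reference, the kind of schema-driven widget definition described in the talk looks roughly like this. This is an illustrative sketch only, not eea.facetednavigation's actual code; the interface and form class names are invented.

# Illustrative sketch only; not eea.facetednavigation's real code.
from z3c.form import field, form
from zope import schema
from zope.interface import Interface


class IExampleWidgetSchema(Interface):
    """Hypothetical settings schema for a faceted widget."""

    title = schema.TextLine(title=u'Friendly name', required=True)
    index = schema.TextLine(title=u'Catalog index to filter on', required=True)
    default = schema.TextLine(title=u'Default value', required=False)


class ExampleWidgetEditForm(form.EditForm):
    """z3c.form edit form generated from the schema above."""

    fields = field.Fields(IExampleWidgetSchema)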
|
EEA Faceted Navigation, one of the most popular Plone add-ons, is now compatible with Plone 5. We will show you how to upgrade from Plone 4 and how to develop custom Faceted Navigation widgets and views.
|
10.5446/55321 (DOI)
|
My name is Guido Stevens, I'm the managing director of Cosent, and I'm also the founder and project leader of Quaive. Today I'd like to share with you the story of an intellectual quest: how a maverick professor becomes a freedom fighter of the mind and develops the equivalent of an atomic bomb for the freedom of the mind. So I'd like you to meet my intellectual hero, Michael Polanyi, and this will be the tale of his extraordinary work. But first we have a bit of a quest of our own to perform, because as I found out in doing this research, in this day and age we're not accustomed to that mindset anymore. We live in an age of distraction, we have our devices all the time, we're accustomed to so much stimulation that the whole concept of people doing multi-year or multi-decade research and writing on topics that have been covered for millennia, that's just not done anymore. So let's start where we are and then slowly backtrack from there to see where that came from, and I'd like to offer my personal journey as a reference here, not because I'm that interesting but because it brings into play all the connections that I've discovered in this research, and frankly it's the only way that I can tell the story as well. Who of you has heard of Scrum? Now keep your hands in the air if you have used Scrum, a daily stand-up, in the last two weeks. Oh, that's less than I expected actually, but Scrum is a well-known framework for software management. Yeah, more volume. Of course I've been using Scrum myself as part of my agile methodologies toolkit, in combination with stuff like test-driven development, continuous integration, sometimes extreme programming. It's just part of the job description if you're in software development nowadays. And I never really gave much thought to where Scrum might have come from. It's just there, and probably it was something put together by a couple of dudes on the internet around the turn of the millennium; that's typically how it goes. And you typically also don't look back that far, because anything that happened more than 10 years ago in web history is ancient history and not very relevant anymore anyway. As you may know, I'm the project lead for Quaive and we're building a digital workplace platform on top of Plone. You can call it a social intranet or enterprise 2.0 or whatever. We're building an environment where knowledge workers can communicate and collaborate. As far as I'm concerned that means we're in the business of knowledge management in addition to being in the business of software development. We develop software solutions that facilitate the knowledge flow between people. In this context I did a lot of research on knowledge management, and that research was actually the start of Quaive. I summarized my findings in a little book with a long title, and now we're executing on that roadmap. The first thing you notice when you look into knowledge management as a theory field is that there are two names that crop up all the time. These are the two most cited authors. The left one is Nonaka and the right one is Polanyi. Once you read Nonaka you see he frequently cites Polanyi himself. So this is like the motherlode: all knowledge management theory nowadays traces back to these guys. The core concept they're working on is tacit knowledge. I will take a deeper look at tacit knowledge later in this talk.
I started absorbing this theory by reading contemporary research quoting Nonaka and Polanyi and reading parts of Nonaka's work, but I never really got around to reading Polanyi's work. The guy's been dead for nearly half a century and he wrote a lot of books, and what they say is his most important work is actually not his most quoted work, so where do you start? While reading up on knowledge management I tried to get a grip on this. What's going on? The left one is still on? Then I'll just keep talking. So while reading up on knowledge management I tried to get a grip on this difficult concept of tacit knowledge by applying it to my own experiences of being part of a team in software development. I need that screen. I started to understand why writing documentation and scripts is not enough; you actually need very rich personal interactions with people, very rich channels of virtual presence, to overcome the downsides of remote collaboration. And again, an appreciation of the importance of water-cooler conversations and of overhearing snippets of conversation in a room where you're collocated as a team, and of how you need dedicated efforts to overcome those deficiencies if you're doing remote collaboration. That software development practices evolved in line with knowledge management theory seemed to be no more than a logical progression, given the pressures of the job. It's like a validation of the theory in practice. And then it hit me sideways. While I was reading up on Nonaka's theory, I came across Scrum. And it turned out that this was a paper about Scrum and about Nonaka. And this Nonaka guy was not just some secondary reference in a paper about Scrum. He's actually the guy who coined the whole concept of Scrum. He co-wrote the original 1986 Harvard Business Review article that codified, that brought into being, the modern concept of Scrum. It's the first use of Scrum outside of rugby, in management theory. And while this article is based on knowledge dynamics within the Japanese auto industry, to us, reading it back now, it's very recognizable as the Scrum methodology that we use in software management, at least the essence of that methodology. The exact mechanics of that in software were codified later, in 1995, by Jeff Sutherland and Ken Schwaber. So suddenly these two aspects of my job, software management and knowledge management, come together, and they are not disjunct and just coincidentally co-evolving. They're actually part of the same intellectual tree, the same intellectual heritage. Both knowledge management as a field and Scrum as a methodology were created by Nonaka and his collaborators. And all of this is founded on a single motherlode of inspiration provided by Polanyi. So at this point I made two decisions. One is that I need to give this talk and share with you this discovery of how we have this amazing intellectual depth beneath what we're doing and how far back that reaches. And the other one is I really need to read up on Polanyi himself. To better understand Polanyi's theory of tacit knowledge, it helps to have a bit of background on his life and the world he lived in. Polanyi was born as the fifth child in a wealthy, non-religious Jewish family in Budapest. Yeah, in Budapest.
They had just moved from Vienna and they moved in the upper circles of what was then the Austro-Hungarian Empire, you know, the Sissi movies stuff. He published his first academic paper on chemistry at the age of 19. He witnessed, participated in and survived the First World War, got married after that, and moved to Berlin. There he participated in weekly seminars with Einstein, Schrödinger, Planck, all the leading physicists of the day that were working in Berlin at the time. His career very nearly broke down when he proposed a theory that got shot down by Einstein and another prominent physicist. Only then, ten years later, did it emerge from new research that Polanyi had actually been right and Einstein had been wrong. That's kind of funny. So what we have here is a professor in chemistry who is performing at the top of his game. And this is very relevant, because this grounding he has in research in the natural sciences deeply informs his later work in the philosophy of knowledge and the philosophy of science. Around the Nazi takeover in Germany, Polanyi is offered a job in Manchester. He declines at first. Even more, yeah. I will turn up the volume a bit more. Yeah. So he declines at first, but in time he realizes that he just made a big mistake, and he accepts and moves to England, and there he spends the rest of his life. He also travels frequently to the United States. While he was still in Germany, Polanyi traveled several times to the Soviet Union, already in the 30s, to visit conferences there on chemical research. And on one of his later trips he runs into Nikolai Bukharin, that's the guy here on the right, next to Stalin, who was then not yet killed by Stalin and still held a prominent position as a leading theorist of the Communist Party. And Polanyi has a conversation with Bukharin, and in that conversation they talk about science and the role of science in society. Bukharin insists that the whole distinction between pure science, pure research, and applied science is just some weirdness of capitalism. They don't have that problem in communism, because in communism all research will inevitably be performed in harmony with the five-year plan. It will serve the current five-year plan. And actually writing that down in the plan is not an act of imposition; it's kind of like just a confirmation of the pre-existing harmony between science and the Communist leadership. So that might sound like so much abstract Communist blah blah to you, but Polanyi recognizes it for what it is. This is a direct attack on the freedom of thought. If you follow that line of reasoning, all research has to be done along lines that have been pre-approved by the Communist leadership. That means that all thought needs to move within lines that have been defined by the Communist leadership. Polanyi sets out on a research program that will take him two decades, to come up with a philosophical approach that invalidates that whole way of thinking and that demonstrates clearly that this is not the way science can be done at all. This is just not how it works. At this point I can't resist making a small digression. In a previous life I was trained as an economist, so I'm aware that at the same time this was going on, there was a debate raging within the economics profession which is called the Socialist Calculation Debate. This is basically about what's the most efficient way to handle information in a society.
Can you do that top-down? Can you make a big plan and then plan a whole economy? Or do you need bottom-up decision-making in a market-driven economy? And I'm afraid that gets you yawning, but this is the kind of stuff that decided the Cold War. Because if you look beyond the politics and the rhetoric, if you look beyond the whole military threat, underlying all of that is economic might, which is what ultimately decides things: it's very costly to build nukes and to build submarines, which is what the North Koreans are finding out. The leadership has the bomb, but the people are starving. So the efficiency of your economy ultimately defines how much military might you can project. And this was already argued convincingly in the 40s by Friedrich Hayek, who showed that bottom-up decision-making, in terms of information in economics, is way superior to top-down decision-making, because you just lack the necessary information at the top to make the right decisions. And then it just took another 40 years for that to play out in real life, until the wall fell. Such is the power of ideas, and that's the point I'm actually trying to make here: we live in a society that has a tendency to treat ideas as something ephemeral, something incidental, something that's just an instrumental building block towards doing things that actually matter, like buying a bigger house or driving a big car. And Polanyi recognized the power of ideas. So he dedicated his life to developing an idea, a theory of knowledge, so strong that I have called it the equivalent of an atomic bomb. It's a peaceful weapon, but it's an extraordinarily powerful defense of the freedom of thought. It derives its power from the fact that it's closely aligned with reality; it deeply penetrates the structure of reality. And ultimately, you can't just ignore ideas because you don't like the reality they present. The idea which is most strongly aligned with the nature of reality, that's the idea that will ultimately win. So now, after all these detours and all the context they provided, we're ready to actually take a deeper look at this concept of tacit knowledge. And you will see that this is actually congruent with the way tacit knowledge operates: you need to take in all of this context before you can actually get to the essence of the matter. We've been talking about weapons of mass destruction and knowledge, so I thought it would be funny to make a short digression here as an introduction, with a knowledge philosopher gone over to the dark side. This is Donald Rumsfeld. And he has this famous quote about unknown unknowns. Let me show that. There are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations. There are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know. Excuse me, but is this an unknown unknown? There are several unknowns. I'm not going to say which it is. But the questioner, can you hear me? I'm right here. I'm right here. So what he does here is he employs a classic 2x2 MBA matrix. And he actually skips a quadrant, and that's the quadrant we're talking about here. Because he talks about known knowns, he talks about known unknowns, he talks about unknown unknowns. But he doesn't talk about unknown knowns.
And the most succinct way to summarize Polanyi's thinking is his quote: we can know more than we can tell. Many people, when they hear about tacit knowledge, make the mistake of concluding that there are these two classes of knowledge, tacit knowledge and explicit knowledge, and that those two are disjunct: knowledge is either tacit or it is explicit. And this is wrong. But you're forgiven for thinking so, because it's a very common mistake. Actually, the same researcher I read to show how important Polanyi is in citations in management research and knowledge management also did an analysis, and he said that more than half the people citing Polanyi either have not read him at all or completely missed the point. So that tells you something about the quality of citations in scientific literature. The best way to illustrate Polanyi's theory of knowledge is to use the image of a man with a stick: a man with a stick who probes his environment with this stick. Maybe the man is blind, or maybe he's in a cave, and, well, you have an echo of Plato's philosophy there if you're classically trained. When you're that man fumbling around with that stick, where is your attention? Is it on the stick or is it beyond the stick? And this is the whole point. Your attention is not on the stick. Your attention is right here, where the stick touches the surroundings, and you feel the surroundings via the stick. The moment you feel the stick, the way it moves in your hand, the pressure against your palm, that's the moment where you don't feel the environment anymore; then you're feeling the stick. So here Polanyi says we have a basic structure of knowledge: we have a proximate object and a more distant object. The proximate object is the stick. We use the stick, the proximate object, to gain access to the distant object, which in this case is the surface of the street this man is walking along. We can't have access to the street, in this perception, without the stick. At the same time we can't have access to the street if we're conscious of the stick, because our consciousness rests at the tip of the stick, not in the stick itself. Let me translate that for you to a more familiar setting. That's Matt and Philip. When you're programming, where is your attention? Where is your consciousness when you're working? Is it on the keyboard? On the layout of the keys, on the position of the A versus the K, on the way the keys travel? Not when you're programming. When you look at the screen, is your attention focused on the individual pixels that constitute the screen, how the pixels shape into letters, how the letters shape into words, how the words shape into statements? No, that's not where your attention is. Your attention is beyond all of those. All those hardware interfaces recede into the background of your consciousness as your consciousness moves between the code flow and the data structures and the user interaction that you're creating there. All of that low-level stuff completely moves away and you move beyond all of it into conceptual space, where you actually are with your consciousness. As you see, you need to have internalized a lot of things in order to be an effective programmer. You need to know how to read, you need to know how to read Python, you need to know how to type, to type blindly. The shortcuts of your editor must be in muscle memory.
There are a gazillion things you need to know, yet not be aware that you're knowing them, not have your attention on all those things that you're knowing, so that you can properly focus on the thing that you're actually doing. This is the basic structure of knowledge that Polanyi has described. The moment that you start paying attention to all these things that have dropped away, that is also the moment where you drop out of conceptual space. The moment a keyboard key gets stuck or the moment you get a dead pixel on your screen, that's the moment where you drop out of conceptual space and your focus is on the hardware again. There's actually a resonance with Heidegger there, even if Polanyi was not aware of that. Another example, more simple, to drive home this point. Who of you knows how to bike? Who of you doesn't? Everybody does. Suppose we had somebody here who didn't know how to bike, and we would couple them with somebody who does know how to bike, and we gave them the small conference room here next door and we said, okay guys, you are going to sit together, you have a workshop, and in an hour the other guy will know how to bike as well. That's not how it works, right? You can't write a biking manual. Well, you can, but reading that manual won't teach you how to ride a bike. You need muscle memory for that. So we all know things, like riding a bike, which we cannot explain in such detail that the explanation itself is sufficient for somebody else to pick up that knowledge. There's a tacit component involved there. That sounds simple enough, but then Polanyi uses this theory as a platform for a wide-ranging set of subjects that he dives into. I pointed out that Polanyi was primarily motivated by a quest to defend the freedom of thought. And if we accept that all knowledge, all knowing, has a tacit component, something we can't explain, something that's deeply personal, then it follows that all knowing is inherently subjective, or at least that all knowing has an inherently subjective, irreducible component. And that means that you can't tell somebody else what they must think. I can't even fully tell what I am thinking myself, so how can I tell you what to think? And this strikes a hard blow at the positivist tradition in science, where the goal is to formalize everything and to get this detached view of reality which has no observer in it. For Polanyi, it's less than useless to try and formalize all knowledge and remove the observer from the knowing, because that would in effect destroy all knowing, since the observer is inherently present in all knowing. Polanyi also has a go at emergence, what we would nowadays call complexity theory. Complex phenomena emerge at a level that cannot be reduced to simpler mechanisms at a lower level. In biology, for example, it's impossible to explain living things as just constellations of atoms and chemistry. You have to account for function, there is a goal. Atoms can change, but they cannot die. Living organisms, they fail and they die ultimately. So the failure in function we call dying takes place at a higher level than physical processes only. And for Polanyi, in this example, the physics and chemistry processes are the proximal term, the tacit term, that gives us access to the higher level of emergence, the higher level of complexity where we can reason about life. And at an even higher level, he then posits morality.
So by using these proximal terms as probing sticks, we get insight into the higher level emergence of life. And here again, the consequences are profound, because this approach invalidates reductionism. The whole idea that you can reduce anything to component building blocks, and that you gain an exhaustive understanding of everything if you just have enough understanding of the separate building blocks, that's not valid anymore in this approach if there's a higher level pattern that emerges in non-obvious ways from lower level mechanisms. So in the process of carving out a platform for the freedom of thought, Polanyi here strikes hard blows at both objectivism and reductionism. This is basically a frontal attack on the objectivist, reductionist paradigm in science, the Newtonian paradigm where the universe is just, you know, a deterministic game of atoms and billiard balls. I already said that Polanyi was born into a non-religious Jewish family. What should start to make sense by now is that later in life he converted to Catholicism. And what we can see here is that his whole intellectual journey is a search to infuse meaning and basically spirituality into science and into understanding and into knowledge. In the end he even writes about evolution, and there are clear parallels with Teilhard de Chardin, whom he quotes, who reframes evolution as a progression towards a universal consciousness. So that was a lot to take in. I hope you're still with me here. So let's try and fast forward and see how that played out and how it ended up influencing our current world. And for that we have to take a look at the Japanese connection. Already in the 1930s, and accelerating after World War II, the Toyota production system was being iteratively refined at the Toyota Corporation. Through a focus on waste reduction, quality, continuous improvement and continuous learning, the Japanese automobile industry outcompeted the Americans. By the 1990s, Toyota consistently achieved four times the productivity and 12 times the quality of their nearest American competitor, which was General Motors. And as you know, General Motors filed for bankruptcy a few years ago. You might call this lean, but that's actually an American concept that was later derived from this work, through a study done here at MIT. For Nonaka and Takeuchi, who came up with the scrum concept, the essence is scrum, which means cross-functional teams engaging in a dynamic conflict of ideas from which innovation emerges. And in that theory, ba is an essential concept. Ba is something like place, but it's a conceptual space in which teams collaborate and in which they fuse ideas, and from there they generate innovation. So in 1986, in a Harvard Business Review article, The New New Product Development Game, Nonaka and Takeuchi described scrum as a set of principles for new product development processes, centered on self-organizing teams and continuous learning, with a relentless push towards innovation. The article specifically shows how the old waterfall model is much slower in driving product innovation than a new, more incremental process. Using a sports metaphor, they compare the old model to a relay race in athletics, where one individual athlete hands over the baton to the next, and they contrast it with a rugby approach, where the whole team moves upfield as a single unit and passes the ball back and forth as they go along. And that's why they introduced the term scrum for this.
Later, in 1995, Ken Schwaber and Jeff Sutherland took the scrum label, combined the underlying principles identified by Nonaka with stuff they were doing in software development, and then codified the scrum methodology that you all know. The scrum article also introduced the notion of ba, which I just explained already; this is a shared space for innovation. And then these Japanese scholars went a step further: they took Polanyi's theory, built upon the scrum work they'd been doing before, and came up with a knowledge-creating dynamic in an article in 1991, The Knowledge-Creating Company, later a book in 1995. And this has become the foundational model in modern knowledge management, the idea that you have a dynamic, a spiral, through which companies create knowledge. This model is called SECI, and SECI stands for Socialization, Externalization, Combination and Internalization. In the socialization stage, tacit knowledge is exchanged between individuals through joint activities. The Japanese describe this in rather mystical terms as freeing yourself to become part of a larger self that then includes the tacit knowledge of the other guy next to you. And in the context of software development, this clearly resonates with extreme programming practices, for example, where you sit together at one keyboard and one workstation and work together as one unit. And this is a great way to learn stuff that you wouldn't learn otherwise. You can see where the other guy is looking, how his cursor navigates the software, you can see the typos he starts making and the corrections he makes. There's all this very low-level stuff that you're hardly aware of, and that really gives you an insight into the thought processes of your colleague programmer in a way that you wouldn't get otherwise. The next stage, externalization, is where tacit knowledge is articulated into explicit knowledge. So that's like our docs and our training efforts, where we start writing down that knowledge. Then you have the opportunity, once it's externalized, to combine explicit knowledge. By recombining knowledge, you can create new knowledge. And then finally, it's internalized. And that's like training, where you embody the things you learned again in such a way that you can actually be proficient in using them and writing new software. So what we see here is that Polanyi's project of re-humanizing science and bringing back a sense of wonder and awe is taken up by a group of Japanese scholars, who then combine that with very deep oriental traditions in ways of thinking and ways of doing, which then, in a final twist, is brought back into Western management science again, by these same people working here in Boston at MIT and at Harvard and translating that back into an American context. We still haven't talked about filter bubbles, which is the advertised topic of this talk. I think it was a brilliant move by Sally to not call this an inquiry into the history of knowledge management with a focus on a guy who's been dead for half a century. Filter bubbles, much better, much better. Even if it may raise some eyebrows in the truth-in-advertising department. And actually we did talk about filter bubbles. I mean, if you look at filter bubbles from this tacit knowledge perspective, they start to make a lot of sense. What is a filter bubble?
A filter bubble is an echo chamber effect in social media, where you select your social circle and your information sources in such a way that they just reconfirm your existing prejudices. So basically that's a combination of a tacit knowledge framework that constrains your worldview with an intellectual laziness to step out of that framework, and you just keep echoing the same worldview back to yourself again. The way in which tacit knowledge frames your perception and your worldview here is completely consistent with the way that Polanyi articulated tacit knowledge. The way these bubbles remain isolated is completely antithetical to his approach to science. In his vision, science would be a collective endeavor where all these bubbles would overlap. So he envisioned science as a network of communities which had some truth standards and truth balancing between them, which would connect all these bubbles. But we're not talking about science here, we're talking about Facebook likes, right? And in a way Polanyi did anticipate a lot of the dysfunctions we can see in the media nowadays. His day and age was no stranger to creepy philosophers and wacky politicians, on the contrary; that was one of the pushes for his work. And he identified two extreme positions which he regarded as unbalanced or evil. One was moral skepticism and the other is moral perfectionism. So moral skepticism is the worldview of existentialism: there are only facts and there's nothing beyond the bare facts, so there's no meaning to life. And the other extreme was moral perfectionism, of which he saw communism as an expression. So in communism we have an ideology that makes a very deep moral appeal for the good of mankind, and they sketch some utopia. They do that based on claiming that they are scientifically founded, and then in the same breath they deny all morality in science or in public life. And that's like a contradiction built into it. We can see a weird fusion of both of these tendencies in politics nowadays. The ongoing presidential campaign has been called post-factual because of the blatant disregard Trump has displayed for the truth. And that's an extreme form of nihilism. But at the same time, Trump makes a very moral appeal with his slogan, make America great again. So he taps into these emotions as well. So what we see here is a combination of a moral vacuum when it comes to facts with a deeply moral stance when it comes to outrage, and put Clinton in jail, and make America great again. We should be aware that this is not an exclusively American phenomenon, and even within America it predates Trump. If we go back to Rumsfeld, whom we just saw, the Bush administration at the time was proud of not being part of the reality-based community. So, Polanyi strongly held the opinion that there is such a thing as an objective, interpersonal reality out there, even if you can never fully know that reality. And a solution to avoid fragmentation, like I just described, is that we have this society of explorers, this scientific network where people exchange views in such a way that we arrive at a common shared understanding, at least about the fundamentals of the environment that we live in and the reality that we live in. So his vision is that of a bottom-up, co-created effort in which many minds participate to collectively access that reality.
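The filter bubble mechanism described here, selecting sources that already agree with you and then letting them echo your view back, can be caricatured in a few lines of code. The following is a deliberately crude toy model, not a description of any real platform's algorithm; all numbers and names are made up for illustration.

```python
# Toy model of the echo-chamber loop: an agent repeatedly consumes only the
# sources closest to its current opinion, which keeps pulling it back toward
# what it already believes. Purely illustrative.
import random

random.seed(1)
sources = [random.uniform(-1, 1) for _ in range(50)]   # opinions of available sources
belief = 0.3                                           # the agent's starting opinion

for step in range(20):
    # "filter": only read the 5 sources most similar to the current belief
    bubble = sorted(sources, key=lambda s: abs(s - belief))[:5]
    # "echo": the belief drifts toward the average of what was just read
    belief = 0.8 * belief + 0.2 * (sum(bubble) / len(bubble))

print(round(belief, 3))   # stays close to where it started
```

Even in this toy version the agent never reads the sources it disagrees with, which is exactly the isolation that the society of explorers is meant to counteract.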
To close off, I'd like to point out that this whole concept of tacit knowledge opens some very fundamental questions on the feasibility and morality of artificial intelligence. Is a layered deep learning neural network sufficient to capture the dynamic of a proximate object and an emergent understanding of a deeper reality? Is it true that the lower layers in a neural network can encapsulate that tacit, proximate aspect and then really gain an understanding of the deeper layers of our reality? I would like to believe not. And a clear consequence of the way Polanyi leverages this theory of tacit knowledge to address emergence and evolution is that moral consciousness emerges at higher levels of consciousness. It's not a coincidence that the last word in his key text, The Tacit Dimension, is religion. So we can operationalize Polanyi's theory into a simple but crude test as follows. If machine intelligence does break out into a singularity, will it be a kind God to us? I'm on the null hypothesis there. I don't think so. I love my code, but I don't have the feeling that that's a mutual emotion. And the crudeness of the test resides in the fact that I'd rather not be around to find out whether I was right. Thanks for listening to my rant about life, the universe and everything, and now back down to the ground. No questions, only thoughts? You don't know if you have a question? Well, maybe if you start speaking you will find out. That's just one extreme. The value is in the middle. So Polanyi constantly seeks to balance these approaches. He says there is a reality, so that's away from nihilism. There is reality and there is morality. The truth is to combine those two. The extremes are to just have reality without morality, which is existentialism, or to just have morality but then without any connection to facts, which is totalitarianism. And he says we need to balance these two. So we need to recognize that there is an objective world out there, even if we cannot fully access it. But he also calls for a recognition of the fact that there's more to that world than we can pin down, and that there should be space for moral thought, essentially. Can you guys document this on the wall? Could you put in some references to some of his works? Yeah, sure. That would be great. Maybe you could send me a few references and I'll add them to the description on the website. Yeah, that would be awesome. Thank you. Okay, thank you. Thank you. Thank you.
|
Guido riffs on the links between knowledge management theory, agile methodologies, the cold war, and more. The Scrum methodology and the leading theory in knowledge management share a common ancestry. A new philosophy of knowing was created by a maverick genius, with the explicit goal of promoting freedom of thought and resisting communism. We'll explore how this plays out in 21st century practices like Scrum and "work out loud" digital workplace platforms. Finally we'll apply this theory to filter bubbles and the challenges of a post-factual media environment.
|
10.5446/55326 (DOI)
|
So I'm going to talk today about how to make workflows work for you, which is more or less an introduction to the Zope and Plone workflow story, and actually the workflow stories. It's intended for a pretty general audience, even though the talk says it's intermediate, but developers will definitely also get something out of what we have been up to; there are some really interesting technical pieces. So my name is Stephan Richter and I have been involved in the Zope community since 1999. I was there at the first sprints, I developed large parts of Zope 3, and I also have to admit I am the creator of z3c.form, which you all love to hate, as I've heard at this conference. So today I'm going to talk a little bit about workflows and hopefully give you some historical context. The last time I thought I talked at a PloneConf was in Seattle, until several people here told me that I actually was at 2008 in Washington, DC as well, which had been completely erased from my mind, and somebody even told me that I gave a talk that year. Unfortunately that was the first talk after the party and I have zero recollection of this talk, and I have been told the talk was pretty awful. So apologies for that; I really thought 2006 was the last time I gave a talk. What happened since 2006? I had two kids, two sons: Anton is now nine and Conrad is already six, which is great, but it also means I have less time to play around with the technical stuff. A couple years back, about seven years back, I had some cardio problems and I decided I need to work out a little bit more, and of course, because I do everything in the extreme, for the people who know me, I decided to do a couple of Ironmans and trained up for that and completed that. In 2007 I also decided to leave the consulting business and try start-up life, and so I have been involved in four start-ups. Keas was in the healthcare industry; they tried to improve people's lives. Then at a broadband company, which had Comcast as a large customer, I developed a multicast IPTV program. At CipherHealth we did some Obamacare-related stuff, and now I have been for three years at Shoobx, where we handle all of the governance of early-stage companies, from incorporation to equity management and board management. If you have a small corporation, or in Germany an AG, you know that there's just a lot of overhead with maintaining that, and so Shoobx does a lot of this for you. All right, now enough about me. So here's sort of the history of Zope's, and in that sense also Plone's, workflow story. It all started in February 2001, really, with the first commit to the CMF, and I dug this up: Tres Seaver did that commit, and it already had a reference to a workflow tool in CMFCore. And I remember the story actually quite nicely. Somebody in the community was excited about CMF coming out, because Zope Corporation, or Digital Creations, had for a long time announced that it would be coming, and everybody said, but there's no workflow. You have all this workflow tool definition and there is none. And Zope Corporation said, well, you, the community, should develop your own workflow, and everybody was like, no, but we don't want to. And eventually Digital Creations got pressured into releasing their internal workflow tool based on CMFCore, called DCWorkflow, which Shane first publicly committed in June 2001.
So it didn't take too long for the community to pressure Digital Creations into that, and it filled out all the functionality, and as I have learned this week, DCWorkflow is still in production and running, which is quite interesting if you think how old it is. Actually, I think by that time it was called Zope Corporation. Zope Corporation was then paid to develop another workflow product, basically for the Navy, and they required all these very stringent government processes, and that didn't really fit well into the DCWorkflow story. So they looked around and found the WfMC standard, for which editors existed, and developed a workflow engine on top of this, which is known as zope.wfmc, and there was an attempt to standardize this so multiple communities could share this sort of workflow approach. Clearly, with the departure of Zope Corporation from doing this type of work, it died in 2011. And actually the story around this, as Jim likes to tell it, or rather as Paul Everitt likes to tell it, is that the Navy paid Zope Corporation a lot of money to put workflows into the system, but then they didn't like it and didn't want to actually follow the process, and they paid them a lot of money to rip it all back out. So zope.wfmc never really saw production life for a long time in the Zope Corporation world. So the last sensible commit was in 2011. As a reaction to zope.wfmc, because it was the only Zope 3 story (oh wait, I didn't turn this on), it was the only Zope 3 story... I get an echo here. Can you guys hear me back there? Okay, so I'll leave the mic off, that's better. I don't know whether you guys remember, I remember this vividly: Martijn Faassen ran around the community and said everything is too complicated in Zope 3, all these adapters, these utilities, that's crazy. And he had this tic for a couple of months of developing the hurry namespace; everything is done in a hurry and very simple. And one of the outcomes was that he wanted to reproduce what DCWorkflow does for Zope 2, so he created hurry.workflow for Zope 3. I checked; he doesn't seem to use it anymore, and it ended up dying: 2013 was at least the last real bugfix commit. Well, along came my new startup, Shoobx, and it was a perfect match for a workflow engine, so we decided to resurrect zope.wfmc and see what we could do with it. And this talk is mostly about what we were able to do with that base, where we went, and the problems we solved. But let's go back. So what are DCWorkflow and hurry.workflow? They are known as state-based workflows, right? That's what the community usually calls this. And the way it works is you have a target, in your case usually a content object in Plone or a document, and you have a single attribute where you store the state of that object, and then the workflow engine really consists of defining transitions between these states, including permissions and all the good stuff that belongs to it. So the state machine, and I really want to call it a state machine, manages the state and the transitions, and then the progression of the states, and enforces the rules of the transitions. And it's very simple. I think hurry.workflow is only 150 lines of code, which really represents the core of such a state machine. So I went to the Plone documentation and I found this example that I had seen already 15-plus years ago as the example for how to publish things in Plone, and it's still in the documentation, so I just redrew it and put it up here.
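To make the state-based model concrete before the walkthrough of that diagram, here is a minimal sketch of such a state machine in plain Python. It is not DCWorkflow's or hurry.workflow's actual API; the transition table, role names, and function names are illustrative only.

```python
# Minimal sketch of a state-based ("DCWorkflow"-style) engine: one state
# attribute per object, a transition table, and a role check per transition.
# Illustrative only -- not the real DCWorkflow / hurry.workflow API.

class WorkflowError(Exception):
    pass

# transition name -> (from_state, to_state, roles allowed to fire it)
TRANSITIONS = {
    "submit":  ("draft",     "pending",   {"Creator", "Manager"}),
    "publish": ("pending",   "published", {"Reviewer", "Manager"}),
    "reject":  ("pending",   "draft",     {"Reviewer", "Manager"}),
    "retract": ("published", "draft",     {"Creator", "Manager"}),
}

class Document:
    def __init__(self, title):
        self.title = title
        self.review_state = "draft"   # the single state attribute

def fire(doc, transition, user_roles):
    src, dst, allowed = TRANSITIONS[transition]
    if doc.review_state != src:
        raise WorkflowError(f"{transition!r} not allowed from {doc.review_state!r}")
    if not (set(user_roles) & allowed):
        raise WorkflowError(f"roles {user_roles} may not {transition!r}")
    doc.review_state = dst
    return doc.review_state

if __name__ == "__main__":
    doc = Document("My article")
    fire(doc, "submit", ["Creator"])
    fire(doc, "publish", ["Reviewer"])
    print(doc.review_state)   # -> published
```

The point about hurry.workflow being roughly 150 lines is visible here: the whole model is one state attribute on one target object, plus a transition table with permission checks.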
So here's how it works, right? A state-based workflow. You have three states, draft, pending and published, and you define the transitions, which are represented by the arrows, and I put the actors or participants, the role or roles that can execute that action, here as well to simulate the security. Now, really, the start and finish here make no sense; I just did it so you know how to read it and that it goes from left to right, because you are really just maneuvering between states, right? You don't really think about activities. It's also very interesting: you can then take a published object and retract it back to draft. So this is what you guys, if you use Plone, do all day long. You might adjust this with more steps or more states, more transitions or fewer transitions, but that's basically how it works. Now, the reality is state-based workflows are not really workflows, and there are two big problems with this. First of all, you can only manage a single target, like a single content object, and you can really only maintain one state at a time, unless you define a second attribute and play this entire game again. And I was asking around here all week how this is done, especially the single target, which really bothers me as a limitation, and I was really hoping for the answer that somebody tells me, well, really we create like a folder-ish object and dump all our resources, like images and so on, that belong to an article or blog post in there and publish that. But nobody gave me this answer. It was all like, ah, we open a transaction, we push all the artifacts through, and then we commit the transaction; it all works out. So that was sort of the answer. And I was sort of disappointed. I thought it had gone a little bit further than that in the last 10 years since I have been paying attention. And then I thought, you know what, I really think we invented state-based workflows. So I went out there and looked for state-based workflows, and ironically the .NET world, especially Microsoft SharePoint, and even the Java world have picked up this term, but I have not found a reference prior to 2001. So are we to blame for having invented state-based workflows? Probably so. It's quite ironic. There are lots of things in the Python and larger tech community that were actually invented by the Zope and Plone community that people now just accept as a fact of life. It starts with the term sprint, which was coined by Tres Seaver, and it ends with relative and absolute imports in Python, which were definitely a ZCML invention, even though Guido will never admit it; I was in the room when he picked this up. So let's talk about activity-based workflows. Clearly zope.wfmc is an example, and it is really based on BPMN. And BPMN is a graphical representation of processes, and it is widely used in many, many different industries. So for example, I was in China a few years back, of course at a manufacturing company for electronics, and they are of course all ISO 9001 certified, which is a quality assurance standard. And what do I see? A gigantic wall with a gigantic process looking exactly like this. Oops. Let me see that I can turn this off. There's a gigantic list of these boxes there describing how the quality assurance process works.
And not surprisingly, if you talk to companies like IBM, Siemens, Deloitte, any of those big IT companies, when they talk about workflow they mean that, and they all have super sophisticated tools to run these types of processes in any sort of industry, whether it's manufacturing, data warehousing, whatnot, and execute these workflows. So let's get back a little bit to the history. BPMN has existed for a long time, and it was mostly meant so everybody speaks a common language; like everybody knows how to read a construction plan, you want to be able to read a workflow. That's what BPMN was. The problem was people wanted to automate these processes, but BPMN 1.0 did not define a serialization format. So a bunch of the big players got together, created the WfMC, the Workflow Management Coalition, and defined an XML-based serialization format called XPDL. And so XPDL was then used by all the early workflow engines as the input format. Now, in the last few years BPMN 2.0 was released, and it finally specified an XML serialization format of its own; it's part of the BPMN 2.0 standard. So in some sense XPDL is now obsolete. But there are so many processes out there still using it that it will be around for a very long time. Now, the reason Jim also chose XPDL is because back then, even in 2004, there was already an editor around called the Java Workflow Editor, known as JaWE; sometimes you also find references to it as the Together Workflow Editor. It can edit these XPDL files, including the ones that zope.wfmc could digest. And here you see a screenshot of such an editor with one of our more complicated, or usual, workflows loaded up. And the nice thing about the Java Workflow Editor is that it's highly customizable, because it is intended to be white-labeled for large corporations. The guys have a lot of bank customers and other highly structured organizations. I'll get to this in a bit. So let's convert our state-based workflow into an activity-based workflow. You will see a few differences. So these are our actors here, and these lanes are known as swim lanes. And you basically, almost always, have a system swim lane where the back end executes something. And basically, instead of describing states and transitions, you only describe activities, things that you do, more like the transitions. But it's very different, right? So you create a draft of your article. Then you ask the reviewer or the manager or the editor to review the article, and then you publish it. And you can see publish is already a system step that is not even a human interaction; that happens once the article has been reviewed. This little diamond with the X is called an OR gate in BPMN. And it's basically just a simple if statement. You can have multiple, more than two, transitions coming out of this. At Shoobx we always make it a Boolean decision, because it makes it much, much easier to read and follow the workflows over time. And if you say no, then we go back into the draft stage, and if we say yes, then we publish it. I also added here that if the creator wants to retract his article, he can use an exception flow; that's why it's red. They can retract it. So you can have multiple end points. I just wanted to demonstrate that as well. Are there questions about this? Because we're building up on this example for a little bit. Yeah. No, exactly, it's not. You see, look how I did not use nouns. At Shoobx it is part of our quality assurance that these boxes are never nouns. They're always verbs. It's create draft, review, publish. So these are actions.
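Since XPDL is just XML, a process definition of the kind edited in JaWE can be inspected with nothing more than the standard library. The fragment below is a hand-written, heavily simplified stand-in for a real XPDL file (real XPDL carries namespaces, packages, participants and much more), and the parsing code is illustrative rather than zope.wfmc's actual loader.

```python
# Sketch: read activities and transitions from a simplified, XPDL-flavored
# XML document. Real XPDL 1.0/2.x uses namespaces and far richer structure.
import xml.etree.ElementTree as ET

XPDL_ISH = """
<Process Id="publish_article">
  <Activities>
    <Activity Id="create_draft" Performer="creator"/>
    <Activity Id="review"       Performer="reviewer"/>
    <Activity Id="publish"      Performer="system"/>
  </Activities>
  <Transitions>
    <Transition From="create_draft" To="review"/>
    <Transition From="review" To="publish"      Condition="approved"/>
    <Transition From="review" To="create_draft" Condition="not approved"/>
  </Transitions>
</Process>
"""

root = ET.fromstring(XPDL_ISH)
activities = {a.get("Id"): a.get("Performer") for a in root.iter("Activity")}
transitions = [(t.get("From"), t.get("To"), t.get("Condition"))
               for t in root.iter("Transition")]

print(activities)
# {'create_draft': 'creator', 'review': 'reviewer', 'publish': 'system'}
for src, dst, cond in transitions:
    print(f"{src} -> {dst}" + (f"  [{cond}]" if cond else ""))
```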
So I have not made a single statement about what artifacts this workflow creates. Does it create one document? Does it create a document with lots of images? Does it create emails? All sorts of artifacts. I have made no statements about that yet. So there's actually no state from a content point of view involved here. And trust me, when I started this in 2013, I had to retrain myself too. It is very, very hard to make the transition initially. But once you get it, then it's easy. So instead of just saying create draft, you could just say create. Yes. Review, publish. And if you wanted to throw a noun in there, you could just say create thingy, right? Whatever it is. Right, exactly. But we specifically want to stay with our publication workflow and publishing an article. So let's keep that in mind. Yeah. Because, well, let's look at the issues of this high-level workflow. This workflow, as I wrote it, is too high level to be useful. It is very, very hard to write machine-executable code based on that workflow that actually will execute anything sensible, because it leaves too much open. And that means the engine provides really trivial value. So there was almost no point in writing a workflow engine just for doing that. And that's, I think, the reason why zope.wfmc didn't survive initially: it was way too high level for people to do anything useful with it. And really, the only benefit we now get that you didn't have with the Plone workflows or DCWorkflow is a graphical representation. But the graphical representation at that high level is so trivial; how much does it explain to you that you wouldn't have understood before? So now I actually took that very trivial translation of the state-based workflow and implemented it as it would be runnable in Shoobx. So I created that workflow and stuck it into our system. And this is what I had to come up with. First of all, we never talked about who the creator and the reviewer are. So the first thing is you need to assign them. Several workflow systems hook their workflow engines up to LDAP and they auto-detect these things. We have found that doing that implicitly was not a good idea. So we have switched completely to assigning these swim lanes manually. And assign creator basically says, hey, assign this lane to the person who initiated the process. And the reviewer, because we are a legal processes thing, is assigned to the president of the company. Only the president of the company can approve these articles. It's simply so it fits in the system nicely. We don't have a good concept of managers yet, so it wasn't that easy. So I changed the language a little bit to say author article, which basically means enter the title, the lead-in and the description. I will show you this in a second. You just provide the data; this is just a simple form input, similar to what you would auto-generate with all the nice Plone tools. You then have to generate the article. In our case these are mostly PDF documents, so in my case I create a simple PDF document for review. And then let's skip that incoming OR gate. And then why? Because the creator never saw the generated article, I need a review step. So I need to give the... do I have actually, oh, I even have a laser pointer so I can stand further away. So you want to give the user the ability to review the article and potentially go back. And we do have capabilities to step back.
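The lane assignment just described, where the creator lane goes to whoever initiated the process and the reviewer lane is bound to a fixed officer such as the president, boils down to resolving one performer per swim lane when the process starts. A toy sketch with invented names, not Shoobx's or zope.wfmc's real participant API:

```python
# Sketch: resolve each swim lane to a concrete user when a process starts.
# 'initiator' means whoever kicked off the process; other lanes can be bound
# to a fixed role looked up in a company directory. Names are illustrative.

COMPANY_OFFICERS = {"president": "richard"}

LANE_RULES = {
    "creator":  lambda initiator: initiator,
    "reviewer": lambda initiator: COMPANY_OFFICERS["president"],
}

def assign_lanes(initiator):
    return {lane: rule(initiator) for lane, rule in LANE_RULES.items()}

participants = assign_lanes(initiator="erlich")
print(participants)   # {'creator': 'erlich', 'reviewer': 'richard'}
```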
So we skip this for now, and then this is an AND gate, which means execute things in parallel, because we send things off for approval, to the reviewer to approve the article, but you know what, we don't want the creator to just sit there and not know what's going on. Because this approval, in our real cases at Shoobx, is many, many steps long: first the lawyers have to review it, then the board might have to review a document, then who knows who else has to give a sign-off, all depending on policy. So you want to give the creator some feedback, and this work item waits until that approval activity completes. And then this gate waits until both arrows, both transitions, arrive. Okay, then it goes on, and depending on what the approval was like, if it was not approved we go back to review. That's why we have to reset the rejection reason, right? We have to clear it out so it doesn't go to no all the time. And then you go to yes, and you see yes is orange, which means it is the default flow. So if it doesn't find anything, if any of the conditions are false, then it always takes this one; it's like the else statement, that's basically the else. And then it's a system step to publish the article. In our case this means setting a finalized tag on it, assigning certain tags to put it in the right folder, et cetera, et cetera. And then you really don't want to let the creator go from waiting for approval to nothing else; you're going to tell him, hey, your article is now published, you can now do x, y, z. So it is a convention at Shoobx to always have a next steps screen where we explain to you what happens afterwards. The other thing that we are doing is we are defining wizards, as you will see in a moment, and I want to point this out here so you can pay attention when you see this while I'm working in the system. So we say this, this, this and this work item are in one wizard; you just say you will have a wizard, and so you will see that for those four steps we have a nicely created wizard on top, and it auto-generates that all just by us putting a little marker on there: you have a wizard, you have a wizard, you have a wizard. It scans the process definition and figures all this out. This also has a wizard. So I ran through this and I thought I had created a great Shoobx process. Well, it all failed QA, because there's even one more step missing here. How does a reviewer even know that he or she has to review an article? Well, there should probably be an email step somewhere here that is only sent the first time, which says, hey, an article is ready for you to review. So I hope you get a sense of what level of granularity you have to get to to make these workflows really useful. I want to turn on my screen so I can see. I need to be behind the lectern now anyways, because I want to show you a demo. And all our Shoobx examples are modeled after Silicon Valley, with Pied Piper and all its characters. So if you watch the show, you will get the references. If not, it will just be normal. So Erlich is a wannabe president of the company, but he's not, as you know if you saw the show. But he is allowed to publish an article. So this is our interface. I can explain more of this later; come to me after the talk if you're interested in some of the other things we did. But we want to publish the article, and from now on everything is defined by the workflow definition we saw. You can see the wizard here.
You see all the same titles show up and I can just now type in BroomeConf article in, it's 2016 so I won't forget. And some text. And this is super trivial, right? You will also notice I can discard a process at any time. So we made that possible as well. So we continue. I can now review my article. I can download the PDF. I'm not going to open this right now. But we can also view the article in HTML because we can generate out of our XML document templates and we can, for example, change the title. We can move to the next section, et cetera, et cetera. So I save this and now I said, okay, my article is perfect. I send it off. And I didn't put a nice message here. Usually we put a really nice message here, but you get a rate step. You're notified that you're waiting for somebody to approve this. We even put the person here with the approval and whatnot. But on the other side of things, there is Richard Hendricks, who is actually the president of PipePiper. And he already, and we are using, yes, we're using web sockets to push things through. He now sees he has an article to approve. He can get a little quick overview or he sees, oh, it comes from Ehrlich, et cetera, et cetera. The document, unfortunately, is not showing up here because I didn't assign a certain tag, I have been told. But we can go in here and he can now approve the article. A couple cool things. Our document templates can be rendered into a visual format. So any condition, any loops have a visual representation. So our lawyers can review even just the template. If you would have multiple versions, we can even visualize version differences. And we ended up developing our own XML diffing library based on the famous white paper for this. So we continue. And Richard is done. Unfortunately, he doesn't get the next steps. So that would be probably another QA thing that would pop up. But if we see due to the push, thanks to the push, the next steps show up. Every time we press continue, we send actually a task for salary. And if the tasks take a long time, there's a spin out that will push back and tell you what it is currently working on. So there's a lot of cool web socket based technology and async technology and all of this. So we can go to the next. Unfortunately, Ehrlich is not privileged enough to see these documents. So we go back to Richard and we can see if he go to his documents. So we can go to articles. The ChromeConf article showed now up. And it didn't show, I should have shown that, it didn't show up before it wasn't published because certain tags get set. So we have actually this concept of states as well. But we just call them tags that are namespaced. And we just have multiple tags on one attribute and we manage them through our workflows. OK. Other questions about that example? Cool. So yeah. You mentioned the OR gate. Yeah. But there's, you didn't elaborate on it. You said, we'll move on from that. Yeah. OK. Sure. So yeah. We have learned over time that OR gates always come in pairs. Most often come in pairs. Because you say XOR, but then you usually with one, you go back. And you have to join together. While the XP.DL standard allows for many transitions to come into a main activity, because this is also just an activity, several other not so sophisticated editors do not allow it. And it is much easier to reason about your workflows and read them if you make another OR gate there. So OR gate only means it only waits for one incoming transition and not for both. The end gate is exactly the opposite. 
It always does set of both, not just one, and it waits for both. Or for N. It can be any amount. And we have some where we do five or six things at the same time. All right. Any other questions? Cool. So just a little summary of what did we, what are the big things that we had to build in order to make all this work? Security but Plone has that too. Check. We are using the Zope libraries after all. We use Zope security, of course, and Zope security policy. So we have exactly the same type of capabilities as they are in Plone today. Machine executable. DC workflow does that too. This machine executable. Check. Full user interface generation and any other I.O. generation. That only works for DC workflow because it's super, super trivial and you can do one size fits all basically approach, right? Because you only have to show states, the current state, and what other state you can get to. There's very little metadata even around any of this. So and for the publication workflow, you know, that's not that manner. But all of these screens that you saw, they're all standard activity types, applications they are called. So that's that. So you need to implement back and forth. I didn't demonstrate this, but you can go back and forth. You saw the back and forth error, right? Because people expect if they see a wizard, I want to jump to this point. You can also click on the wizard itself and jump to a particular point. And then you want to go forward again. And then the problem that arises out of this is, well, you should really remember the data that I typed in in the past and in the future, which is also not necessarily trivial, especially if you now make changes in the past that affect the future. So you have to be very careful of what data you retain. Discarding and of course, discarding with cleanup. So you say, I don't want this workflow. It was a crap shoot. Ignore it. So you want to make sure all the articles and everything gets deleted. Exceptional handling. That was what broke the WFMC's neck originally. And then migrations. Migrations are super, super important as we upgrade our applications and our processes. You know, if you update your process, you cannot just run on the new version suddenly. Because where are you? Do you have all the data? So we do a lot of version management and then migrate data very carefully. And that's a big problem. We haven't really fully solved it yet. OK. So we did a bunch of WFMC enhancements. And all of these enhancements are, we put back into the ZOPWFMC core. Unfortunately, not yet in the ZOPFundation repository. It's on our shoebox. We have a clone on shoebox. But if there's interest from the clone community to use ZOPWFMC, we would have certainly good motivation to clean things up, write the necessary tests and documentation, and publish it further. But until now, nobody came along. So we haven't had much willpower to do it. So we, XPDL 2.1, the original one only supported XPDL 1.0, we added extended attributes. So to any activity, to any process, to any participant, you can put any amount of metadata and then entire XML substructures called extended attributes. And that wasn't implemented. We used that heavily to hint the UI, to hint, as you will see, the simulations and the testing and many other things. So this basically our approach to do a lot, a lot of the automation. Candidate support. So let's say you have two presidents of the company, and you might think that's not possible, but yes it is. You can have co-presidents that can approve. 
So you have multiple people. We have the capability of saying, yes, multiple people can do this, and somebody takes the ball and says, I'm going to do it. So candidate support, which is a very core feature of workflows. We implemented subflows, think about it as macros, like metal macros, task scripts, which are basically arbitrary Python scripts. You did see one with the reset. A data field support, which gives you data typing. Parameters didn't support initial values. Otherwise conditions, the else case that I showed you. Script tag support, because you can specify on a process what scripting language you're using, and arguments to the activities, because, dang it. Uh-oh, you lost it. Do you guys have it over there? Oh, OK. I hope the clicker works far enough, so I come over here. I have been warned that that would happen. So arguments to activities, because you pass in variables, you get variables, are used to be lists, and we converted them to dictionaries, which was a big pain for all migration, because it's much easier if you have key value pairs to do migrations if you get new arguments. Unfortunately, this was thanks to Jim. I didn't tell him about this little thing when we met earlier this week. And then deadline support. You can say, if an activity hasn't been touched for a day, what do you want to do about it? You can raise an exception and take an alternate path through the system. All right. We also enhanced the workflow edit a little bit. We worked with the together workflow guys. They do a lot of stuff in the Java community for banks, especially in Switzerland and Austria. The owner is originally from Austria, but they're all in Thailand. So we paid them to do a couple of things like small bug fixes and larger bug fixes that made it possible to use, how the labels can be positioned, script support, and external editor support. So you can click on anything. And if you specify Python as a script, it opens it up in a temporary file that has a right file extension. So you get the syntax highlighting and all that kind of good stuff in your external editor. We could do a lot more, but we simply haven't gotten around to it yet. OK. So let me talk about a couple of really hard problems that we had to solve. Discarding a workflow. Well, originally, a ZopwMC didn't keep track of any of the activities it had already finished. So we had to start keeping track of all the finished activities. So when you discard it, we could undo them. It requires for all of your activities or these applications that drive these activities to be reversible. Right? You have to be able to say, this generated a document, now we need to remove the document. There are some cases where you have irreversible actions. For example, you created a user. The user has received an email to log in. Well, just because you want to undo your workflow, you don't want to create suddenly a broken link for the user because you deleted the user. So it's better to just leave the user existence. They don't have any access anyway, so it doesn't matter. Therefore, you make the workflow simply nondiscardable. And the prime example of this is, once all parties signed a document, it is legally binding. The document cannot suddenly go away because you discarded your workflow. So in those cases, we make the workflows at that point nondiscardable, which sometimes really makes our users mad because we make it so easy to execute legally documents that they don't even think about it that way. Back and forth, I mentioned that already before. 
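The discard behaviour described above, keeping track of every finished activity so it can be undone, and refusing to discard once an irreversible step such as a legally executed document has happened, amounts to recording an undo action per activity. A minimal sketch, with all names invented for illustration:

```python
# Sketch: record finished activities together with an undo callable so a
# discarded workflow can clean up after itself; an activity without an undo
# (e.g. "user was emailed", "document was signed") makes it non-discardable.

class ProcessInstance:
    def __init__(self):
        self.finished = []          # list of (name, undo_callable_or_None)

    def record(self, name, undo=None):
        self.finished.append((name, undo))

    @property
    def discardable(self):
        return all(undo is not None for _, undo in self.finished)

    def discard(self):
        if not self.discardable:
            raise RuntimeError("process contains irreversible steps")
        # undo in reverse order of execution
        for name, undo in reversed(self.finished):
            undo()
        self.finished.clear()

documents = []
proc = ProcessInstance()
documents.append("draft.pdf")
proc.record("generate_article", undo=lambda: documents.remove("draft.pdf"))
proc.discard()
print(documents)   # [] -- the generated artifact was cleaned up
```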
You have to be able to revert the activities. You have to restore the workflow state properly and remember the inputs, as I mentioned. And that's a very, very hard problem. We spent a loan just to solve this. There are six amendments worth of effort in that. The wizard, I mentioned that before. You saw it nicely going up on top. The hard part about it is, it needs to be generated before any workflow state exists. And as more workflow state comes into play, the wizard actually adjusts and finds the path that you're actually going to take. So once you reject, it will readjust, for example, the wizard. This is also a very hard problem to get right, especially because of all the splitting and joining that you can do. We spent roughly at least three, if not four to five, men months just solving the wizard problem. What happened? Whoa. Okay, there it is. Now, if you have a lot of processes, and I don't know how the Plonero does that, I would love to talk to some people how this is done, and I'm running this video, which is a simulation of our workflow. Run in Firefox, fully automated, is testing, right? And so you want to automatically test your workflows. You want to have coverage. So the big part is you have to provide a strategy for completing your applications. And that took us also two to three attempts to do. We ended up writing extended attributes on each activity, on each interactive activity that the user has input to, define how the user would input stuff, and then complete the activity, and then it would keep going. You want to have save points. We have processes that take 10 minutes to run, or large ones like this. If the developer works on the last two work items, you don't want to have them wait nine minutes before they get any feedback. So we create save points that you can create, and you can start the workflow anytime from that spot. Debugging, you can stop at any activity. We can take screenshots at any interactive activity, and we can parameterize all simulations so that we can create very rich setup environments. For example, the Piper example that I showed you, we just completely simulate that. This is never typed in from hand. So let's see what's next. So I don't know how we're doing in time. Almost good. So quality assurance. So I'm going to just show you the quality assurance instead of going over the steps. So we created a very rich dashboard because we deal with lawyers and they have zero tolerance for problems, not just functionally, but also from a language point of view. So we have 98 processes that can be started from the UI. There are about two or 300 of them with all the sub-processes. For those 98 processes, we wrote 300 sims, and then we have all sorts of document templates, and we test all the permutations of these documents. And in those 98 processes, we have 3,500 activities. So that's a lot of stuff to run. And yesterday, our QA dashboard was all green and it was quite boring, but today I have some interesting stuff to show. So we test all our processes on the four major browsers, and then we run also a bunch of QA checks. So I can show you a little bit of output. So let's not take the stock transfer. So for example, it ran in Internet Explorer, this simple workflow. So we get all the Internet Explorer output, including a screenshot for each step of the workflow. In this case, it was just one. So maybe I should choose a really a little bit more interesting example, like the stock transfer. So here you can see Internet Explorer. 
It tells you it was Internet Explorer 11. And when it ran, and here are all the screenshots of Internet Explorer. And so our UX department can go through these PDFs and check that all the browsers render the stuff correctly. If the simulation was successful, you can also see all the documents that get generated here for your review. We have lots and lots of process controls. So for example, whether it's completed, whether we get the same output as last time, so this is all good. And here's our coverage. So if you want to see how is the process covered, what is it transfer? Transfer certificate. This one is actually 100% covered. But let's say we go to the upload case. Though that's just one simulation. And of course, one simulation can never cover the entire article. So we can, but we can see that simulation went through this specific path. So we have insights of what paths we are actually taken by our simulation. And then basically the top one combines all the simulations, and it's 100% for this particular case. So let me just find actually the one where we have also PQA failure. So here, for example, the long-term stability output change, so we keep track of any state change in the system. And we create a diff, and then we compare the diffs. We use a JSON diff tool to do this. And so we see here, for some reason, this process now has a new email that is being sent and somebody didn't accept that properly. So that's our QA tool and how we keep, make sure that all our processes function all the time. And this is run. We can run this on a database or even more often. It takes about what these days, 45 minutes to complete for full test run. OK, so what's up in the future? I would love to write the shoebox chronicles. That's also something that I tried to do early in the days. With the, in the Zope community, my dream was always to write these rest documents and describe, hey, run something like this in the browser, take a snapshot of the browser and insert a screenshot, and so that we can create support help in an automated way. And then a funny story to this. About two years ago, our customer's success person was really enthusiastic about writing lots of support articles. So she kept writing, and I said there, like, in a few months, you will have to revise all of these. You will have to revise all of these. And no, no, no, we keep writing, we keep writing. We had 72 of them. One month back, we finally did a review. We threw them all away. All 72 were completely thrown away because, of course, they completely bit-rotted. Nothing in those support articles described anything like the system worked. So I would love to just create these support articles via this. And it's not super hard. I would also like to automate video generation, right? Then you could have little self-support videos. And thanks to Selenium, that shouldn't be a big deal. I just have not gotten around to hooking this up. The holy grail for us is to really support the power user. If you can use Excel, or if you are a blown user, not a developer, but a user who can configure a system and create articles, you should be able to create processes in our system. So our idea is that eventually people with legal expertise, lawyers, paralegals can create new processes most easily. That is probably still about five years away because we would have to rewrite or do massive customizations to the editing tool and put a lot of more security in a lot of the Python expression evaluation, et cetera. 
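The long-term-stability check mentioned in the QA walkthrough boils down to recording each run's state changes and diffing them against the previous run. The speaker mentions using a JSON diff tool; the idea can be sketched with a hand-rolled recursive comparison instead, purely to illustrate it (this is not the actual Shoobx QA code):

```python
# Sketch: compare two recorded state snapshots (plain dicts, e.g. loaded from
# JSON) and report anything that appeared, disappeared, or changed.

def diff_state(old, new, path=""):
    changes = []
    for key in sorted(set(old) | set(new)):
        here = f"{path}/{key}"
        if key not in old:
            changes.append(("added", here, new[key]))
        elif key not in new:
            changes.append(("removed", here, old[key]))
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            changes.extend(diff_state(old[key], new[key], here))
        elif old[key] != new[key]:
            changes.append(("changed", here, (old[key], new[key])))
    return changes

previous_run = {"emails_sent": 2, "documents": {"article.pdf": "published"}}
current_run  = {"emails_sent": 3, "documents": {"article.pdf": "published"}}

for kind, where, value in diff_state(previous_run, current_run):
    print(kind, where, value)   # changed /emails_sent (2, 3)
```

Anything reported by such a diff, like the unexpected extra email in the example shown, gets flagged for a human to accept or investigate.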
In the ploneral to your value of what it takes to allow people to enter arbitrary executable code at whatever level. And on a personal note, I would love to bike across the US before I get to Alt. So, OK. You never get to Alt. You just have to go very slow. Yeah, but I also have family. So my goal is to make it in 30 days. Questions, comments, and before I forget, please go to ploncon6feetup.com and rate the talk. Yeah. So as far as actually creating the workload, that was one thing I didn't quite get. So I saw this GUI tool. Yeah. And it looked like you're able to edit the visual representation. Right. And I saw the end result. Yeah. So the web interface. Yeah. So we upload that XPDL file into our system. It's part of our source code. And I did a bunch of other things in the back end, such as creating tags, creating a new folder for this data room, creating an entry for the actions, all these are configuration very much like you used to from the plon world. And then it just works. So I literally wrote zero Python code. Like zero, I did not have to modify our code base, our Python code base, in order to make this work. So that is a very significant goal for us. But high level developers can do this. And I'm more than glad maybe after the talk for people who are interested to show them the editor tool and show you a little bit more of the magic that goes on behind and why the simulation work, why the result shows up, et cetera, et cetera. Yeah. You spoke about having multiple objects that travel together. Yeah. And I wondered if you had, well, what we have is kind of an idea where multiple objects travel together, but they travel through separate workflows that sometimes share states and sometimes some of them will break off and come back. And so what we've got is these cascading workflow guards and rules, but just a way out of control. And I wondered, what did you do? And that is an outcome of the state-based workflow approach. So what we are doing is basically we collect data. So I did not bring up any of our sophisticated processes, like incorporating a company, which generates five documents, for example, post-incorporation. When you set up your board and stuff, creates about 10 documents. So what we are doing is we are collecting the data. We have forms. You could do this in prone very easily with all the machinery that you have. And then we generate these documents in system steps. And we can create many system steps, we create many documents in all sorts of ways. And then we bring them back in. And then we move them along. Because your action is not really, so what's your action? Or what's your goal? A process always achieves a high-level goal, for example, incorporating a company. These documents, if they interact, they are not living in isolation. They move together. Like, for example, I could imagine in the content management world, you have a lot of these things. As I started, you might create an article that has images to it. You might want to, you need to push, you want to publicize this. So you push to some external website. So you have all these versions and artifacts around. But your goal is still to publish an article. But in a case where, give me a concrete example of what you're doing. I'll take from that. In a case where you want to say, okay, we have 10 documents, but the people on the other side are in a real hurry. And they can't see anything until we've published a parent. But we have two that we can't publish yet. But we want to pre-publish. 
We want to say, this thing is now being published. But the state of the main object is not published. But those eight have been published. And when that comes in, you can see. But these two are still out. So either you can't see them or you can see that they're there, but they're in a different state. So the state of the inside of those 10 is all over the place. But somehow the work for them needs to be able to say, when those two become ready to be published, you need to be able to say, okay, well now publish. And that must occur. So we would create one process instance for each document that gets published. So you create 10 publishing new article processes because you need to move 10. You need to be authored and whatnot. And then we would create, for example, you might have something else that is called pre-published. You can say, oh, let me pre-publish on that. It would just be another process you execute. So we are not thinking of this as necessarily as one workflow, but it's many workflows. So you could publish the main container and you could ask it then to push each of those 10 through its publish and two of them will fail. But you see, you're still thinking very much in terms of state-based workflow because you're thinking about publishing a container or talking about making the container available. I would not even, like, the way I showed is like you wouldn't even think about a container. That's just a site artifact that they live in the same container. Like, for example, the same folder or whatever. What your real goal is to generate these articles, these documents, and that they live eventually at the same, like, I would not make my permissioning based on the container, right? You want to really make your permissioning work against each document itself and then you publish the document one by another and thus you can apply different rules. The code that you said is currently living somewhere else, please, would you let us, I really want to have a look at that. Yeah, absolutely. Actually, one feedback when I did my practice run this morning with the company was that he said like I should have added the links. I'll add the links to the slides and then put it on slide shares and then it should be available or whatever Selly does with the publication and you can look at it and get to it. Thank you. Any more questions? We can talk afterwards. I'm here. So any other questions? Yeah, Carlos. Another question, a comment. Yeah? I saw your historical overview. There was an activity-based workflow for SOAP that was built in Italy. It's called OpenFlow. Right. Oh, yeah. Why didn't you tell me this yesterday? Yeah, absolutely. Yeah, that was OpenFlow. But I think it now really, yeah. It was, for a while, I guess it was used. It used the same symptomatology of the work items and the activities. Yeah, but they didn't use XPDR, I think. No, no, they didn't. They weren't graphical. Right, obviously. Oh, yeah, that's right. David Devencenzo did that one, right? OpenFlow. David Devencenzo developed OpenFlow, I think. I think the names of the developers are Richard Lemmy or something like that. Oh, Ricardo. Oh, Ricardo. Yeah, yeah, yeah. Yeah, yeah, yeah, right. Devencenzo de Barone. Devencenzo de Barone. He's in the... That's why I mixed this up. No, Devencenzo de Barone, no, sorry, sorry. No, disoma. Disoma, disoma. That's right. Yeah, no, that is true. I forgot about that. That was OpenFlow at some point. Well, of course, there's no committance to that. Yeah. 
That would have been a good one to add. That's true. I tried to use it once and I forgot. I didn't quite use it. I reduced it for a bank. Any other questions? All right. Thank you very much for coming. Thank you.
|
Stephan will describe the Zope-based workflow technologies he is currently working on. The Zope/Plone community has tried multiple times to make use of formal workflow standards to drive online workflow processes and UIs, without much success. In this talk Stephan will analyze the situation and try to identify why not. His current company, Shoobx, has finally been successful in creating BPMN-based workflows using Python- and Zope-based technologies, which they extended and which are available for everyone to use. Some of the hard problems that were solved: fully automated UI generation, wizard generation, ability to revert to any point, exception handling, simulations and QA testing. He'll also provide a quick update on what he has been up to for the last 10 years since he presented at PloneConf 2006 Seattle!
|
10.5446/55328 (DOI)
|
So, yeah, it's full. So thank you for coming to my talk. I want to speak a bit about PLON, security and front text. First of all, what is security? The two parts, security, IT security and safety. So we do normally speak about those infrastructure, confidentiality, integrity, and security. Availability. The problem with that, those are the attack vectors the normal systems could apply to. The other part is safety. So the functional correctness. Does the software behave like it should? And it's all about reliability so that we can work on it so that it works how it should be. But if you're speaking in context, we should compare to others. Because saying one system is secure without any context to others doesn't make any sense. So what is the CMS market and normally like the ReStory Vendor Map? It shows a lot of the competitors on the market. Those are the relevant ones. So normally if you look on web content management, that's the light blue line here. We have a lot of systems that is known in the different context. So Drupal, Plone is on the type of three WordPress and all the others. But are all those systems equal? Are they comparable? Actually, no. There's another level of it. The ReStory Group have some very good things. They divide between products and platforms. So a product solves one problem and solves it very good. It's very easy to start with, but if you want to extend it, it's almost impossible. A platform gives you the possibility to adapt and do it for your stuff, but it does not solve it that proper than the products out of the box. So it depends where you go, which type you should choose. I always work for universities or larger stuff where a product never solves the problem because universities are special. There's always something you have to adapt so the platforms are making it. So if we look in the open source context, Drupal, Plone, type of three are the three systems in there that normally counts. WordPress, Jomla, easy published Magnolia and so on, which are also very good and popular used content management systems more belonging to the product side. But yeah, there's one fantastic approach in Europe. There is the CMS Garden Project. It's a combined effort of several open source community, marketing, open source content management systems because every product in there solves a different topic. So it's not like we are competitors, we are partners and we learn from each other how the other works, what they do better. And even members of the different security teams attend to their systems so we can interact and see what they do better and what they do worst so that we can learn from each other. So is Plone secure is one of the main question? So if we should look at it, we should say it depends, the own core itself is pretty secure. But security of an installation depends on the installation itself and how you maintain it. If you have a Plone installation, a Plone 2.5 and it's still running, it may be insecure if you don't have patched it. So basically yes, Plone is pretty secure. But why is it secure? What are good indicators for security? And that's one of the problems we always see with marketing. Everybody tries to look at, okay, give me some CVE numbers or CVE scores, what's applied to it? Well, it's not a good comparison because the larger the market share, the more researchers look at the system, the more they found probably. The number of hacked sites, yeah, well, is that really a comparison? It is the matter of the admins to keep their system secured. 
You can even have WordPress which there is thousands of hacked WordPress sites out there so secured that nobody hacks it, but that doesn't mean that WordPress itself is that secure or it is so insecure. So sorry, no, don't compare with such a metric. There are other better things and you should stay objective on comparison. So how to prove security? Security is a process, not a state. You can only work on that process. Every momentum snapshot on looking at this offer security of a system is just for one moment. The next day a zero day exploit couldn't pop up. Somebody found something. So you could not test against vulnerabilities or security issues. You can only test against known vulnerabilities and that's the problem. So if we say, okay, there is a wide variety of problems out there that is known that affects web services, there is the Open Web Application Security Project which every three years does a report on the top 10 security vulnerabilities of web projects. So they make it statistic which are the top 10. So look at your system. Does that apply to your system? That's a good way to look of it. To analyze the process of the developer. How do they work? How do the process of the security team goes? Are vulnerabilities announced official? Is there a bug fix release? Is there a hard fix release? Something like that? What is the release process? Do you have the difference between a bug fix release and the hard fix release? So some don't have. What's the information policy? And another thing, did members of the community or the security team are depend on one company to do some stuff and even to do hard fixing before release of official hard fix? So the top 10 was report, the last one is from 2013, name injection, broken authentication and session management, cross-site scripting, insecure direct object reference, security misconfiguration, sensitive data exposure, missing function level access control, cross-site request for gallery, using known vulnerable components and unvalidated redirects and forwards as the 10 major vulnerabilities in the web at the moment. If you do a review of all the cores and all the add-on products that are relevant for a system, you probably find some of those in each system. And there are several studies on doing so. So there has been a study in 2013 for a few counter management system and that's the compared source of vulnerabilities in common-seam S. The report from 2013, it's from the German BSI which is the Federal Office of Information Security. So I guess in US it would be like the US third at NIST or comparable level at the NSA, probably. So there was a study that compared Drupal, Plone, WordPress, Jomla and Type 3, but just by analyzing the published data of the last three years. There's one thing, if you look at it, how much of the vulnerabilities are found in the core or in an add-on. Every other system has the major vulnerabilities in their add-on story. Plone has the major vulnerabilities in its core. Why is it so? It's not because the Plone core is so insecure. We do have a story for add-ons that keep your add-ons secure. So if you add an add-on to a counter management system, normally you make it more insecure for Plone. That's not true in that way. Not necessarily true. You could have some add-ons that make it worse. Well the problem, if we said always, the security track for Plone is the lowest of every system. As I said before, it's always a point of momentum. The BSI does a second study, 2016, it's not published to today. 
They do a much deeper approach. They now did penetration tests of five major systems. So they said they take the three most used systems in total, which are PHP systems, that is WordPress, Jomla and Type 3 for the German market. So not the word market, otherwise it would be Drupal. And they took two other technologies, Python and Java. So Plone is the Python content management system. So we go through Plone. LiveRay was the most used Java technology in this share. So they do for that. The first column is vulnerabilities in core. The second one is vulnerabilities in add-ons. And the third one is based on the common documentation for hardening your stack on an installation. There was one hardening description fault in the Plone documentation that they saw that could be something. But it's medium level. All of those vulnerabilities was reported in late April beginning of May, 2016 to the security teams. All of the vulnerabilities in there has already been fixed. But that's just a momentum. The problem with comparing that is you compare apples to oranges. WordPress, Jomla and Type 3 were bug fix releases in a minor version that was compared. Plone 5.0 was the base for the reported on Plone. So if you just look at raw numbers, it doesn't say something. And that's a problem. But the systems beside, so what they took another look, what do the systems do to make your system or your installation secure? So there's the question. Is Plone more secure in comparison to those other systems? So WordPress, Jomla, Drupal, Type 3 in comparison? Well yes and no. Now the other CMS itself are also pretty secure. Security is a process and depends on the setup and the maintenance. So if an institution hosts WordPress, Jomla, Drupal, they can do it right. But then you have a system that nobody uses it because you have a very limited set of functionalities because you have a very limited set of add-ons in those products. The problem is yes, Plone is more secure because the CMS itself is not enough. For most of the other CMS you need a lot of add-ons to get functional comparison to Plone. With those are neither maintained or with then security state that the core in Plone itself. So and the other thing is the empirical results we see. Every university has a variety of content management systems. We do see a lot of hacked WordPress, Jomla, Type 3 pages on our university. Each week get newly hacked. I've never seen a hack on one of our Plone installations. Never. So but that's empirical. But why seems Plone so more secure? Well, Plone has a different focus in some way. All the errors just publishing web content. Plone has very strengths. An intranet and what it's intranet about, it's about confidentiality of data in some fact so that you can limit the access to different groups. That makes one of the strengths. And Plone actually is more than a CMS. That's why the real story group also added us always in the portal engine type and in the areas before of social stuff. And there's a lot of more things going on there. Python itself makes also a bit of the strengths of Plone because the Zen of Python, if you can read and understand the code, it's easier to keep it safe. The system design, restrict the Python access control, all the stuff that it's doing can keep the system by design a bit more secure. And there is no SQL database so you do not have the major injection problem itself. And Plone always choose the best of reach approach. 
If we don't have to maintain a technology like Pound before, we switch to something HAProxy. We don't use KSS anymore. We switch to pure JavaScript and the libraries. We don't have to do it ourselves. So if you take the best on the market to do your stuff, you have a greater community that cares about that. And one thing that always was named as one of the good things in Plone is our code skeleton generators for add-ons. Because that gives a secure base path for adding features to Plone. And that makes you a bit more secure because you don't do the beginning faults. And there are additional reasons. And some of them sounds pretty obscure. So Plone is very complex. And if the security researchers or hackers don't understand it, it's security by obscurity. Sorry, it is. Plone did not have the large market share we would like even to have. So it's not interesting for bot networks or hackers that try to get a lot of sites hacked to do something with them. And it's too hard for them with Plone. So you just have to be above the limit to getting attacked. That doesn't mean that you're more secure than the others. And another thing is if somebody tried to invest time and money to hack a Plone site, we have so many high-valuable targets, say FBI, government infrastructures, United Nations and so on. They will get hacked first. And they work with the security teams. So if those exploits come to the wild and go into an attack for one of the normal sites, the security team already knows it or has already had a fix for that. That uses. And the other thing is Plone is always proud of their security concepts and they teach about it. So the users of Plone are normally a bit more aware of what is security and how to keep your installation secure. But it's like with all complex systems, they are inviolable. The law of John Gaule, a complex system that is work, is invariable found to have involved from a simple system that works. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with the working simple system. That applies to Plone. We have involved a long time to get to that state. The other systems grown from a very small block platform or simple website generator to something where Plone actually already is. Plone is one of the enterprise open source content management systems. So back to the basics. Safety and IT security. If you look at the safety part, yeah, we do have the sun of Python. Readability counts. Explicit is better than implicit. Simple is better than complex. And we do apply to it. We do use it. We try to do it for all of our stuff at the moment. We do have all the code conventions. And as we have seen, even for a typo fix, there are robot tests that acts if a user has signed a contributor agreement, we enforce them on Core 6. And there are lots of good tools in the community and widely used to ensure that they apply. The other thing is the code skeletons that we use to generate our plugins. But if we look the wider, we do ever document all the 10 problems from the O-Watts report. How Plone used to not, or why Plone is not runnable in total to that. Or how the Plone community tried to avoid being attacked by that. Also yeah, the whole stuff. But if you go in detail on that, what does it mean? So if you look at confidentiality. Do you have the permission and workflow? Roads, permission, guard expressions. Did you know that we have read and write guards on every attribute on Plone data? I've ticked a lot in the restrict. 
I've ticked a lot in to restrict the pattern and access control in the last few months. It's pretty astonishing to see what in the whole security design has been done to protect a script or something to not write on an attribute of an object in the ZODB if it's not, the user is not allowed to. And the one problem normally the BSI or other institutions said, if we rate content management systems, the highest level of security is a medium security because for high security even admins should not see data. Well, Plone out of the box said, the manager role can see everything. But we can do custom workflows so that even the manager decide admin role did not see the data at all. So what's about integrity and authenticity? So yeah, restricted Python and access control. All attributes and objects has guarded methods for read and write. Permissions to read and write on objects, attributes, views and everything are applied. See on all objects and all the things you look up through the catalog. We do have the history of objects. So we see if someone has changed an object automatically and who has changed it. Setup and Plone development. The common approach is as done in the unified and slower or the answerable playbooks. We do separate between a build out user and the running demon. We disallowed to change the application itself. The problem in all PHP systems, well, if you're looking in the basics of computer science, we do have the Von Neumann architecture or model. So data and code live in the same memory. Computer can override code. Plone did separate it. You could not access the file system of your application through the web without or with the basic Plone core. There are add-ons that try to do that and could do that. But normally you do serve on a different things. If you're looking at PHP systems, your PHP is just the document root of the web pages around and you have mixed everything. So if you go around, you can touch the file systems in most of the cases and modify your code. So there is a large problem and that's why the PHP system suffers from the add-ons. They can modify everything because they live in the same application root. The other thing is something like sanitized input. We do filter on every input through the rich text fields. We escape HTML entities on other fields. So if you write an add-on, look out, don't have the structure keyword in a tar render. Thank you. Thank you for saying that. Another thing is the CSRF protection that is built in now for a Plone 5 and with Plone 4 CSRF protection also for Plone 4. If you look about availability, Plone itself, if you just run Plone, could be hard fucked up by availability. There is a lot of things that are long running and don't answer that quick. So it could be. But the documentation always talk about how to do scaling, how to go even into an infinity scaling if necessary, how to do load balancing even during upgrades without being down for that time. And we have the caching tricks. So we can do all the stuff with the things we learned in Plone to do a system that is always up and always available and even cheap rendering. Was it Dylan with, I think, for the Australian earthquake protection? They said, do a short term caching, even one minute or five minutes is fair enough for your site. If you don't get any problems on an emergency, if everybody would like to look at your site and you don't have already scaled up your servers for that problem. 
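The short-term caching advice above comes down to sending a small shared-cache lifetime to whatever proxy or CDN sits in front of the site. In Plone this is normally configured through plone.app.caching; the generic WSGI sketch below only illustrates the principle, and the one-minute value and middleware name are arbitrary.

```python
class ShortSharedCache:
    """WSGI middleware that lets a front-end cache absorb traffic spikes.

    Anonymous GET responses are marked cacheable for a short time, so the
    proxy (Varnish, nginx, a CDN, ...) only hits the backend about once a
    minute per URL, no matter how many visitors arrive at once.
    """

    def __init__(self, app, max_age=60):
        self.app = app
        self.max_age = max_age

    def __call__(self, environ, start_response):
        anonymous = ("HTTP_AUTHORIZATION" not in environ
                     and "HTTP_COOKIE" not in environ)
        cacheable = environ.get("REQUEST_METHOD") == "GET" and anonymous

        def _start_response(status, headers, exc_info=None):
            if cacheable:
                headers = [h for h in headers if h[0].lower() != "cache-control"]
                headers.append(("Cache-Control",
                                "public, s-maxage=%d" % self.max_age))
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)


# Usage: wrap the WSGI application, e.g. application = ShortSharedCache(application)
```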
And there's also a documentation and in the answerable playbook to active bands if there are misbehaving clients, so use it. You can secure the installation. So if you look at the work of security teams around, I do like this picture from Anheim. Anheim was a city that was attacked by the Germans in the Second World War and mostly completely destroyed. That's one of the memorials and there is a sentence in Netherlands, the most men's and three in Inklings, Dead and Art, which says, the most people stay silent, only if you act and that's the normal problem. We do have very few people that act and doing the background stuff where you never get the honor for or something, but to keep the system safe and those are the security teams mostly. So if we look at security teams, what do other security teams do better than us? Most or some have better communication chattels settings. So if the people want to inform you and want to do it over in secured communication layer, they publish the GPG or SMIME keys for doing so. There is more usage of issue trackers in their communication at the backside and the Jammler community for example has a fantastic setup on communicating with the hostess in Germany. For example, they announce to the hostess, especially if there are known vulnerabilities and to shut down the systems of their client in some ways or to get the security team from them in if there are any problems of their systems. So it's giving and taking on both sides. So the hostess can provide a better secured system and have the feedback how many users using actually Jammler in which version on which system. The phone home functionalities we do see in Jammler WordPress and so on. They do look if there is a new version available. They do automatic updates functions and so on. In the control panel, well, we don't want that explicitly, but we probably want to inform the users better and we are working on that. And some others have better communication out on their CSV reports and so on. There are, I think the Jammler community really do a lot of good stuff in the last few months and years to do a better communication. So we should really look at these things and say, could we learn from them? But on the other side, what do other systems do worse? Some security teams are just attached to one large provider and has done the wrong thing to supply a patch to some customers before the official release. We might on the right side, it was named the Drupal Gaton. For PHP systems at the moment, we do see after a release of Hotfix or the bugfix release, because they couldn't do Hotfixes like us, one and a half hours later, the first automatic attack to larger providers starts one and a half hour. And some of them did not have a one hour timeframe like we for providing the Hotfix to say, in that time, you should apply your Hotfix. Most of the systems have problems with that. Well, the other thing is they only do bugfix releases. So you never know if there comes any functional change into the system. If you just want to have the security fixes, could it be done? For major sites, it's not applicable that you just upgrade to a new version without ensuring that everything works. And if you only have a timeframe of one and a half hours, that's not doable. Only secure, or there are in some communities, no security information available, what's done or there's no security process on add-ons. And we see in those communities, the add-ons is the major vulnerabilities. 
So if we look in our conclusions for ourselves, security itself is one thing, but it depends on the users of PLONE to keep PLONE itself secure. And it's like for all systems, never use a system as is. If you use PLONE just out of the box, you have a fantastic CMS. But you haven't scaled it up for your use, you haven't secured it more. You should do it. And my base recommendation is take at least 15 minutes per day to take your system secure. Look at your logs, look at if there is any updated plugins, systems or something. Look at it, do it. So I say thank you. Go and supply information and any questions. Yes? Yes. How did they choose the add-ons that they are using? They did for both studies choose four stories they want to apply to. And they asked the security teams of each community which add-ons should be applied to solve that problem. So this was even the meaning of having the communication with the security team and seeing that there are a lot of vulnerabilities in the other systems. Yes? So I know that for PLONE it's a very small list. It was 30 in total. Now for PLONE it was four add-ons. Four add-ons and some of the others presumably much larger. Yes. I think for WordPress and Drumlight it was about 40 or 50 even. So for PLONE it was the known things like a PLONE form gen. I think, yes, as I said we should do it or propose them the add-ons that the normal user would try to use. So there were no specific securities like Honeypot, your collective Honeypot add-on also, but the normal used in the wild PLONE form gen, could you remember which else it was? Yeah. And actually they misidentified a vulnerability behind PLONE form gen as a core of vulnerability. And one other applied from the core to PLONE form gen. So any other questions? I know PLONE has developed sites for the API. Yes? So my question is, were there any extra features besides a single PLONE relation be in the API case or is just a typical PLONE feature? The next talk on the next slot is from Nathan from Gen. I think most of the things he does for the CastleCMS is probably the same ideas that they have done for FBI in some way. If you have a strong security need check out the Zope Replication Server. Indeed. It is your best friend. You will find you can achieve a great deal while really making a quantum leap. Yeah. In your protection model. You could do a lot. So for example, I have the same setup that I have a ZRS replication to read only a ZO server and the front ends are read only clients to just attach to this ZO server. So the front end, even if it gets hacked the next system, take the system down with another virtual instance in and it runs with the same data as before. You could not go further. There is a lot of possibilities you can do to protect your sites. And I think the last hack of the FBI was some kind of media wiki page that was accessible to the public. So it wasn't the PLONE site. So yeah. And there are other large institutions that are pretty much depending on security. For example, in Europe there is an institution called INESA which is the European institution for network and information security. It's a level of the European Union federal officers that is responsible for all European nations. Sweden states as one of the security teams and they use PLONE as their websites. Any other question? Yeah, I do a shameless promotion. Yes? The FBI, everybody uses the FBI as the example of why PLONE is so secure. I work for Radio Free Asia. We host content for Asia to bring news content to central China. 
We are constantly trying, China's government is constantly trying to find attack vectors in us. And we have a pretty much a perfect record. That's not me. That's not us. I'm the developer. I'm the one who puts the holes in there. But it's actually the PLONE. So I love that the FBI gets lots of love. But nobody ever talks about Radio Free Asia. Or CIA or all the others. We're a target. And we've got it. And a lot of... Anyway, that's shameless promotion for Radio Free Asia. And that's the same for most of military's around. There's a lot of military installation on using PLONE as the base. And it worked. So, thank you.
|
Plone is one of the most reliable content management systems and it has a very impressive security track record compared to other relevant commercial and open source CMSes. Alexander will provide insights into the security concepts of Plone and other systems. He will also explain why Plone is a good choice for security-critical portals and websites, referencing several security studies and techniques that analyze various security concerns. In addition to a general overview of security concepts and how Plone utilizes them, Alexander will also describe how the security teams of different CMSes work, making this talk interesting to decision makers as well as system administrators, integrators and developers.
|
10.5446/55331 (DOI)
|
Hello everyone, this is my short demo about WIM and how I use it. Why should anyone use WIM today? It's a really old editor, but it's available everywhere, nearly. Each web server with Translinux or other form of Unix most likely has it pre-installed. You can really extend it to your liking, you can really configure it as you want and use plug-ins so that it gets really good to work with. For me it's an editor which I think supports me very efficiently in my work. Of course there are plenty of other editors out there which are really good, but this talk is about WIM and I'll show you how I use it. I did a short survey about which editor people are using out there. Thanks for everyone who participated. I will just open it and look at the results. I have some problems with the display, unfortunately I can't mirror it or something so I have to work on the screen there. This is what I was asking for. There are some popular editors, for sure there are some missing and I actually find the results very interesting. Most of the people who participated said they are using WIM as their favorite editor. That actually surprised me because I thought most people are using sublime these days. This is the second largest group and one is using text-made. It's still a good editor I guess. By Jarm. Only eight people are using it. I would love to look a bit into it. I've heard that it's a really good Python editor, then Emacs, five people and Atom. This is also something which surprised me, even five people using it as a main editor. WIM IDE. I was very happy about this submission because I was using WIM IDE once and WIM IDE is actually very nice for editing Python projects. It has really good code completion support. This is something which I missed for years in WIM but now I get around to it and have a solution so that I have something similar also in WIM. Actually not code completion but jumping to a definition of a code which is more important for me in my mapping. So let's go through the talk. An ACAD WIM is hardly usable for me. I mean I'm not a core WIM user or something. I have a lot of key bindings which really hardcore WIM users for sure would not use or so. I have a console and AHS to mark everything because I was sick about this long comment I had to type otherwise. And I'm using the cursor keys a lot, not H, J, K and N, how others are doing it. And let's start WIM without any configuration so that it can show you which problems I have with WIM when it's not configured. The dot WIM I see here is actually more or less empty. I want to load some things just to see how with these basic comments you can make it work much nicer. And now I go to insert mode and edit something and I use the cursor keys now and you see it doesn't do exactly what I'm expecting. And the backspace key does not work but I can insert something. Does the leapshine work actually? I have a hard time to move around. Yeah it's not actually the way I like to work with him. So I quit this without saving and do the set no compatible setting. If you write in your comment line SO and the percent sign then this file is loaded, provided I have saved it, I have to save it before. Unfortunately you cannot see my keystrokes because of the moment of setup. And now if I go into insert mode again I can move around with the cursor as I'm used from other editors. So this is in my opinion a very important setting. 
Then which I found annoying when I started with Wim was if I copy something with Yank Yank I copy the whole line and I can paste it somewhere multiple times. But now if I delete just one sign then this sign is now in my copy register. And I Yanked just this one sign and I was really annoyed about that one because I often delete something and had still something to paste somewhere. So I undo all these things now and if you do this setting normal mode, no remap, delete key to the black hole register then this doesn't happen anymore. So if I now delete something, copy again something and delete just some signs I still should have, I forgot to load the Wim settings. So again this is my Yank buffer now to delete some signs and try to paste again what I copied before and it works as I expected. So this is another thing. Does that map the X key as well or just the delete key? The X key which one is it? Delete with X. You don't delete with X? No? Use X to delete. Yeah. On the X on the keyboard. No, as I said I'm using my own keystrokes like I was used from other editors. So try deleting a character with X and see a command mode you can delete with X? But I mean try if it's now in the buffer. Yeah. It is here. Okay. Okay. I think you can remap with that. Remap what? Yeah. Like I do. I use the delete key because this is what it's for. And I guess if you use all those building keys from WIM, the standard configuration, then you might be more efficient in typing as well. But that was not the main focus. I just wanted to be productive very quickly. Then you can set syntax on and you get syntax highlighting. Highlighting. Uh-huh. So now we can't read it. Yeah. Let's. Uh-huh. I want to do it now. Is there a different super-sylactic and what's better on background? Is there a different super-sylactic? Yeah, sure. There are a lot of different and a lot of different color schemes. But here in this configuration where I did not load my whole WIM configuration, I don't have this available. Then with set number, set cursor line and set color column, you get some other nice things. Here now you have a row which shows you on which row your cursor actually is. And the color column on the side shows you the 80 character border. And you get those numbers on your WIM editor showing you the line number. But let's switch back to my WIM configuration. On this GitHub page, I have my WIM configuration available and I borrowed a lot from. Look at Gabas and just in case you want to look it up and use it. If you WIM uses on Linux WIM RC file which is loaded first when you start WIM. And I point it to the WIM directory and it looks like this. I have my configuration split on the different files and I source them. Can you read it actually? Can everyone read it? Yeah? Okay. And the byte and config WIM file is something I want to show later. This bar says byte and bar from my build out directory so that I can use it for Yeddy which I use for code completion, not actually code completion but jumping to definitions of statements. And I use a lot of plugins and configure those plugins in a separate file. I use a file chooser which is called Ranger which I really like. Actually, I use it not statelot but it's an alternative to choose files to other options. And this is the main configuration of WIM and the key maps. I have separate files so that I can look up different key mappings very quickly. Yeah? What are you doing for Jddy? Jddy is, I use it for, it's an auto completion library which... 
You said you do something specific so it includes stuff from your build out. I want to come to that back later. Just how many? Like three. Three, really? Oh, shit. It's a short demo. Yeah, I started some minutes later. Okay. Let's go to the, yeah, byte and config. This, start with Jddy in that case. This is a script which I wrote which passes a byte and path file which is in the bin directory of my build out and it adds these byte and paths to the byte interpreter which WIM uses. So Jddy can import those byte definitions from there. And for example if I go to project and start WIM here. Sorry. And open up a file. Sorry, it's the wrong thing. Now I should be able to jump to the definition for example of blown auto form directive order after. It does not always work. Okay, this worked. So actually the other way around would just to search in the file system which was really cumbersome and I'm happy that it works now most of the times. So you will find the whole configuration in the GitHub directory I've shown you before. And just as a hint sometimes you have to go to the file system if you use Linux then there's a cache.cache directory in the home directory and sometimes you just have to remove the Jddy directory in there. But let's show off some plug-ins very quickly. This is something I wanted to do before. For example in WIM there is a minimap. If you like this feature from Sublime you can use a minimap in WIM also. You see it on this side. Of course you can use build-in features like forwarding of code and so on. What else? In important I have to go back to the instance of the editor and here it works auto format samples and JavaScript. It has quite a... I use WIM auto format for auto formatting code and here for example you have some minified JavaScript code and with auto format you get actually something which looks quite nicely. Auto format is... I shouldn't have opened up this file but... This on the side is the new registry which I... I use a plug-in manager which is called WIM plug and the auto format plug-in is here and you can... in the defaults file you... or the auto format are configured. For example for JavaScript... For JSCS or whatever is actually available on the system is used to auto format code. And what I really like is the code-linting feature of Synthastic which is a very essential plug-in. Two minutes? Thank you. And sorry, code-linting plug-in. For example if I open up a test py file which has a lot of errors or just linting errors in there it just shows me what is not a web aid compatibility. I use actually Flake aid as the linting library and for example here it just tells me that the date time to date is imported but unused so I can just remove it. Or here I have some spaces which... Or actually dubs which should be removed. And also I can format... Auto format this code and with key combination of leader and I I use I sort to sort the imports and it's actually quite helpful sometimes. Another nice feature is if you open up a JSON file and have it written in just one line you can use external tools to format your buffer and it's written back to the buffer and it's done like that. Percent which uses the whole file then the exclamation mark which you can use external tools then I use Python minus M and JSON.tool and it should format me like that. It was unfortunately a little less time that I hoped I had but I hope you got something from that. I will post it somewhere like github.com.tat.fels.wim. Thank you.
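Earlier in the demo the speaker mentions a small script that parses the Python paths out of a buildout-generated script so that jedi-vim can resolve imports from the buildout's eggs. His actual script lives in his published Vim configuration; the sketch below only illustrates the idea, and the `bin/instance` file name and the parsing details are assumptions.

```python
# Sketch of feeding buildout egg paths to Jedi: zc.buildout writes a literal
# ``sys.path[0:0] = [...]`` block into its generated scripts, so we can parse
# that list and extend sys.path of the Python interpreter embedded in Vim.
import ast
import os
import sys


def add_buildout_paths(script=os.path.join("bin", "instance")):
    """Append the egg paths from a buildout-generated script to sys.path."""
    if not os.path.exists(script):
        return
    with open(script) as f:
        source = f.read()
    marker = source.find("sys.path[0:0]")
    if marker == -1:
        return
    start = source.find("[", marker + len("sys.path[0:0]"))
    if start == -1:
        return
    end = source.find("]", start)
    if end == -1:
        return
    for path in ast.literal_eval(source[start:end + 1]):
        if path not in sys.path:
            sys.path.append(path)


add_buildout_paths()
```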
|
Vim is everywhere. But out of the box, it is tedious to use and doesn't aid you much with your programming tasks. Due to its flexibility, it can be extended and configured to perfectly suit your needs. This demo shows you what is possible with Vim, how it can be configured to efficiently support your workflow, and how it can help you write better code with auto-completion, code analysis and auto-formatting.
|
10.5446/55332 (DOI)
|
Nice chan! 와.. Kiitos t後所有 podcastillw AC St dark AC St AC St AC St AC St AC St AC St AC St but the world can offer you. So be excited. And actually this is my background. I used to do a biton, I used to do a blown. And this very morning actually it was funny because Facebook reminded me that I have been in a blown conference nine years ago. And that was the first blown conference I ever attended. I think it was on Naples. Vienno. I haven't been in Vienna so at least I don't remember. Okay, and let's go a little bit backwards like the definition of the problem. Or why we need to create web frameworks. Why not everybody is not using Django. And if you don't remember anything else from this presentation, remember this. That I think the job guys, the blown guys and the pyramid guys are the smartest Python developers out there. Like the things like a travel cell and object database and stuff they do. It's very clever. And it makes, if you use it correctly, it makes you very productive. Like the tools you are using able you to do more tasks in the same time than what would happen if you would be using flask. Or let's say even something worse like Java, Enterprise, Java pins or whatever they are called. So productivity means that the code is easier to maintain. It's more secure. It's easier to extend. And you actually need less lines of code. When you write, you are saving your fingers and your wrists. Maybe as an old person you have functional hands left. But with the pyramid itself, even if everybody loves it, there's a problem. So like David was putting out in his presentation yesterday. It's a framework of frameworks. Or framework for frameworks. It's basically pyramid is aimed for the guys who know everything already about web development. How to create a session. How to do a database migrations. How to create forms. You want to choose your own form library and stuff like that. And if you are just like the poor guy who is not the hardcore web developer, he just wants to make his first application out there. These patterns are not accessible for those because they want to start with something working. They don't want to go out there and evaluate all three form libraries. Like what's the correct one to use from those. And this means that it's not like Brandon was putting in his presentation. It's not very approachability. It's not very simple because you have options to choose. It's not very easy to learn because the information is out there in different places. And also what these guys, they are very technical. So they are not very good in marketing unlike Django girls. So when you come to this, I don't say it's crappy, but this not so fancy website. It's kind of a turn of there. And those are the problems we are going to address in this presentation. So this photo is from a blown event called Sona Sprint. It happened in Finland 2011. It has nothing to do with the current presentation. But I just put it there so you know what it means to be in Sona. And about web Sona. So we have, I have been working on this project since 2015. So it's two years old. There are some public sites. The most good example is something called token market. So if you want to see how web Sona works, you can go to that site and sign up. We have an active community. We have a chat in Giddar. And I know that there are at least two other people on this planet who are using the framework. So we have at least three users. So we are still a little bit baby. And we have a Twitter, which is misspelled. 
And we have at least 40 followers. So we are not still like on the same scale as Prandran was putting Django and Flask. We have a few more users to come that we can reach the same level of popularity. But it doesn't matter because even if you are not popular, you can still be better. Like millions of flies can be wrong. So the thing is that we are solving different problems. Like at the other end of the planet, it's like it's very small core and you need to bring everything yourself. Then you slowly move towards Flask and Django. And you can get more tools out from the box or more batteries included. And in web Sona, I have tried to go even Bayon Django. So that default functionally you get out from the box is enough to run your business in a web. So you will get a default team, who is web based, you will get a login and sign up. So you don't need to start inventing. How are you going to log to a website because that's kind of a boring problem and so on. And the design goals here have been that it's easy to approach. So it actually works out from the box and you only add your own business logic and you don't need to do this boring task like a login. It's very well documented and it's actually using a Python 3 feature called type hinting. So if you have ever worked with... Oh actually, how many of you have ever used type hinting in Python? Yeah, so it's coming, it's coming. It's very fancy feature. Python 3. Yeah, it's built in in Python 3. Python 5 actually comes with... I don't remember the model. I think it's called type hinting. Typing, yeah. Yeah, and yeah, and it's a new project so there's no Python 2, there's only Python 3. And everything like Python 3.5 plus. And it's a simple. So you get the default scaffold, how to create your application. So the files are always in the same place. You get the use.pi, you get the models of pi and so on. It's very secure. I used to work... Or I'm still working in finance. So I know how to make sure that the Russians don't get into your website. And also, there's no... This is also a huge... This advanced for Pyramid because it doesn't have enough standard core functionality. So building a back package like admin or sign up on the top of Pyramid is not good because... Your Pyramid users, they bring different template engines, they bring different session backends and so on. And if you try to build a plug in on the top of Pyramid, it doesn't work because there's no like the underlying components are not there yet. And you need to write the tons of adapters to make it work. And it's like a tons of extra work for basically nothing. And also one way to describe WebSona is that it's like Django, but it doesn't have a few months of the bad size of the Django in it. Myself, I was a Django developer since 0.96 or basically the first release. I worked for Django sites for 12 years, so I know what's good and what's bad there. And I want to not repeat the mistakes of the history again, but actually create something better. Like what we have learned since 2005 and what better tools we have in our toolbox today. So this is basically what the WebSona will offer you. It's a Pyramid routing, SQL Alchemy models. It has GINSA2 templates, PyTest testing, Celerity tasks, some top tier with the components and interfaces. You have a default deployment playbook with Ansible and it even has integration for iPython notebook. So you don't need to open your terminal for the Python cell, but you can do it straight from the browser. 
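For anyone who, like part of the audience above, has not used the Python 3.5 type hints mentioned here, this is what they look like in practice — a throwaway example rather than actual Websauna code:

```python
from typing import List, Optional


def total_price(prices: List[float], discount: Optional[float] = None) -> float:
    """Sum up line prices and apply an optional fractional discount.

    The annotations document what goes in and what comes out, editors and
    type checkers (e.g. mypy) can verify calls against them, and Sphinx can
    render them into the API documentation.
    """
    total = sum(prices)
    if discount is not None:
        total -= total * discount
    return total


total_price([10.0, 5.5], discount=0.1)   # -> 13.95
```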
And what WebSona adds on the top of that is integration. So we have a package layout, we have documentation, we get the sign up sign up. We get social authentication with Facebook and Twitter and so on. And very important factor that Brandro was highlighting today is that you get an admin interface straight out from the box. So if you create an SQL Alchemy model, you get the web interface where you can go add and create and delete those objects. And another way to see what WebSona is doing. So this is basically the run function which sets up the framework. And it's a split up to a method in a subclass. So if you want to change something in WebSona, like you want to change the template ensing, you can just go there, there's a function called configure templates. You can override with your own version and bring in your own template ensing. If you want to use different email backend, there's a function called configure mailer, which you can override and you can add your own Miling package instead of the default, that's the Ramit mailer and so on. And in WebSona core you get functionality that Ramit has, but when you start the project, it's not there, like a default use, like not found and so on. You get the sessions so you can track your users, you can do actually something useful. You get some view configuration and you get the template engine set up out of the box with some useful filters and variables you can use in your templates. Like especially if you deal with the date times, you can actually format them with the different time zones and stuff like that. Then you get the admini interface, which is unlike in Chang'e, it's placed on travel cell, so you can actually have multiple levels of paths there. So you can have a tree, you have an organization and under organization you have customers and under customers you have orders. So you can have this kind of flexible pattern there that Chang'e can't do because all the URLs are hardcoded. Then you get sign up and social login forms out from the box and there's a new pattern I'm very happy about, it's called passwordless login. So basically you are using your email address as an ID and every time you want to login you just send a new link to your email address. This does make sense because all the websites have a forward password functionality in any case and people are not writing down their password. So they keep hitting that link every time they want to sign up to the website. So it makes sense to make the session last forever and not give them the passwords in the first place. That's how if your website gets hacked you don't leak any passwords because you don't have them. It also comes with the teaming, so it comes with the basic site layout and it has a boost app tree team. So what you can do when you install a web zone, after that you can go to www.wrappboostrap.com and buy a team for your website for 10 bucks. So you don't need to hire a designer, it's unhappy as an engineer. Because it's boostrap it's not like the best of out there but for most of the customers it's good enough. So you save the cost of having your own designer doing all stuff. Or if you have a designer they can focus on doing actual productivity stuff like logos and stuff like that and they don't need to worry about how to style a button. Then you get the forms which are based on colander and the form packages. You get this crowd functionality so you have a base class that does add, edit, delete and lists your SQL acme models in your database. 
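The talk describes the framework set-up as one run function split into overridable methods such as configure_templates and configure_mailer. A sketch of such a customisation is below; the import path and the constructor/entry-point details follow Websauna's documented pattern as I remember it, so verify them against the current docs rather than copying them verbatim.

```python
# Assumed import location of the base class -- check the Websauna docs.
from websauna.system import Initializer


class MyAppInitializer(Initializer):
    """Application-specific set-up: override only the hooks you need."""

    def configure_templates(self):
        # Keep the default Jinja2 wiring ...
        super().configure_templates()
        # ... then add your own template search paths, filters, etc. here.

    def configure_mailer(self):
        # Either call super() for the default mailer set-up, or wire in a
        # different outgoing-mail backend here instead.
        super().configure_mailer()


def main(global_config, **settings):
    """WSGI entry point referenced from the application's INI file."""
    init = MyAppInitializer(global_config, settings)
    init.run()
    return init.make_wsgi_app()
```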
You also get some security features, cross site request for forging protection, it's actually built in the pyramid 1.7 now. And you get the throttling. So if somebody tries to approach for your login form with the different usernames and passwords it will just after like 60 attempts it will block the guy. And that's very important like in the modern internet environment where you have a lot of these botnets going around and they will just try to hack any website out there. They don't care if you are a high value target or not. They just come to you and push everything through your login form. You get tasking so you can actually have this kind of chrono style tasks in python that do something every day. Or what's very important for a scalable website you get the delayed tasks so that in any when you are processing a request you can say send an email to this user. But do it after the request is complete and don't block third party API call or don't block third party SMTP until the request is complete. So you have a very low response times and everything happens background on the tasks. So you can send an email and it actually comes with the default HTML email template so you don't anymore send this crappy looking emails to your customers but they come with the like a default logo and food or and so on. And you don't need to also watch actually HTML email is not very simple because most of the HTML features don't work in email. So you can use the default base template where you are only using that subset of HTML features which we know that will work with outlook and Yahoo and so on. You get the static media so you the website builds CDN friendly URLs so that if you update your CSS file it will change it in the user browser and you never have a problem that you have. Your template and your CSS file would be out of the out of the sink so that you will be using a old CSS file on your customer browser. And you get the security so basically I mentioned my favorite thing nowadays is this password listing. There's SQL alchemy so you don't get any SQL injections. It has cross site scripting protection in the templates. Also it's using a SQL transaction isolation level to it's by default it's the maximum value so you it's the same as in so DP that if you try to write to the same row at the same time one of the transactions. Rows back one one completes and then it will just. Retreat the HTTP request again. That's what exactly what blown is doing and why it's so freaking awesome. It comes with. Any base configuration files and because those files are not very flexible there's a little layer on the top of that that allows you to use includes so you can have a one. Base configuration file for your site and then you can have a different layers on the top of that like a development production testing and so on. And there's also separate file for secrets, which is basically API keys. And there's a framework support for that that you never commit your API keys to your repository, but you handle those out of the low. And. Here you can see the actually type hinting in the action. So. So it's very easy to read. You know what what goes in and what comes out. This is a sping documentation. And also on the top of that there's some custom tools to make documentation for every template you write for every template variable you use us using those templates and. All the teams are filters you also using those templates so. You have a complete a complete reference of all the stuff you can do with the web sona. 
And it comes with. Default. Ansible. A deployment playbook, which means that if you have a web sona app. You have committed to. Kitterepo story. Tell that hey here is my new linux server. Install my app on that and the playbook will do everything for you. So it's basically a single line command and telling hey deploy my. Web sona slash pyramid application on this server. So it will go to the server. It will install. Postgres ready is any kings and stuff like that. It will run your migrations if you have a school migrations and it will set up your email and everything. So it's very very friendly because. Development is only half of the story. The other half is like DevOps like how to run servers. And even if you learn. If you're a newcomer to python and you learn how to write your first view. And then somebody comes to you want to make it to. Internet and then somebody comes to and says you and I'll see hey yeah it's simple just install endgings. Here is the link and you go to endgings website and you are like oh my god. So we will just give you this one common. It's like a hero kupa without hero kuku and it works with every server out there digital ocean and so on. But hey if you have a web sound app just go and buy a server and it will run there and it's safe. And that's pretty much about it. So check the website and what I would appreciate is that even if you don't intend to use web sound come to hang around in the chat. Because I like when there's a lot of long list of people in the chat. Even if they are not very active because it means that the community doesn't does care and maybe in the future they will convert to a website users. And if you have any questions I think I have a little bit time left. Yes. Okay. So let's do so that the Nate will ask one question and then somebody else asks a second one. First I just like to say that web sound. So we still call for web sound. We write it this time and you will be done. Preferably call it. How is it taught to the user tonight? Yes it's on the application level. It use something called rolling time window on the top of the redis. Which is something I didn't come up with myself but I made it a little bit better. So that every time you hit the view it marks it in a redis that okay we get the hit here and here and here. And it has a backlog of let's say 60 entries and when you exceed that during one hour it says that okay this guy is hitting it too much and then it will send the new HTTP code saying that you have too many requests or something like that. Three ways what you can do. One is global on the system level which I recommend because of these bad guys. I know that they have tens of thousands of IP addresses and with the IP6 you have, I don't know even how do you say this number in English but it's a huge number with a lot of bits. So yeah, mud load. And you can't basically anymore block anybody by IP address on today. Then you can of course you can do one is by username. So if some user tries to request money like 1000 times per hour you know that it's probably not real. And there are a couple of other tricks you can do but it's flexible so you can add it in your own logic. Excuse me. Unciple what? Yes, it's actually using default playbook is using it. But the thing is that you still need to do secrets outside Unciple like on your own laptop because you are not going to run Unciple on your laptop to run it on your P server. So you need to have some kind of tool for that. 
I don't have a good solution, but I know a person who is willing to work on that if we give him enough money. Yes. I think the first comment on that wrapper is "this is temporary, never use this". So whoever is currently maintaining Pyramid — is it Michael Merickel? — he wouldn't like it. Yeah, I know, I tried that first and it was horrible, so I made something a little less horrible but still kind of scary. You don't want to show it to your kids. So for the configuration files, my plan is that after some sponsorship and funding and stuff like that, I'll move everything to YAML with a built-in extension and inclusion mechanism, and also make it standalone so that other Python projects can use it. Like SQLAlchemy, what else do we have, Ansible itself — well, Ansible is a different thing, but you know, you can use those standalone and not just together with your web framework. Yes, it's Alembic. Yes and no. The CRUD base classes are flexible, so you have a generic CRUD and below that a SQLAlchemy CRUD. And I have made one site which was using only Redis. But it's still something where you are probably going to write some parts yourself, because I haven't been using that many other databases myself. Yeah. Yeah, it's abstract in the sense that it's there, but it has only one implementation at the moment. Yes. No. Oh, yes and no. No, it comes with default Bootstrap and npm files, but we have one project with a former Berlin-based developer where they were developing the client side in React, so I know that Websauna and React work well together. And I think in the long run I'm going to move the whole admin to React or something else, because it makes sense when you have a lot of data, so you can have all this cool stuff we saw in Lawrence's presentation going on. Yes and no. Basically you are asking about presenting an SQLAlchemy model — generating forms from it — which is the most common problem. Websauna itself doesn't do it, but there are a couple of other libraries out there which do it for you, and they are integrated at the Pyramid level. So whatever works for Pyramid you can use with Websauna. It doesn't take an opinion on that, but there are solutions for it. Okay, anything else? Thank you for a lot of questions — I think it was more than I have ever had, so it's good to hear that people are interested. Oh, and there is one more thing I would like to say. I would like to thank Mr. Tarek Alam. He was the first Websauna user, and I forced him to go through the tutorial. He is not very familiar with Pyramid and stuff because he has a Plone background, but he found a lot of pain points, and he suffers for you so that you will have a better life.
|
Websauna is a Pyramid and SQLAlchemy-based high level web framework with a lot of Plone influence. It is aimed at building customer-facing web applications. Think of it as Django without too much Django in it. Core features include automatic admin interface, Bootstrap theming, sign in and sign up, social media integration, Deform-based forms, Jinja templating, traversing and ACL. Plus of course all the Pyramid goodness.
|
10.5446/54544 (DOI)
|
The all trainings are being recorded. They will, we don't know how long it'll take to get them published, but they'll be up as soon as possible. All right. So let's just begin by doing a couple of some introductions. And just state your name and why you decided to attend this training. So my name is Steve Piercy. I'm a web application developer who is a freelancer, self-employed. So I am a one person business. So you'll get perspective as a one person business pyramid developer. I'm also a core contributor to Pyramid and all the pylons projects underneath the umbrella. And I'm here today to do training for Pyramid. So with that, let me pass it off to Peacock. Okay. Hello everyone. Nice to meet you. I'm Peacock and I'm from, I'm connecting in Japan, from Japan. I work in Japan, CMS-Com, Telerosans company. Why I decided to this training is I have this flask and bron a bit, one time. So I'd like to talk about it. So I'd like to running other web framework for Vax. So nice to meet you. And please help for English. Thank you. Don't worry. I will speak slowly and clearly as best I can. And if you have any troubles, you can use the, you can also use the chat. And I can use Google translate to copy and paste something. So if you have a question in Japanese, I'll rely on Google translate to translate it to English. Yeah. Okay. I use that when I visited in Japan, I was talking into my phone in English, showed it to the waitress, said, and they go, oh, yeah, yeah. All right, Lucas. Yeah, sure. I'm Lucas Gutzi from Germany, and I'm working at Interactive GameBH in Cologne as a software developer for some years now. And I've chosen this training because I've worked with a couple of frameworks now, but not pyramid. And yeah, that's why. And yeah, also, yeah, I would like to try to see how this works, how fast you can build a website for which purpose is it. And I often hear that in the other platform conferences already. So, it was last year, there was a battle between pyramid and Gautina and yeah, so I'm very interested in seeing how fast it can perform and for which solutions it's good. Excellent. Thank you. Alexander. Hi, I'm Alexander Leuchel from also from Germany. I'm an IT manager at Germany's largest university, the Ludwig Maximilian University. Well, and I'm also a long term, a Plon community member started with Python and Plon 2003. And was in the Plon Foundation Board of Directors for six years. And now for four years in the Plon security team, and mostly doing now other topics, mainly management things but sometimes still some code, and that's mainly in pyramid with open API. So, like Nick has done the training last year. It's very nice and I hope to see some of the parts that I'm still not totally understand so the permission and authorization system, a bit covered in this training. So, not much more the overall content and view part but more some of the advanced parts of pyramid. Just in case you did not know, Alice, there's going to, there's going to be a talk on December nine pyramid and role based access control. I know, I would be there definitely. Yes, I would highly recommend that you attend that one. Okay, what's next, Jordi. Hi, I'm Jordi. I'm from Catalonia, Barcelona. I work as a tech lead at Vinisimus. We are wine e-commerce from Spain. I just arrived there and found an old rope installation and then I started there as a Django developer but then I started digging inside job and all the technologies around it. And it was amazing and also I discovered it. 
I started at Guillotine later I joined at Ona where I worked with Nathan and Ramon with Guillotine and I like a lot all around job blown community. And for pyramid, it's something I like a lot, how it's designed, how well it's integrated. And I'm just here to learn things because I never use a pyramid on anything. And I just want to start with something. And it's, I'm happy to be here. Thank you. I'll come with Ramon. I thought on Monday, on Tuesday around Guillotine and plans on the future and how we want to evolve it. Just perhaps it's, it's, I know that Guillotine is another thing, but I'm using it as a web framework and I'm also super interested in pyramid because I know there are a lot of nice patterns to build a framework. And I'm here just to learn things like this kind of things. And I thought it's, it's on the end it's something like a child of the of the parents, no parents are prone pyramid. So, you know, it's a baby, it's just just new when it's just starting and we have to fix a lot of things but it will. All right. Thank you, Jordy. Chris, do you go by Christopher or Chris? Chris would be just fine. My name is Chris Callaway. I'm in Chapel Hill, North Carolina in the United States. I work for the Renaissance Computing Institute at the University of North Carolina. My first Plon conference was in 2003. I used to work with Plon a lot, but for the past several years, I've been stuck on the project from hell. I'm a dev ops for this Django based content management system called HydroShare for hydrologist. It stores hydrological data and model configurations and kicks off model runs. And I'm here to take this pyramid tutorial because I want to do something interesting for a change. Chris, thank you. Holden, you're up. Okay, let me try this. Can you see me? Yes, we can hear you. Yeah. So hi, everyone. I'm happy to join this. I'm going to start training. Steve, during the 2018 conference in Tokyo. Yeah. We were together in the Plon training. Yes. I read a lot about pyramid, but haven't really tried it. I actually have here an old violence book. I've never really used it. Because we developed in Plon. So I'm very interested to learn. So I'm happy to be here. All right, thank you, Holden. Paul, Paul Roland, are you participating or just monitoring? I think Paul is just monitoring. All right, let's move on to Dio. So I'm going to read a little bit about what you can see is using chat. So if you all can see the chat message, I will not read that for you. But let's see. Actually, I'll just summarize. He just wants to or she, I'm not sure. They want to have an idea of what makes pyramid different from other web developers. And they're a solo developer in Nigeria. Next up is Paul Greenwald. Hi, sorry, my video is not working. I think audio works for us. My name is Paul. I am from Germany. I work for the university in Tristan as a web developer. I'm a web developer in Tristan. I'm a web developer in Tristan. My first project was in 2014 for my master thesis. And I was curious about how pyramid improved over the time. And I also want to refresh my knowledge about pyramid. Excellent. All right, that's everyone. Did I miss anyone? Okay, so we'll be, we'll be, I will be leading the training. I will try to speak slowly and clearly. If I go too fast or if I go too slow, please type in a chat message. And let me know. If you have any questions, you can either type it into the chat session or if you know how to raise your hand, you can do that. 
And I'm not sure where the raise hand feature is here in Zoom. It used to be somewhere, but I cannot see it now, maybe because of how I'm viewing the members of the meeting. Then you have, on the lower part, the different icons for raising your hand, disapproving, quicker, slower, and so on. Gotcha, there we go. And I can also mute people if they're too chatty, or if they forget to mute their mics. Okay, thank you. And we're going to start off by just giving you an overview of the structure and organization of Pyramid and where it fits in. So I'm going to share my screen with you. Hopefully this looks okay. How does that look to everyone? Is it too small? Too big? Perfect. Excellent. Okay. No, this is good, I'm going to go. All right. So Pyramid is one project underneath the Pylons Project. The Pylons Project is the result of a merger between Pylons, the web framework, and Pyramid, which used to be called BFG — or rather repoze.bfg. It's a bit of a different project, but it has its origins in Zope. Chris McDonough, who's the author of Pyramid, as well as Paul Everitt and Tres Seaver, were employees of Zope Corporation at one point, and together they were pretty much at the start of web application development — I'd say among the first half dozen or so people who were right there at the start. And of course Plone itself has its origins in Zope. There are a lot of Zope components and interfaces that have found their way into both Pyramid and Plone. And then, just to give you some resources: Pyramid is a really small, fast web application framework written in Python. It has a ton of documentation, 100% test coverage, and it has a lot of add-ons. I think other frameworks call them add-ons or plugins as well; in Pyramid we call them add-ons, and lots of things are available. As an example, if you wanted to find things about authentication and authorization, you can see that there are quite a few, and this is by no means a comprehensive list of the different authorization and authentication packages — there are lots more on PyPI — but these are the ones that seem to have stuck around the longest. Pyramid's documentation is also available in several formats. Today we're going to be using the Quick Tutorial for Pyramid; it's one of the tutorials underneath Pyramid. And our repository is up there, at github.com/Pylons/pyramid. Okay, any questions so far? I seem to have lost the window for viewing participants. There we go, that's better, I'll put it over here. And I want to also see — where did the chat window go? I think we have lost my chat window. Alex, do you happen to know where that went? If you are presenting your screen, then you should have a kind of menu bar over the presenting window where you can open the window, but the window is normally closed by default. If you open the window — there we go, I found it finally. Thank you. Okay, cool. So I've got it all over there. Okay. So if you haven't already opened up the documentation, I'd like you to have that available in the background; I just pasted the link in the chat. And we're going to do a quick demo — not with this, but with PyCharm. So how many of you are using PyCharm as your editor? I'm using it.
I have a pro license as a backup, but I'm normally using code. I'm using this code. Okay. I have pie charm from work. Okay. I is. Okay. I've downloaded it. I've downloaded it. Well, I'll do a quick demo of maybe one reason to. If you haven't been using. Pie charm. Maybe this will. Get you interested in using it. And just show how quick it is to get started with it. So we're going to do a quick demo. I'm going to start by creating a new project. And I'm going to do a pyramid project. And I'll just get started with this. And we're going to do actually a pyramid project. I can. When I get this started, it'll. Create a virtual environment based on. I use pie and to manage my Python versions. And I can specify. What I'm going to do is I'm going to do a pyramid project. And whether to use. No database backend. SQL coming with a postgres. Or any other SQL database or ZOBD, which. If you're using clone, you may be familiar with. So as I create that. Pie charm automatically creates a virtual environment for me in my project. And I can. Install. Packaging requirements and upgrade them to the latest. So that's PIP and set up tools. Then it will install cookie cutter. And proceed to generate a project from the pyramid cookie cutter starter. Installing all of its product, all of its. And then it will. Make a pyramid project. Once it's done that it pops open this helpful. Read me filed. And its contents tell you what to do. We've already done. Automatically the first three steps. So that just leaves three main steps. So I'll go ahead and do that right now. And then I'll just pull up the. All of the things that I've built in. With a database backend. We use Olympic. Olympic is. A database migration tool. It helps by database migrations. I mean. That. It's a great tool to. Basically put your database under version control. So you can instead of mainly going in and typing raw SQL scripts or using another database management tool. You can manage it by code and put that code under version control. It's super awesome. Highly recommended. And then we go to the front of the database. We can do that in the next few days of frustration. So to start it off, we're going to generate. An initial migration. And then we will actually upgrade to that migration version. Now we're going to load. The default. The default. And because we installed testing requirements, we can run tests. There will be a warning, but that's okay. We see that two tests passed. And one warning will go away in the next release of Pyramid coming out very soon. And then lastly, we can run our project. When I open up a browser window, we see we have our Pyramid starter project running. Welcome to demo. So that's how quick and easy it is to get started with having your very first Pyramid project. Any questions? Okay. Yeah. That's a plug-in in PyCharm. That's the number one feature in PyCharm is the Ncat. The template language that I was using was JINDA2, but you can use any template language that's available. You can even write your own template language if you want to with Pyramid. But the big three are JINDA, Chameleon, and Maco. Okay. There's a couple of other things too within PyCharm that are kind of cool features. And we do have time. Actually, I'll just show you a couple of things that make PyCharm really fun to work with. There's run configurations. So let's say here's one that I like to use by default. PyCharm also automatically generates a run configuration for you. And you can actually start your project. 
And because I configured it to open up a browser window, it does that for me automatically. There's also a plug-in from JetBrains that allows you to do debugging within the browser and back into PyCharm. I'll stop the run configuration. We also can do debugging, which is kind of cool. So if I wanted to debug my project, go into the default view. No, I don't want that one. I want my init. Here we go. I'll set a stop point or a break point. And I'll start running the debug. It starts running. It puts it up in the background. And what's really cool about this debugger is that it can show you all the variables that are happening during the request cycle. So for example, when it gets to this point in the code, I can see that the get request has been made. And here's all the environment variables that are being sent. Isn't that delicious? So, yeah, this is a really cool way to learn how to learn all the inner workings of Pyramid and understand what's happening in the background while doing debugging. I'm going to step into the next line of code and we can see some more stuff that takes place. So one thing that happens is that we're now going to run this next line of code. So I'll step into that. And I can see that a query is constructed as a result of the previous line being run. And it actually says select. Here's that select statement. Now this line will actually execute the query. So I'll step into that and we can see that here is a database model and it has a column, it has a row of data that it retrieved where we have the ID equal to one. Its name is one and its value is one. That's pretty cool, huh? One more feature that I wanted to show about this. I'll keep stepping over, return stuff, it dives into some of the other code, let's step out. But let me go back to my default and I'll just say run to the cursor and there it's done executing. Yay. One last feature on that's really cool with PyTarm is that when we created the database, if I wanted to view it, I can just double click it to open it. It shows me its schema. Olympic is where it stores the version numbers and stuff like that. But here are my models and you can see that my models table has this data. If I open up the models table, I can actually view it, I can edit it and execute the query. I'll pause right here if anyone has questions about getting started with a Pyramid project in PyTarm. Okay. Seeing as there are none, we're going to actually go through creating a project so that anyone can do it. I just wanted to whitch your appetite with PyTarm because I love using it. It's super exciting. And if you want to play with this later, I'll give you all a chance to do that. Hopefully, with new technology, there's this plug-in for PyTarm called Code With Me where I can share my environment with you in real time. I don't know if it really works yet, but we'll find out. So to get started, I'm going to close out this project and I'm going to open up a folder where I have all my phone call stuff. And we will get started. So let's go back to our, we got the schedule, our quick tutorial for Pyramid. Okay. So does everyone have this open in the background? Is not if, or say yes in the chat or something else? Okay. Good. We're going to go pretty fast through this. Let's start with the very first step with requirements. Hopefully everyone has already done this. We're going to be, if you have Python 3.6 or later, that's a requirement. So is there anyone who does not have Python 3.6 or later? Okay. I hear nothing, so we're all good there. 
You probably have Vemv installed if you have 3.6 or later, as well as PIP and all that good stuff. And we are going to be using Unix commands primarily, primarily through this tutorial. So you will have to, if you're using Windows, you'll have to adapt your syntax accordingly, but it shouldn't be too difficult. Chris said in the chat about not doing or using VM, but instead Anaconda. Oh. You know about that? Kanda. I am not familiar with Kanda, so I will keep my fingers crossed if you use Kanda for package management. I know that it's a little bit different. That's all you know about it. What I will be doing is trying to keep it as basic as possible by using what comes with your system. And I know that some people just use Kanda as their Python development system. All right. So first of all, we already have Python 3. So I'm going to skip over that. We use this a lot. So throughout the tutorial, we're going to be using them as an environment variable. So for Mac OS, please copy and paste that in, or not actually copy and paste it, but you'll need to create the virtual environment, create the environment variable them and for Windows something like this. So I will go ahead and do that in my. Now I'll go ahead and actually create my virtual environment. What this command will do is tell Python to run the module them and it will create a virtual environment for me at the location that I specified. So for example, if I do echo them, that's where my virtual environment will end up. All right. And there we go. Hello there. I'm going to get rid of that because I don't need it. All right. I will now do an upgrade of HIP and setup tools because I like having the latest and they should be the latest with Python 3.9. Okay. We're good there. And then lastly, I'm going to install Pyramid as well as Waitress because Waitress is going to be our web server here. All right. So that's the very first step. Everyone should now have a directory where they're doing their project. They should have a virtual environment setup and inside their virtual environment, it should have been, actually I'm running 3.7. That's okay. Within your site packages, you should see all the lovely things including Pyramid, version one, ten, five, inside your site packages. Okay. Is there anyone who does not have something like this? All right. That sounds good. So let's keep moving. Let's go on to the next step. This is just an overview. Actually we'll be having a directory tree, what we'll be doing a lot of, so just make note, is that this tutorial builds on previous steps. And as we go through each step, we will be installing, well, as we go through each step, we'll copy the previous step and then we'll install that new directory as our project in editable mode. And note that this is the command to do that and we'll be doing that often. PIP install dash E for editable dot for current directory. And next step. I gave you a quick overview with cookie cutters and this is really cool to do too. Pyterm does all this automatically for you, but essentially you just install cookie cutter, run it and you have the same options to select a template language, a backend, and run those same commands that I ran in Pyterm. And that's pretty cool, but we're not going to be doing that because we need to start from the very beginning. So we're going to skip over this one. All right. So we're going to start off with the very, very basic stuff about how to get started. Yeah. And I have a question before about cookie cutter and the current template. 
Do you know who's maintaining that and why it does not at the moment support things like namespacing and the different sub directory structure like we, for example, have in the stone community packages so that you normally have a source directory in your package, a test directory, differentiated and then you have the namespaces within, which helps because as you know that Pyramid has a lot of the ideas and concepts and the learnings from the soap and we know that even the Zen of Python has all the learnings from soap in where they say namespaces is a good thing. We should do more of it. Why exactly is Pyramid not promoting them a bit more in their cookie cutter section? So first of all, it is the Pyramid cookie cutter starter is maintained by contributors to the Pylons project. Your second question about namespaces is definitely valid and it has been discussed. And I just think that no one has, one of the issues is that when you create the project from the cookie cutter, it is really basic and it is essentially just a few files. So you will get a test file, a routes file, and in it and then you will get your models, views, templates and static directories. And we don't do, we don't split those out yet. That becomes an opinion and yeah, namespaces are freaking awesome and we could start doing that but at a certain point then we say now we have to maintain that and we are telling everyone how to do things. So it is like one of those balancing acts like how do we get people started without giving them too much to work on. But also we want to have them do the best practices going forward and what people might end up doing is just throwing it out and doing their own thing. There is definitely an opinion too about, well, do we structure things as a module or as a package and I have my package package within a package or do I organize everything in my views folder, do I organize everything in views and models and all these other separate places. So that is another reason why you might not want to use namespaces. Does that make sense? It makes sense but for the people that maintains larger projects, namespaces become a very quick thing and then you differentiate between the main project and the different models you have used within. Yes, and this is very true. And I think that at a certain point, the cookie cutter gets you started, it does not get in the way of you doing that. And I think once you have a project that gets to be that big, one of the beauties of Pyramid is that it will allow you to adapt your project to that. But as far as just getting started, it is called cookie cutter starter and it does what it does and it helps to also minimize maintenance. To be honest, it is like three people here are maintaining this cookie cutter and very diligently and get a lot of contributors. Okay, any other questions so far? All right, so let's go on to our single file web app and try to get it going. So within our project, we are going to create a new directory called hello world and then we will CD into that. So let's go ahead and do that right now. So I am in my project directory and I am going to make a new directory and CD into it. You can see that is where I am. And let's copy this hello world app and put it into the file at that location, app.py. I don't want a new data source. I want a new file. Oh, yeah. Now run the application. And then visit that in the web browser. And lo and behold, there we are. So let's talk about this with the structure of a basic web application. 
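For reference, the single-file app being walked through here looks roughly like this (it mirrors the Quick Tutorial's hello world; the print is what produces the "incoming request" lines in the console):

```python
from waitress import serve
from pyramid.config import Configurator
from pyramid.response import Response


def hello_world(request):
    print('Incoming request')
    return Response('<body><h1>Hello World!</h1></body>')


if __name__ == '__main__':
    with Configurator() as config:
        # The route name and URL pattern are registered separately...
        config.add_route('hello', '/')
        # ...and the view is attached to the route by name.
        config.add_view(hello_world, route_name='hello')
        app = config.make_wsgi_app()
    serve(app, host='0.0.0.0', port=6543)
```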
So with any Python project, we always have some imports. Waitress is our server. We have the config module and the response module. It's starting in line six. We define a function that receives the request and returns the response. So that's the basics of web stuff. Here's the incoming request and here's the response being returned back. We should have also seen when I made the request that lo and behold, incoming request actually showed up. So when I do a reload, I should see, yep, there's another incoming request there. Then moving down into the actual meat of the application, we use, we create a configurator object using Python context and within that we just add a route. And this is something that's unique about Pyramid. A lot of frameworks smush together the route name and the URL. And that makes it difficult to do, to use something called predicates. We'll cover predicates later. But one of the beauties of Pyramid is that when you specify routes, you can use predicates to say when somebody makes a get request or a post request, do one thing and for the post request do another. That's super cool. And one of the best things about Pyramid. In the next one, we have a view. This is the function hello world. We tell that for this view, we configure a view that will run the callable hello world using the route that we named hello. Again, this is something that's pretty unique to Pyramid where we have this separation of the views from the routes and the route names. This is so awesome and adds so much flexibility. The next one, we create the application to spin it up and then finally we serve the application. So that's the very bare bones essential for a Pyramid app. Anyone have questions so far? Okay. We do have some extra credit questions. One thing, if you wanted to spend some time playing around with this, you can, yeah. Absolutely. That is very true. In fact, I do that a lot with my web application. It helps, for example, Holden's question is, can you have several views for the same route? And the answer is yes. So on a lot of my applications, I have separated the get request from a post request. And the get request, I serve up a web form. And that's just the form being served up. And it will load data that comes from a database. Then on the post, when somebody submits that form, I'll have a whole different set of logic that gets executed. So it will validate the form, it will write to the database, it could do some other task. And it helps to keep that separate than having deeply nested logic. And then it just becomes, for me, in my brain, it becomes easier to manage. Okay. So, any other questions? All right. Well, let's go on to our next step in our tutorial. So packages. So here we have, so Python has its own thing about packaging. And a lot of folks get confused about how to structure their own project. We're going to start it off by learning a little bit about Python packaging in general and how that is applied to our project. So basically, in our training here, we're going to have a directory for each step within our project. So it'll have our project will end up having multiple directories. And each one of those directories is going to have a setup pipe file. And that will basically inject some libraries and dependencies so that we can actually run our project. And then inside of our directory, inside of the tutorial directory, we will use an init.py file, Python file, which turns a directory into an actual package. So that's the basic requirements of the Python package. 
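To make the "several views for one route" answer concrete before moving on to packaging, here is a hedged sketch of the GET/POST split Steve describes, using the request_method predicate; the contact route, template name and form field are invented for illustration.

```python
from pyramid.view import view_config


@view_config(route_name='contact', request_method='GET',
             renderer='contact.pt')
def contact_form(request):
    # GET: just render the (empty or prefilled) form.
    return {'errors': {}}


@view_config(route_name='contact', request_method='POST',
             renderer='contact.pt')
def contact_submit(request):
    # POST: validate, write to the database, maybe queue an email.
    name = request.POST.get('name', '')
    if not name:
        return {'errors': {'name': 'Name is required'}}
    # ... persist and redirect in a real application ...
    return {'errors': {}}
```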
You have a directory name and inside of that, an init.py file and then other Python files that are modules within your package. And you can nest those one with the, and the other. And we're ready to install our project in development mode. We'll run this lovely command and get that going. All right. So let's get started with this step. So what we'll do is we are going to change directory, go up, and then make a new directory name package and then CD into that. All right. So I'm going to kill that. CD will pack up. And now I should have a new directory package, which I do. And my current directory, oops, current directory, if I could type that, is there. I'm going to create a new file setup.py and create that in my package directory. And we'll now install it. Oops. What did I do wrong? Oops, I forgot. Skip to head too fast. And then pip install the, no, press. I should have done that in package setup.py. I missed something. I did not. Oh, there we go. Bad copy paste. I thought I did something wrong. All right. Let's try that again. There we go. So here I had actually installed, I installed tutorial. And tutorial requires pyramid and waitress. So those two should have been installed into my virtual environment. And yep, they should have been and they're good to go. Now I'm going to make a directory name tutorial. That should show up here. And within there, I'll create an empty, almost empty, init.py file. So new Python file. And one last thing, going to make it slightly modified. I'm going to put this tutorial app.py. Create a new file. And app.py there. And finally run. And reload. And lo and behold, it's working. And we have incoming request here. So essentially what we did, the difference between this version and the previous version is that this version uses packages. Packages allow you to start using namespaces. And that's really fun. Okay. I see a hand up from Alex. Go ahead. Sorry, I still have the hand up from before. Okay. Questions on, are there any questions on this step in the tutorial? Yeah, I don't see any. So we'll keep moving. All right. Let's do one more step and then we will have a little break, a bio break. It uses configuration stuff. And configuration is primarily handled using.ini files in pyramid. So what we'll be doing here is to just basically modify our setup pie and to make it have an entry point and that just tells pyramid the location of where to find the whiskey app. So let's go ahead and actually do this. So first we will copy the previous step and copy and then CD into that new directory. In our, we're going to make a couple of changes here. So we have entry points being changed as that's going to be modified. So we'll make that change in our setup pie file to find the entry point. And we will install our project. And this will regenerate the egg at tutorial.eginfo. In fact, I'll just open that up so that you can see what happens. I think is it this one? I think it's this one. No, it's not it. Oops. Whoa. I can't remember where is it, there's an interesting file that changes. I saw the requires info. Yeah. Let's go ahead and run that and install. And it should update the egg stuff. Let's see. Yeah. Didn't really do anything, but anyway, we've now actually specified that this is the entry point and updated the links to it. Now let's make a configuration file at the root of our project in INI. New any file. So file. Is that it? Development. And then lastly, we're going to change our code because we no longer need that import. From our tutorial file. 
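The two pieces being wired together in this configuration step look roughly like this: setup.py grows a paste.app_factory entry point, and that entry point names the main() factory in the package's __init__.py, which pserve then calls with the settings from development.ini.

```python
# setup.py -- the entry point tells pserve where the app factory lives
from setuptools import setup

requires = [
    'pyramid',
    'waitress',
]

setup(
    name='tutorial',
    install_requires=requires,
    entry_points={
        'paste.app_factory': [
            'main = tutorial:main',
        ],
    },
)


# tutorial/__init__.py -- the factory named by the entry point
from pyramid.config import Configurator
from pyramid.response import Response


def hello_world(request):
    print('Incoming request')
    return Response('<body><h1>Hello World!</h1></body>')


def main(global_config, **settings):
    # `settings` carries the values from the [app:main] ini section
    config = Configurator(settings=settings)
    config.add_route('hello', '/')
    config.add_view(hello_world, route_name='hello')
    return config.make_wsgi_app()
```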
There's our init file. Now we've actually got something there. And now we don't need the app file. So we're just get rid of that. Lastly moved everything over there and removed the whiskey import. And lastly, we're going to run the pyramid application. Okay. And why did it do that? What did I do wrong? Let's see what I did wrong here. Double check my steps. Let's do this. Set up pie. I know I did something wrong. Okay. Set up pie is the same. No, that's the problem. There we go. That's a reinstall that this is a problem with just me. Install just to make sure and now run. There we go. That's better. That's better. That's better. That's better. Always editing the right file. There we go. Much better. Okay. So with here. We now have a pyramid app with a configuration file that's. Basically defines, you know, where entry point is. And has server configuration. And you can add to your configuration file. You can add to your configuration file. You can basically use the egg. And using name to get in. And on what host and port number to listen on. So that's what that configuration does. There's a lot more complex things that you can add to your configuration file. Including logging. Database configuration. And then you can add to your configuration file. Any questions about. Application configuration so far. Yeah, Jordy. And boot. Does the configuration supports. Variables. Can you repeat the question, please? The pyramid configuration. Does support environment variable variables. Yes. And then you can add to your configuration file. But it's a little bit tricky on how to do that. There is. You have to specify. A function that will. Load those variables. This is actually. Exactly what we're talking about. There's actually a question there. What is the best way to do that? The best way to do that is to do environment. Variables. This was recently done. I think. Yeah. So like secrets. That's a common use case, right? So you can use that as a password. In your. In E file. And therefore in your source control, we don't. We don't want to put secrets out there. So. We, there's one. There's lots of different ways. Yeah. So your environment database URL. This is one way of doing it. And I know I will find this. I know. I know. Where I have actually. This was recently done. Shoot. I know. There we go. There's that one. This is one way of doing it. And yeah, definitely. This. So let me copy this link for you. Not as the definitive way of how to. Load up environment variables. And then you can use that as a. In any file. Is that it's kind of not the greatest way, but you do have to have this function to load things. We. We do have another way. We can just continue. But let me see if I can extract that code later. Don't worry too much. We can just continue. Something. And just ask it because. What I have checked on the last. The trend of having everything on the coffee. Being. Being able to be a red arrow around environment. Just like if you put your application in a container, you can just. Change environmental variables and it works. But it's. It's. I'm pretty sure there are solutions. Don't worry. Yeah. There's a million different ways to do it. Okay. So are there any questions on configuration. So far. So. So. With that, we are about a one hour in. So let's do. Like a five minute bio break. So if you need to step away from your computer. And. Have some personal care time. Well, let's do that. And we'll be back in at seven 18. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. 
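Picking up Jordi's question about environment variables in the configuration: the pattern Steve shows after the break is roughly the following — run every ini setting through os.path.expandvars before handing the settings to the Configurator. Treat it as one possible approach (the "Stack Overflow" one he mentions), not the official mechanism.

```python
import os

from pyramid.config import Configurator


def expandvars_dict(settings):
    """Expand $VAR / ${VAR} references in every ini setting value."""
    return {key: os.path.expandvars(value)
            for key, value in settings.items()}


def main(global_config, **settings):
    # e.g. development.ini could contain: sqlalchemy.url = ${DATABASE_URL}
    settings = expandvars_dict(settings)
    config = Configurator(settings=settings)
    # ... add routes, includes, etc. ...
    return config.make_wsgi_app()
```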
Okay. Okay. So. Jordy, here's a quick answer. Basically, if I define this function. It'll just use a pythons. Oh, well, I didn't import it, but you'd have to import us. And it will actually expand the, the very, the bars that you. You can use that. And then you can use that. And be able to load those. Basically put that into a settings object. And then finally in your configurator. Say that the settings are settings. That's another way of doing it. Yeah. Yeah. And I know I got this from somewhere in stack overflow. I'm not sure what I'm doing. It's interesting. There's like three or four different ways that I've seen on there. And this is the one that stuck with me. All right. Okay. So we're going to go on to the next step in the tutorial. This time I'm going to close all the files so that I don't edit the same ones again. We'll make that mistake again. And then we'll go ahead and do our, our lovely debug toolbar. Now I already showed you how to do some debugging and pie charm, but if that's not good enough for you, you can always rely on the debug toolbar. And this is really cool. So let's go ahead and jump right in. So we'll create a new, create a new directory. Copying. Our previous step into the new step. We're going to add a new extra. For development purposes. I'm going to copy that very carefully. And define it extras. Requirement. I'll go into our setup. So let's go ahead and do that. Actually, I think I just realized something. Should be doing this. Let's flip you floppy. And of course, because I can't edit it, I don't have something in my editor. So let's open up my editor. And now I can actually size it. So much better. Yay. Isn't that better? Okay. So we'll do. And then add our extras require. By the way, on Mac OS, I have multiple. Clipboards. By using keyboard Maestro. It's one of my favorite little utilities. It allows me to have. Up to 10. Clipboards and paste boards. It's so awesome. And then I can just go back and forth. It makes, you know, going back and forth much easier. I just have to go back and forth once for multiple things. Now we're going to install our project. But this time making sure that we specify that we're going to install development requirements. If we. When we specify what requirements we're going to install. We're going to install the dev. We're going to install the dev. For the package. And then by virtue of dev, it'll say, okay, we're going to install dev requirements. All right. This is basic Python stuff. So let's go ahead and do that. We should see that. You low and behold, it has actually installed our tutorial. And some. Packages. And then we're going to change our configuration. In our development. I and I file. So let's get that. And here we're going to say, we're going to include the pyramid debug toolbar. In our configuration. And that's pretty much the only thing that has changed. And finally, let's run our application. Okay. And let's load that. And look at this. We have a shiny new debug toolbar. What happens when we open that up? Wow. We actually get to inspect a lot of really cool things. So if we had logging enabled, we'd have this logging tab active. We can check that out when we add our logging stuff. If we. If we have a log in SQL, can we tab so that you can actually look at the SQL statements that are being generated. If there's an error, I'll have the trace back. You can inspect request bars. Hey, this looks kind of familiar, doesn't it? Like when I was doing the debugging in pie charm, all this stuff is here. 
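As a recap of the "development extras" being edited in this step: the add-on goes into an extras_require stanza so that plain installs stay lean, and pip install -e ".[dev]" pulls it in for development. A rough sketch of how setup.py looks by now:

```python
from setuptools import setup

requires = [
    'pyramid',
    'waitress',
]

dev_requires = [
    'pyramid_debugtoolbar',
]

setup(
    name='tutorial',
    install_requires=requires,
    extras_require={
        'dev': dev_requires,
    },
)
```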
And there's a do tab. This was just added is some session stuff. And so if you have sessions now, you can actually inspect your session variables. Really cool stuff. And so that's a quick tour that there's additional things like if you wanted to see all of your. So the session allows you to basically run the P. Scripps, which are the. Paste or Scripps. And they allow you to inspect your views. What have we got here? Yeah. So we have a lot of things that we can do. And we can also do some. We can also do some. Some routes and predicates. Yeah. Lots, lots and lots of stuff. You can see this is nice to get a nice. Little list of all the routes that you've defined. And if we look in our. Tutorial in it. I and I, we can see that. Here's our route. Name and pattern. And we can see all the settings that are loaded. These are usually default settings. We don't explicitly set them, but. In our. I and I file, but we can override them in our. I and I. And if you've defined any tweens and. Two bars, one of those. And finally. This is really helpful if you don't know what packages are installed. You can see what's actually installed. And then we can see what's actually installed. And then we can see what's actually installed. Okay. One other thing too is that for every request that you send. So if I reload this. I will get another request showing up. And each time I do a request. So like, if I try to go to food. I get a 404 not found. And I can see that in the toolbar. Let's go to the next one. Okay. Let's go to our next step. Test. So. If it ain't tested. It's broke. If you don't got to. It's broke. If it ain't tested, it's broke. If you don't got tests, it's broke. But make sure that when you have your application, that you have it fully tested. And that's kind of like, it's our mantra in Pylon's project. That as well as if it isn't documented, it's broke. So documentation allows other people to understand what they need to do to use your project. And tests are a great way to verify that what is working stays working and what you, what should not, and it actually is a really good way to learn like the history of a project, like why you do things a certain way, because I've actually been able to do some, like deep dive into source code to understand, oh, that's why they do things a certain way and by reading tests. So yeah, yeah, tests are not the greatest, they're not fun to write, but at the same time, they really do, your future self will appreciate the work you put into them. All right, so let's start doing that by creating some tests here and we'll just do something really quick. And let's stop this. I'm gonna close the current file. So I don't get into that again. And we'll copy the previous step into a new step. And in my setup.py, I'm gonna add a new requirement in my development area. Now, notice how I'm also like adding requirements as my project grows. This is something that you will do a lot of. Since it's in the dev stanza, I'm gonna have to run pip install e with a dev in there. So I'll do that. And this time it will install the tutorial with pytest. Ooh, look, there it is, latest. And now I'm gonna write a test and put that in test.py, in the testing tutorial. Now, of course, it's not a good practice. I'm gonna say that you shouldn't do this, but I'm doing it anyway. It's not a good idea to put your tests inside of your package these days. What the best practice is, is to put a test directory as a sibling of the source directory. And even this is not the best practice. 
What we want to be doing in the future is, from your route, you'd have a SRC directory for source and then nested within that would be your project name. And then a sibling to source would be your docs and a test directory. This way, when you actually distribute your package, you do not distribute docs or tests, you only distribute your source code in your distribution. This makes it lighter for people to use. And if people need to actually do development, they can go to the repository to clone it and do it the right way. There's a handout, Alex. Yes, I just want to underline what you just said that you should have on the same level, the source directory, the test directory and the document directory. I originally came from Avionics software development. And if you ship any debug or testing code with your production code that could harm your product, very, very hard. So you should always separate it and the best way is to do it on also the file level system. So have it in separate directories, never have tests in your actually runtime source directory. Yeah. Yep. And this is exactly what pyramid has done in a project that I'm maintaining we deform that's on our bucket list. We need to reorganize our structure. But the reason we haven't done it is that we're currently supporting two versions. And until we are ready to actually cut the release for the, on the master branch, we're dragging our heels and actually changing it. But I think at some point, we're just going to have to rip the bandaid off and do it for both versions. And then it'll be able to stay in sync. That's what was to spin the hum. Yeah. Anyway, this is a way to structure it where you have at the root of your project, you have your docs, you have your source slash your actual project and then test directory. So pyramid is a good example of that. All right. So now that we actually, did I set up the test? Yeah, I put tests in there. Let's actually run these tests and see if they pass fingers crossed. Woo-hoo. So we got when tests passed, there is a warning here and this says, the module has been deprecated. And this is also a really cool thing too about writing tests is that you get deprecation warnings. Here, this warned us. In fact, this warned us about an upcoming issue with how Python moved around a lot of stuff underneath us. So we fix this in the master branch and when pyramid 2.0 file is released, this warning will go away. So another good reason to have tests. So when you're writing tests too, I wanna, we, a lot of our testing we do is class-based. And we also recently, a lot of our Pylons projects have switched from using those tests, which is no longer maintained actively to PyTest. I just switched D form from using those tests to PyTest and it was super easy. All I had to do is say, we're gonna use PyTest as a test runner. And all the tests still pass. And I was super happy about that. But a lot of the, when you write tests, you will always have some kind of setup thing that'll set up your application or set up the, it executes every time you run your test suite or just a single test. And then at the end of the test, testing run, it'll tear down whatever you have specified it to tear down. So that's a good thing. And your actual test is just something that will run the test. And in our tests, we're just saying, okay, we're going to do a quick import. We'll send the dummy request. We'll get the response from that request. And then we're going to make an assertion that the responses status code is 200. Okay. 
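The unit test just described — a setUp/tearDown pair around pyramid.testing, a DummyRequest, and an assertion on the status code — looks essentially like this:

```python
import unittest

from pyramid import testing


class TutorialViewTests(unittest.TestCase):
    def setUp(self):
        self.config = testing.setUp()

    def tearDown(self):
        testing.tearDown()

    def test_hello_world(self):
        from tutorial import hello_world

        request = testing.DummyRequest()
        response = hello_world(request)
        self.assertEqual(response.status_code, 200)
```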
And if we were to actually run our project and send that same request, we should see that the response is in fact a 200 response. Okay. Questions on that? All right. Keep moving on. So we're still like, we're still doing a lot of Python basic stuff, but let's get into some more stuff with Pyramid. With Pyramid now with web tests. This is where you're really going to start using testing your actual Pyramid app when you're sending a lot more complicated stuff and make sure that everything performs as you want it to. So in the previous step, we were talking about unit tests. Those are the things that just test your code. But when we want to send an actual request and inspect it a little bit more deeply, this will actually simulate a full HTTP request against your application. And make sure that what comes back in response is what you expect. So let's go ahead and dive into that one. Let's copy our previous step. And let's close the files. Why don't do that again? Now we're into functional testing. And we're going to add web tests to our setup py in development. It will install and extend our functional testing test suite. And this is going to be fun. It's going to be fun. To do. Make a few changes, boom. Scroll this down a bit. All right, so. So. Let's go ahead and now run those tests. I did a bad copy paste again. That's Steve. There we go. Try this one's more time. There we go. Now we see that we had two tests run and they passed successfully. And we got the same warning again. Yay. So. This basically ensures that our app is fully tested. It has our functional test as we did before. It sends a request to the root and make sure that when we get the response back that we actually had some text in there that matches up what we expect and has been defined in Hello World. And that is exactly what we're getting back. So I really encourage people to learn about functional tests and why they're so important. Yeah, super important stuff to know. Questions on functional tests. All right, let's keep moving. All right, well, let's go on to now some fun stuff with Pyramid. We're getting into some moving out of Python basic stuff into Pyramid basic stuff. So we're gonna go into more details about how we handle to use some Pyramid. And so let's get started on that. We'll copy our directory. We'll install. And now we're gonna go ahead and add some more and now we're gonna change our init.py file. So we're gonna go to use, in your init.py. And this time it's gonna be a lot shorter because we're removing this function. We're basically gonna be moving this function into a view. Okay. Now let's add a module inside of tutorial. So we create, first we have to create a directory or no, we'll use tutorial to use it to new file. Python views.py. And we'll add this. And as you can see, here's, here what we're doing is that we're configuring a route. Actually, I'm sorry, let's start back here. First we have our view. We have a whole, and it's callable. And a hello view. These are two different route names and we'll get to that in a second. Oh yeah. I forgot to, I skipped over this part, I'm sorry about that. But in this step, we actually added, we have two routes now. One is home and one is slash howdy. So home will have the slash as the pattern to match against when the request comes in. This route is named hello, and it will return the callable defined by that route when the request is slash howdy. 
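The two routes and the decorated views being described here end up looking roughly like this: the routes (name plus URL pattern) live in __init__.py, and config.scan() picks up the @view_config decorators in views.py.

```python
# tutorial/__init__.py
from pyramid.config import Configurator


def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.add_route('home', '/')
    config.add_route('hello', '/howdy')
    config.scan('.views')
    return config.make_wsgi_app()


# tutorial/views.py
from pyramid.response import Response
from pyramid.view import view_config


@view_config(route_name='home')
def home(request):
    return Response('<body>Visit <a href="/howdy">hello</a></body>')


@view_config(route_name='hello')
def hello(request):
    return Response('<body>Go back <a href="/">home</a></body>')
```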
In our views, the home route, this gets by virtue of when it uses a scan, when Pyramid does a config scan, we have actually decorated this function with the route name home. So when the route name home is called, right, so when the route name home is called by virtue of calling slash by itself, the route home will be referred and by virtue of this configuration, the home function will be called. Similarly, when slash howdy is requested, the hello route will be in mode and by virtue of configuring this function with the route name of hello, the hello function will be, will run. So this is like the very basic stuff about how Pyramid can run, how it runs different views based on the URL that's requested. This is called URL dispatch and it's one way of configuring Pyramid. There's also perversal, which is very familiar to Plone and Zope folks. We won't go into that a whole lot, but anyway, this is like the URL dispatch is the most basic part. Okay, so let's continue. We'll update our tests to accommodate these two views. And then finally run the tests. We have four tests passing. Yay, that's good stuff. Let's run our application. And okay, it's running. We're gonna visit it in two places. First the slash, which is the home route and then the hello howdy route. So home, get visit, let's go visit hello. Let's go back to home. Yay, we're visiting our routes. Isn't that cool? All right, so that's it for basic view and route configuration and how we handled views as we come in. Questions? Okay, all right, let's go on to our next step. And using templates. We're not gonna go into all the template languages possible. We're only gonna work on one. And we're gonna work with chameleon because that seems to be useful for people who are used to using Plone. We could use Jinja or Maco. Those are add-ons that are available, but we're gonna be using the pyramid chameleon add-on in our project. So let's get to working on that. We'll copy that. I'm gonna close all these so I don't go into that again. And go into templating. Let's add chameleon to our setup.py underneath it's a requirement. Now this is something, make sure that you put it in the right place. Pyramid chameleon is required by your app in order to use the templating language. So it won't go in dev. Install that. We'll install pyramid chameleon as well as its dependencies. And it did, in fact, do pyramid chameleon as well as chameleon. Okay, see the hand? Yeah. Is it really necessary to have a templating language in the system? Or can you turn it off? Because if you're using, for example, pyramid with open API, just as a REST back end, you don't have any templating serving to the front end. That's correct. So if you were going to be using pyramid as just an API, you could have it just return a JSON formatted response. But yeah, so you don't have to use any templating language, but yeah. Let's make sure that we configure, I'm going to go back over here and configure that. Let's get all that copy. Templating in it. We're going to add pyramid chameleon and include that in our configuration. We can remove that HTML sloppy stuff from our views.py. It's so much nicer. In fact, as you can see, this is actually a good, going back to your question, Alexander, as you can see, this is almost, that's JSONic, isn't it? I mean, it's a pyramid, it's a Python dict, but we could also return a JSON object if we wanted to. But what we're going to be returning on this request is a Python dict, which then gets filled in by the template. So we'll actually do that. 
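With pyramid_chameleon included in the configuration, the views stop hand-building HTML and just return a dict for the named renderer to fill in; roughly:

```python
# in tutorial/__init__.py, inside main():
#     config.include('pyramid_chameleon')

# tutorial/views.py
from pyramid.view import view_config


@view_config(route_name='home', renderer='home.pt')
def home(request):
    # The dict's keys become variables available inside home.pt
    return {'name': 'Home View'}


@view_config(route_name='hello', renderer='home.pt')
def hello(request):
    return {'name': 'Hello View'}
```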
Our template here, we'll put that in this new file. New chameleon. Yeah, go ahead. It could also work if you just transferred JSON to the template, because the difference is that some of the serialization done implicit by pyramid is not the same. For example, if you're working with date times or date objects, there's really a difference, but for all the other stuff, it should work directly. Exactly. We will, there is a step in this tutorial about how to change the response object. We will get to that one. So hold on tight. For now, what we're doing is just returning Python objects, and then those are consumed by the tutorial by the template. Yeah, but you're right. We will be doing that. Okay. So we're going to do that. We also want to, this is nice too, when you're doing development, sometimes you just make a change to your template file. And you want to be able to just reload the template instead of the entire app. So I will actually demonstrate that shortly. And let's run them. Make sure they still work. I messed something up. What did I do? You are in views by Yeah. I have an important statement. It's broken. Thank you. And pie chart helped me find that as well. Peacock. Yeah. Right. Stop air. Yeah. Just ignore that. Let's run that again. There we go. Now we got our passing tests. Now we can run our app. Okay. And let's visit. There we go. Okay. And let's visit. There's our home view. And there's our howdy view. And you can see that each one updates according to the name that was passed in. Okay. Now let's actually change this. I'll save the template. I hope it works. And lo and behold, it did actually work. Note that the app did not have to reload. I did not have to stop and start the app. I just was able to, in the background, there's something. It's called Hover that is monitoring for changes to templates and other file source files. And when you have, when you have in your configuration to reload templates like that, it'll reload the templates every time you make a change to them. Okay. Any questions on this step on template team? Let's keep rocking on. Let's go on to our next step. So how are we getting to actually, this is more of a Python type of thing, but our views are going to get really sloppy if we just have a single view. So let's just start with the use files. So, you know, we're learning how to be better developers by starting to use classes. So let's start doing that. Let's stop our app. Close all the files. And copy the previous step to the step. Install it. And then we'll add a class view. So, convert this from just a bunch of functions. I didn't select it all again. There we go. There we go. So, you'll notice that. So, why are, why would we do this kind of classy type of thing? Why would we use classes? Well, the cool thing about using classes is that in your init method, when you say you're going to, what this does is that it establishes that this object is going to be available for all of your methods. Right. So this is just a basic Python thing. Let's update our tests. Run them. They should all pass and run our app. And we see that, yep, our views are still working. That's great. So, this is just, so in this step, we're just basically looking at how to better organize our views. And this becomes super helpful as well, especially because, you know, if you wanted to navigate around into certain things, oops, within your views, it's easy to do this in, you know, finding out where you want to go within your project. 
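The class-based version stores the request on self in __init__ so every view method can reach it; approximately:

```python
from pyramid.view import view_config


class TutorialViews:
    def __init__(self, request):
        # Stored once here, available to all view methods below.
        self.request = request

    @view_config(route_name='home', renderer='home.pt')
    def home(self):
        return {'name': 'Home View'}

    @view_config(route_name='hello', renderer='home.pt')
    def hello(self):
        return {'name': 'Hello View'}
```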
Python also has this really nice thing to being able to say, where do you want to go? And it'll show you where you want to go. Okay. Any questions on class views versus just playing a whole bunch of functions and methods? Okay. Yeah. That's a pretty basic one. All right. Let's go on to our next step for handling web requests and responses. So, here, we, I don't want to get into that one. We will go into, I'll get into some more details in a minute. So, let's just start by just copying the previous step. As always, let's update our unit.py. Okay. So, we're going to update our tests accordingly. And run our tests. We have a new test here. We're going to run our tests. And we're going to run our tests for passing and finally load this. So, let's see what happens here. The first high home view, how's it going? That's pretty cool. Let's go to. Actually redirect me. It did not redirect me. I think I messed up somewhere. Hold on. Where did I mess up? So, let's check. The grass just wants to develop. Okay. I'm in the right place. Tutorial in it. I'm playing yet. That looks good. Views. Oh, there's problems. Now that's fine. Actually. It should be found location. So, it should have redirected me to playing, but it did not. I'm going to run it again. What? I remove a route. I think I removed a route. No, it's there. I'm going to run it again. So, So. It scans the views. Oh, dear. I'm going to run it again. I'm going to run it again. I'm going to run it again. We've come to the hour. Let's take a five minute break. And I'm going to figure out what the heck I did wrong here. And we will resume in five minutes. So at eight, oh five. Let's come back. Okay. Okay. Okay. Okay. Okay. Okay. So, So, So, yeah, it's very late already. It's midnight. It's about one o'clock. I got up late today. I hope you enjoy the training and everything. I hope you enjoy the training. I hope you enjoy the training. I hope it's good for you. Do you enjoy the training? Do you understand everything? And think you're also on the European time. I'm enjoying this program. But, So, And the other one who's muted and didn't show their video all the time. Are you good following Steve with the training and any questions. So, from there. Chris. Everything is good for me. I understand what he's doing. I can follow along. What I started doing was just watching him. Rather than trying to do the examples because flipping screens back and forth. Doesn't allow me to keep up. I'm not going to ask if there's any questions and I'm typing or cutting and pasting something. Usually two or three steps behind. So I just decided I just needed to watch him. And since it's being recorded, I can do it in slow motion or stop and start later. Oh, it's about you dirty. You're muted. I'm fine. I'm just my example is working. I'm not sure why but it's working. You will figure it out. The tutorial, the online tutorial is very, very good. Yeah, perhaps it should be updated to by test using fixed source. It's like a bit. I'm not sure the actual purpose of the project. I'm just going to use it right now to do. The unit test. Yeah. So we had one of our, our other. Attending from Japan has to get up early. So they just signed off holding. But he'll be watching the recording. As far as my mistake. It's not the same as the previous one. So I was able to re-load the unit. The, the route of the current stuff. So once I installed my project. I was in it able to reload it. And then when I visit the URL. Oops. I guess I should start it. Ha. 
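While the app restarts, here is roughly what this request-and-response step wires up: the 'home' view now answers with a redirect and a new 'plain' view reads a GET parameter. Route and parameter names follow the tutorial; this is a sketch, not the exact step code.

    # tutorial/views.py
    from pyramid.httpexceptions import HTTPFound
    from pyramid.response import Response
    from pyramid.view import view_config

    @view_config(route_name='home')
    def home(request):
        # HTTPFound is a 302 redirect: send the browser to /plain
        return HTTPFound(location='/plain')

    @view_config(route_name='plain')
    def plain(request):
        # request.params holds the GET (and POST) parameters
        name = request.params.get('name', 'No Name Provided')
        body = 'URL %s with name: %s' % (request.url, name)
        return Response(content_type='text/plain', body=body)

    # tutorial/__init__.py gains the matching route:
    #   config.add_route('plain', '/plain')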
It does redirect the route to plane. And then it says, Hey, there's no name provided. a name in the request. And we can do that by saying, here is a get parameter with its name called name and its value as Alice. When we go there, we say, oh, here's a URL and it says with name Alice. Yay! And that is exactly what we defined in the body and what we should return. So basically this step is saying, here is the URL and we replace this string with the pyramid object request and its property of URL. And the second string is replaced with name, which we get from the request params object by getting the key name and saying no name provided or what is actually provided if it is provided. That's a lot to absorb, but this is where you get to like, wow, I can do a lot of things with pyramid. Okay, so basically there's a couple of things that I wanted to emphasize here. First is that pyramid has HTTP exceptions, that's a tongue twister, that you can return or raise based upon the request. In this route, when we go to the home route that is just a forward slash, we'll execute this function and it returns an HTTP found or 301 and says redirect to slash plain, okay? And when we did that, let's just open that up in the web browser. We see that it actually did redirect to it. Cool. Let's see, anything else to kind of emphasize here? I think another thing to be aware of is that the request object has a lot of properties and you'll be diving into that a lot when you start using pyramid in depth. So one of those properties are all the get parameters. You can get all the post parameters. You can, I mean, the request object has a lot of stuff attached to it. So this is gonna be your primary thing to look at. And just as some additional information when you want to really dive into what's in the request object, the documentation shows exactly, these are just a few of the things that are attached to the request object. So, we already showed like the get what we can get from that, or from, excuse me, from Params, which is a combination of both the get and post dicks and multi-dicks. And for more of what is available, we can go to the pyramid API documentation. So pyramid's documentation just to kind of back up a little bit. Pyramid has its documentation in two parts. One is the narrative documentation, which goes into detail. It kind of tells a story of how to use pyramid. And down below is the API. So this is the technical nerdy stuff where it shows all the pyramid modules and where you can get more information. So like the pyramid request module, I'll set that off to site, I'll open that in a second. But just to continue here, we also have documentation of the scripts that we use. Like pserve, we've been using that throughout our tutorial so far. And there's other scripts that are available as well, like how to find out what all your views are. If you don't want to run the toolbar, but you want to introspect your application, this is how it's done. And of course all the change history and other probably good. But anyway, back to the request. That's the class event, all of these goodies. Session is also gonna be useful for those who want to know about authentication and authorization. We have in Pyramid 2.0, we're gonna be, we have a change to this. These from, these are still useful. We haven't removed them, but they will become deprecated at some point. But for now, there's a new object called identity. And that will become the way of working with sessions and authentication objects. 
So I'm not gonna cover that in this tutorial because it needs to be released before I can actually go into it, but it's a common. So anyway, that is, I kind of did a kind of a sidebar off to Pyramid documentation. And also dove into like how to get more information about a request object and how to handle the request to respond cycle. Lots of that. So I'm gonna take a breath here and see if you have any questions on this part. Okay, seeing this, there's no questions. Oh yeah, someone had a question. Sorry. I just wanted to say that last, last problems can resolve resource before the break. It's your 404, not fun. Right. So with that one, my mistake was, I forgot to install the project. So, It's 80, okay. Yeah. What I, and then I forgot to do that. So when I started to serve my project, it was serving the previous project and not the one that I wanted to, not the current one. To fix it, I installed the current project and then I ran the current project. Okay, any other questions? All right, we're gonna continue on to the next module. All right, yes. So we've been using URL dispatch. You just didn't know it, but we're gonna talk a little bit more about this. So basically, you know, what we've been seeing in the past is in our setup, we're gonna use the URL dispatch. So we're gonna use the URL dispatch. So basically, what we've been seeing in the past is, in our setup file, we've previously, in our setup file, we've been, excuse me, excuse me, not in our setup file, but our setup code. We were defining routes and dispatching the URLs to these routes. Which in turn would call a view or a function, a callable, according to each one of those routes. Okay, so here we're gonna do a little bit more because, you know, hey, this is great in everything, but, you know, I don't wanna have to define like slash view name, slash, you know, I don't wanna have to define every single route name, excuse me, every single route, when I want to pass in some parameters in the URL. So a lot of times you'll see like blog posts where in the URL, they may have something like slash year, month, and day. So would it be great to be able to match this pattern and serve the content according to that pattern? Well, Pyramid lets you do that. So let's go into this step. All right, so let's stop. And we'll copy, close all these. And go into routing. We're going to add a route, but with a replacement pattern. We're gonna get rid of those two routes and say we're gonna take the first after we match the URL pattern of slash howdy slash, we'll take the first segment and say that its name is first, then we'll take the second segment of that URL, call it last, and we'll do some work with that, okay? So let's change our view so that it can handle this. Now here in our view, remember we had first and last as the, let's split this like you can see. Remember that we're saying first, we're going to, what Pyramid will do is actually just turn this into a variable that when it is called in our, when it is called in the request, the request will say, okay, we got this request thing, let's based on the request match, get this value for first and assign it to the variable first and get this thing, the segment last and assign it to the variable last and then actually return that in the response, which then eventually gets into our template. So let's modify our template to handle that. So first and last, okay. We're going to update our test as we always do and run our tests. Did I do setup py? 
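The routing change in this step amounts to one route with replacement patterns and a view that reads the matched segments; a sketch following the names used above:

    # tutorial/__init__.py -- {first} and {last} capture two URL segments
    #   config.add_route('hello', '/howdy/{first}/{last}')

    # tutorial/views.py -- the captured segments show up in request.matchdict
    from pyramid.view import view_config

    @view_config(route_name='hello', renderer='home.pt')
    def hello(request):
        first = request.matchdict['first']
        last = request.matchdict['last']
        return {'first': first, 'last': last}

    # tutorial/home.pt can then greet with:  Hi ${first} ${last}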
I don't think I need, no I did, okay, good, pit install, I want to make sure I did that. I don't think I did actually, I did not do pit install, don't do that Steve, okay, put install. And now we have two tests, yep, that looks good. And now we're going to serve the application and we're going to go visit the URL. And it says, hey, look, here's a URL where first name is Amy, last name is Smith and we can change that, let's see what happens. Oh, even this, is it case sensitive? I don't know, let's see. Oh, that's cool. So yeah, this is like, I've got power and this is where Pyramid starts to shine. All right, do we have questions here? Okay, sorry, I have a small question. I just want to try to understand why we are not attaching the road definition to the view config decorator. Yeah, Pyramid doesn't do that because predicates are useful. One reason is that you want to be able to assign multiple views to a single route. So when we get into predicates, you'll see that will actually come to light and you'll have that aha moment. So we'll get to it hopefully. Yeah, cool, let's just, yeah, can't find any other questions. Okay, yeah, weird stuff you can do with URL dispensary and we won't come there. Let's go on to the next step and now we already did chameleon. We can also do a Gingya 2. So let's try playing with that. So let's go back to our previous step and copy it. We'll add Pyramid Gingya 2. Oops, what am I doing? I don't know. All right, let's go back to Gingya 2 and go to set up Py and add a requirement. There we go, that's better. Now let's install that. It'll install Pyramid Gingya 2. Let's change our include from where are you? Can you apply? So this one, we changed it from Pyramid chameleon, Pyramid Gingya 2. And from there, we're gonna change our renderer from innerguse.py. And this time we're using actually class-based views. So change the renderer from home.pt to home.gingya2. And we'll create a new Gingya 2 template in here. New Gingya 2. Home.gingya2. Okay, and we'll run the tests. And I messed up again. I didn't copy something correctly. Steve, what is your deal today? That import, hello from tutorial. Use, let's try that again from Pyramid View import. That was right. I probably didn't copy that one line. That one's almost tricky for me. I don't know why I can't get that right. No, that was right. Double check that I actually installed it. Yeah. I think I got the same error locally. Okay, so it's not just me. I don't know, but I think the tests that they didn't get updated. I get another test error. Really there is a testing pie missing. We should update, but there's no one in the tutorial. Correct. I see that we did not update our test file. Okay, so let me do a quick issue here. Thank you for that. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Issues. Okay, so let's just go ahead and serve the app. That should be fine. Yeah, it just, it runs. I wonder if we can fix that, and that might happen. Going to be. Simply not found. What did I do wrong. How the. And templates fun. Oh, thank you. How did that happen? All right, let's refactor that. That was interesting. I guess it just. Let me try one more time. I wonder if the tests will actually work this time. too passed, yeah, which is better than for failing. There we go, thank you. Thank you for your pair of eyes there. I think the best testing this first and last entry we put into the URL. 
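Leaving the broken test aside, the engine swap in this step comes down to three small changes; a sketch, with the template markup illustrative:

    # setup.py -- requires = [..., 'pyramid_jinja2']

    # tutorial/__init__.py -- include the other bindings
    #   config.include('pyramid_jinja2')     # instead of pyramid_chameleon

    # tutorial/views.py -- point the renderer at a .jinja2 file
    from pyramid.view import view_config

    class TutorialViews:
        def __init__(self, request):
            self.request = request

        @view_config(route_name='home', renderer='home.jinja2')
        def home(self):
            return {'name': 'Home View'}

        @view_config(route_name='hello', renderer='home.jinja2')
        def hello(self):
            return {'name': 'Hello View'}

    # tutorial/home.jinja2 -- Jinja2 interpolates with {{ }} instead of ${ }
    #   <h1>Hi {{ name }}</h1>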
Maybe we don't have them anymore. Correct, yeah, they're wrong. They need to be updated. Cool, well thanks for catching those mistakes, but you're pretty good at keeping these up to date, but every once in a while we just don't catch a few things. So we'll get that fixed in the next release. So this just demonstrated that it's super easy to swap out, super easy to swap out templating languages in Pyramid. And that's all we check on that one. Any questions? I think it all crafts that one pretty quickly because you caught two mistakes. So let's just go on to the next one. All right, static use. Yay, we finally get to, you know, if we're gonna be serving some CSS, JavaScript or images or other static files, this is how you do it in Pyramid. So let's go ahead and start doing that. Let's close all this up. And let's copy the step view classes. So we're not doing that, we're not copying the previous step, we're just copying a previous step. Let's install. That's good. And let's add a view, a static view. So we'll do that in R, and it.py. By configuring, so here we're going to configure a static view. Its name is static and its path consists of the package name, which is tutorial, followed by, after the package name, you actually delimited from the path with a colon. So I'm gonna be adding a static directory within the tutorial package. Okay, so before I do that, I'm gonna edit my two, I'm gonna change my template, my Chameleon template. And here, what this is gonna be doing is saying, by taking the Pyramid request and getting its static URL property, I will then generate the href, the full path based on that. So it'll be, in the tutorial, it'll take this argument and generate the correct href path. Does that make sense? We'll see it when I view the source code. So basically it's saying, we'll serve this file in the HTML and the web browser will then make a sub request to load that static file. So let's create a static file that will be served and we'll put that in tutorial, new directory, static, and new file CSS, and name it app.tsvss. And did it, nope, okay, good. Great. And let's add a functional test in our test file. Where would that go? Line 46, just to make sure that when we make a request to static app, that we'll get, that when we make a request to, excuse me, that first that it'll return 200 response status. So this basically says that the file exists and we'll also assert that in the response, that the body is in it. Oh yeah, that body is within the response. So that's the test that it's actually doing. All right, so let's run that test. Five fast, that's good. Let's serve our app and let's go visit it. And so it's a little too quick. There we go. Ooh, it is in ugly old times. We actually have it styled with the font that we asked for, sans serif. And if we view the source code, you can see that the in our template, when we say, this gets, this value is, gets substituted. So we're saying, for find within the tutorial package, find this path, this file path, and then regenerate the static URL based on that argument. And the static URL, or the, excuse me, the URL is, you know, the protocol followed by the host name, the port, and the full file path. Okay, good. Any questions on that? So there's another property too. So perhaps you don't want to do a full URL, but instead you would want to do a static URL. So instead of static URL, we're going to do a static path. And I just want to show you how that actually works. This is useful to know. 
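Before the static_path variant, the static-asset setup just shown boils down to the sketch below; the functional test assumes webtest is available as a test dependency, which is how the tutorial's functional tests are usually run:

    # tutorial/__init__.py -- serve everything in tutorial/static/ at /static
    #   config.add_static_view(name='static', path='tutorial:static')

    # tutorial/home.pt -- let Pyramid build the asset URL
    #   <link rel="stylesheet"
    #         href="${request.static_url('tutorial:static/app.css')}"/>

    # tutorial/static/app.css
    #   body { margin: 2em; font-family: sans-serif; }

    # tutorial/tests.py -- the functional test described above, sketched
    import unittest
    from webtest import TestApp
    from tutorial import main

    class FunctionalTests(unittest.TestCase):
        def setUp(self):
            self.testapp = TestApp(main({}))

        def test_css(self):
            res = self.testapp.get('/static/app.css', status=200)
            self.assertIn(b'body', res.body)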
So in our template, instead of static URL, we return static path. And if we reload this, we should see a change that just has a root relative. Okay, so that's nice to know, because if you are serving your assets on over HTTP, you shouldn't be doing that. You should be serving everything over HTTPS, but this makes it easier so that, you know, you want to request a certain file relative to your web root. This is how you do it. And then you don't have to put, like, all this garbage in front of it. Can you set up a CDN for the static path? Can we use the what at the end? The CDN. Like all my static assets are just on a CDN. Can I set up the static path variable? Oh, yeah, absolutely. It's on configuration. Okay, cool. Yeah, yeah. A lot of folks, there's actually a lot more underneath static path. These static assets. So in each one of these steps, these are like just giving you a taste, but we also have things like cash busting. Or are you? Yeah, cash busting. So there's different strategies for doing that. One is like appending. Like here's an example, appending some kind of MD5 hash or something like that. And so every time you restart the server, then this could be regenerated. And if you don't like the default cash buster, you can create your own. So, lots of different ways of doing things. That's what's so great about Pyramid. You don't like something, make your own. It's fine, sorry. But the problem with the sketch of the file implementation is that you are serving from different instances. You will be serving different times. Exactly, yeah. Cool. Though you have a lot of flexibility. Like here's, yeah, so many different ways. That's fine. Yeah. Okay, other questions? Okay, let's go on to the next step. All right, so you don't need to use template languages. You can use JSON as a response for Denver. And this is like, oh boy, oh boy, I'm getting too excited. All right, let's play with this a little bit. So, we don't need to do that. And what's so awesome about Pyramid is that this is like out of the box, for Denver. You don't have to do a whole heck of a lot to make it work. So, let's dive in. So, let's go first. Let's copy the view classes. That's kind of our base step right now. And we'll install this step. Good. And we'll add a new route. So that we serve a JSON response. Ooh, it's as exciting. Okay, so we have this new route. Hello, JSON. And it's gonna return a.json response. Ooh, exciting. Let's update our views. And so in our class views, we still have, you know, the good old home route. And in our hello view, we had, you know, plain old, here's the route name for hello. Here's the function, hello, still the same thing. But we stacked another route configuration. And so if the route name matches hello JSON, then we'll return a JSON response. So the renderer will be JSON instead of, you know, a template file. Pretty cool. So one of the things that people don't realize is that you can have multiple decorators. You can stack these as tall as you want. And, you know, so your content, what's being returned is the same thing, but it's just being rendered differently. All right, so that's really, another cool thing about pyramid stuff. All right, so let's update our tests and make sure that they actually work. And run our tests. They all pass. Let's run our app and the moment of truth. Okay, so first let's go there and see what gets response back. Whoa, so when we request howdy.json, we get the actual JSON response. And if you don't believe me, you can reload that. 
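In code terms, the JSON step adds one route and one extra stacked decorator; a sketch, with the route pattern assumed to be /howdy.json as in the demo:

    # tutorial/__init__.py
    #   config.add_route('hello_json', '/howdy.json')

    # tutorial/views.py -- view_config decorators stack: same view, two renderings
    from pyramid.view import view_config

    class TutorialViews:
        def __init__(self, request):
            self.request = request

        @view_config(route_name='hello', renderer='home.pt')
        @view_config(route_name='hello_json', renderer='json')
        def hello(self):
            # Fed to the template or serialized to JSON, depending on
            # which route (and therefore which renderer) matched.
            return {'name': 'Hello View'}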
It's actually in the response. You know, here's the raw response. Oh, isn't that cool? So let's try. What happens if we add a date time to the date we are returning? We haven't configured it to respond to that, but for example, if I were just to go to howdy, instead of howdy.json, I would get that. Does that make sense? And can both have the same route? Or, I don't know, I don't know what to do with that. I would like to go for HTML. I would want HTML. If I ask for JSON, I want JSON. Right, so this function is decorated with two routes. They have different names. Well, let's see what happens if I were to say, if I were to decorate it with the same route. What I would have to do to make this different is to add a predicate. So let's see what happens when I do this. This will actually, yeah, it errors out. It says you can't do that. There's a conflicting configuration action here. Basically, instead of, it was saying, you have to have a unique route name for this. It'll reload and now it'll work. But if I do something to make it, I can refine my view code so that based on the predicate, it'll, in other words, I haven't gotten into predicates yet. But we'll get there. But basically, if I add something that a predicate is something like a thing that makes the configuration unique for that request, whatever that thing is, will then determine what code you run within the specific view. Predicate is like one of those weird words that I had no idea what it meant in the English or in programming, but it becomes a parent in pyramid. So hopefully we'll get to that step. Sorry, it doesn't really answer your question. Does. No more questions. Cool. Okay, are there any other questions? Okay, this is an important point too about JSON. It's that it doesn't have, JSON does not know what a date time object is. So it only knows what a string is in that regard. So you would have to take a string in and parse it to a date time object. So, you know, just, you know, just, you know, you can just, you know, you can just, you know, you can just, you know, you can just, you know, you can just use a date time object. So, you know, just consider that when you're working with JSON objects, you're not always going to get a date. You cannot get a date out of a JSON object. You only can take a string and convert it to a date time object. So that's one example. And I think, like, numbers, decimal, I think is another thing that it does not, that JSON does not have. It just has an idea of, I think it only has whole integers. Anyway, I have to always go and look it up. Because I can never remember all these things in my brain. So let's move on to the next step. All right. So let's do a little more playing around with few classes. And I think we might actually touch on this in this step. I think we actually start to explore this, Jordy. So this might be it. Yeah. Yeah. Is that, is that like dispatching one route or UL to multiple views based on the request data that might be what you're after? Yeah. So I think we finally get it. All right. So let's dive right in. Solve this. And let's have that. And we'll install. I think we're going to have to do this. Let's configure our init. I think it should be careful. And this is more of a class. In it. We are changing. Okay. We're now changing howdy to be first and last. So based on some of these arguments, we're going to do some stuff with our views. And we're going to have to do some things with our views. It's going to be used up. Okay. We will, we'll get there in a second. 
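Backing up to the datetime question for a moment: Pyramid's JSON renderer lets you register adapters for types the standard json module cannot serialize. This is not part of the tutorial step, just a sketch of one way to handle it:

    # Inside main() in tutorial/__init__.py, after the Configurator is created:
    import datetime
    from pyramid.renderers import JSON

    json_renderer = JSON()
    json_renderer.add_adapter(
        datetime.datetime,
        lambda obj, request: obj.isoformat())   # emit ISO 8601 strings
    config.add_renderer('json', json_renderer)  # replace the stock 'json' renderer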
I'll explain this. And then a short bit. Let's modify our template. Okay. Oops. Did I screw up tutorial home? No, that's fine. And now I need to add a new. template. Okay. I missed. New. Chameleon template. And this one is going to be hello. PT. Okay. You have an edit view. What. Okay. And delete view. Okay. Let's update our tests. And read them. It should be just too. I think I forgot. No, I didn't stall. Just make sure I didn't. Yeah. Okay. Okay. And finally run the application. And visit. So let's see what happens when we go to how the Jane Doe. We now have. A welcome. So this is the. The first one. So in our app, we have views to find. The first one. Let's see. We are going to have. Okay. We have our request. In our class. We are. We have our request. And we have our request. And it generates the full name based on the first and last. Segments of the URL. As we had to find first and last. And then we. Concatinate them. And return it so that we have first space last name. And. When somebody visits. The first name. That property will automatically. Be generated. And be available in the. Where are you? There it is. As the views full name. Is that cool? So, you know, hey, here's your. View. And you can get that. So. Back again. This property is now attached to. The tutorial view class and any view that uses it. So that's nice to be able to just. Toss it into a template. Other views that we have here. Are. A post. And then we have a. Depending on whether we're editing a form. Or. Deleting a value from a form. So. Let's see. And here's exactly kind of what we're talking about. We have. This. We have a. Let me back up here. So in our. In our class view. We can actually have a default route for. For all the views that we define. Okay. So. And we do that by by using a decorator. This decorator configures all of the functions. With the route name. And then we have a. In the route name. That's not have. Route name in its configuration. We'll get the route name hello. So this. This. And this. They don't have. Route name configured. But by virtue of its parent. Class. And then we have a default. Route name. Wow. That's pretty cool. We have. We don't have to explicitly. Define the route name. But if we ever want to override it. For example, a default. Route name of home. We can do that. So. If I were to. Just go to home. I would get the home view. And then. Okay. And when I go to the form, then. I get the hello. Route name. Mind blown. I'll pause here for questions. Cause this is a, this is a pretty. Deep. Okay. Well, let's try a couple of things. So I'm going to now. Just a question. Sure. It's like a predicate. Can you repeat that? On the latest on the delayed. On the delayed. Method. The request. It's a predicate. Yes. You got it. This is a predicate. So. The request method. Post. And that's something here. So the default. I should mention this. The default request method is always get. And so it doesn't, if I were to change this to be actually. Request method get, then it would be explicit. Right. So that might be something you want to do just to. You know. I'm not going to go into that. Keep it clear in your brain. But if you already know that it's there, you don't absolutely need it. So if I were to go back to. How do you Jane Doe, I'm still getting the same. Oops. Am I doing that right? Yeah. I'm still getting the same thing. I'm getting the whole of you. I'm still getting the same thing. But it did restart in the background. So that's why it reloaded. That's another nice thing. Yay. 
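A condensed sketch of the view class in this step: the class-level default route, the property the hello template uses, and the POST and delete views told apart by predicates. Template names, the new_name field, and form.delete follow the tutorial; the dict keys are illustrative.

    # tutorial/views.py
    from pyramid.view import view_config, view_defaults

    @view_defaults(route_name='hello')         # default for every method below
    class TutorialViews:
        def __init__(self, request):
            self.request = request

        @property
        def full_name(self):
            # Built from the {first}/{last} segments matched by the route;
            # hello.pt can read it as ${view.full_name}
            return '%s %s' % (self.request.matchdict['first'],
                              self.request.matchdict['last'])

        @view_config(route_name='home', renderer='home.pt')
        def home(self):
            # Overrides the class-level default route
            return {'page_title': 'Home View'}

        @view_config(renderer='hello.pt')
        def hello(self):
            # No route_name here, so the @view_defaults 'hello' route applies
            return {'page_title': 'Hello View'}

        @view_config(request_method='POST', renderer='edit.pt')
        def edit(self):
            # Same route, but only for POSTs (form submissions)
            return {'page_title': 'Edit View',
                    'new_name': self.request.params['new_name']}

        @view_config(request_method='POST', request_param='form.delete',
                     renderer='delete.pt')
        def delete(self):
            # Only POSTs carrying a 'form.delete' parameter land here
            print('Deleted')
            return {'page_title': 'Delete View'}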
I liked to reloading. So let's get rid of that. Restore it. It'll reload. And I still have the same deal. So now if I say Steve piercey and save that. It'll say, Hey, you submitted Steve piercey. And I still have the same deal. So I'm going to go back to the edit view. It would go to the edit view and pull the edit template. And it would say you submitted. New name. Which we defined back up here. Yay. Is that cool? So let's go back to our form. And now let's say, what happens if we do the delete you? We don't really do much. It's just like, it just, all it's saying is that we're going to print to the console that, yeah, we're going to delete it. And it says, Hey, here's your, here's your viewing for what you're going to do when you're deleting. So that is, that is quite a lot to absorb right here. But what's really cool is that there's a second. So you talk, we talked about predicates. So the default predicate, the request method default predicate is get. And if we override it, we can say that the request method is post and we can not only that, but we can supply even more predicates. So if there is a request parameter of form.delete. We can say that. See that. So when we request form.delete. Oopsie. I shouldn't have done that. There we go. When we click the submit button for delete, what we will get is that, oh, hey, when form.delete is in the request. Object as a parameter, we can use the element of form. And then we'll use the delete template as the renderer instead of the edit template. And we'll return this response. Quite a lot to absorb there. This is a really meaty, meaty step of pyramid. So I think that's a good point. I see that Paul made a comment in the chat. But yeah, as far as, like if you send an accept header with application slash Jason, you could also just say return the renderer as Jason. Thank you for that. And then we'll add the return header. He posted in the chat window another way of returning a, using the accept header as a predicate. That's a really useful tip. Thank you. And with that, we're at coming up to the nine o'clock hour. And so let's take another break, another five minute bio break. And let's reconvene it. Sorry, I just left because my wife comes to my kids need something and I have to go to help her. Oh, okay. Perfect. So if he, is it going to take you out for the rest of the day? Or just for five minutes. Okay. No, just for five. No, no, I just leave before when you say me something. Oh, okay. No. I'm just going to go to the kitchen and I'm going to go to the kitchen. I need a bit of help, but. Right now it's all it's fine. And I can continue. Yeah, that's okay. We need to take a break. Anyway, so let's come back at nine oh four. And I can hear my wife. She's getting ready to leave. So I have to say goodbye to her too. All right. I'll see you guys in a second. Bye. You You You You You You You You You You You You You You There you go. All right, we're all back. So we're going to, we have like six steps left. So I'm going to try to keep us timed to like 10 minutes a step or less. So here we go. So here we go. Logging. Important stuff. So when something goes wrong, how do you figure it out? Look at the logs. Get some information. Collect as much data as possible. Don't jump to conclusions. Because you'll be wrong. So logging is really helpful. Logs don't lie. They just give you facts to use when you're. Troubleshooting. All right. So let's do that. We'll copy our view classes. Step. And install it. We will add some blogging statements. 
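Those logging statements look roughly like this, shown as plain functions for brevity; the message text is illustrative:

    # tutorial/views.py -- a logger named after the module, used per request
    import logging
    from pyramid.view import view_config

    log = logging.getLogger(__name__)   # 'tutorial.views', matched by the
                                        # [logger_tutorial] section in the ini

    @view_config(route_name='home', renderer='home.pt')
    def home(request):
        log.debug('In home view')
        return {'name': 'Home View'}

    @view_config(route_name='hello', renderer='home.pt')
    def hello(request):
        log.debug('In hello view')
        return {'name': 'Hello View'}

development.ini then declares the two loggers (root at INFO, tutorial at DEBUG), a console handler, and a generic formatter, which is what the next part walks through.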
And then we'll add some more. Let's edit our I and I, our configuration file. Oops, I missed. So for this one, we have their part standard stuff that we had before, but now we added some logging configuration. We say, we're going to have a couple of loggers. One is the root blogger and the other is the tutorial logger. We're going to add a new one. Or a tutorial. For the tutorial logger, we're going to say, we're going to. Set the debug level. No handler. And just say it's name is tutorial. For all handlers are going to be a console. For all four matters, we're going to use generic formatting. And then for the root logger. We're going to set its level to debug level to info. And said it's the console. And then the console handler will be handling the stream. And so forth. And finally, we specify a format for logging. Let's make sure I test still pass. What I do, I forgot to set up it. Didn't I? I skipped over something. Didn't I? What did I skip over? Did I skip over this? I didn't skip over that. Gosh darn it. Logging. Got development. I mean, logging. Okay. Good. And views is annoying. Good. The tests are not. Oh. Something else is wrong here. What is wrong? I think we got a bad test that we need to fix. Anyway, let's just see if it can load our app. Yeah, it's serving. So, I'm going to go back to the test. When I visit these URLs, you can say, each one of these. Each time I visit the URL, a logging statement is returned. Depending on which URL I visit. So, that's super basic logging. For lots more details about logging. There's lots more details for logging. One of the things that I found really useful is using Sentry. As a logging tool, it's super easy to configure it. It's just a couple of lines of code in your, in your in it.py file. And you're ready to rock and roll. And then you can go back to the URL. Combining that with SQL to me. Yeah. It really does save your, save a lot of frustration, especially when you've got an application deployed in, in production. Okay. Any questions on logging? I'm going to make a note here too. I'm going to go back to. Just quickly append that we need to fix tests. On this issue. Thanks. Test. On this step. Okay. Let's go on to our next step. So this is, we're not going to get into authentication just quite yet, but sessions is part of it. So we're just going to focus on session data at this time. So the sessions are like, they're just data that is saved usually in a cookie. And I'm going to go to what a cookie is, but it's one, it's just like a tiny piece of data that gets passed along with every request. That you make from the web browser. So let's jump into that. We'll copy the view classes and put it into sessions and install it. And then our sessions in it. Hi, we'll edit this. You have a signed cookie session factory. So here in our unit. Hi. We are a session factory is. We're going to make sure that we have a set of tools. And then we're going to put it in the session. So basically. You throw stuff into it. In this case, a secret. Which is something that just says, okay, we're going to make sure that nobody else can break these cookies. And we're using the secret, but you typically would not put this hard coded. You'd have like some environment variable. So that would be your cookie. So that's a good thing. So that's a good thing. And then we're going to do some more. Encryption key or something like that. But for just demonstration purposes, we're using an actual hard coded string. Not the best practice. So. What we do here is that we're. 
We are. Sending this off to say, Hey, we, we need to. Generate a signed cookie. And the configurator will then just. Do that by taking your session factory and then returning that. And that's, that's pretty much all you need to do to. Generate sessions. Let's then modify that and actually use the session stuff. By looking at the view stuff pie. And here. So whenever we enter into our. Pyramid views. We will always include the request objects. And a property that'll come be passed through. This is just a Python property gets passed through. We'll just run this function called counter. And if the. The counter key exists in the session. And then. When the session comes through, if that data is a. Exists. If the counter key exists in the session, then we'll increment it by one. Otherwise we will create the session. And initialize it as one. Does that make sense? So. Maybe we'll start clicking. And so let's go ahead and. Update our template to. So that we can actually see what we have in the session. Of course we can also use a diva toolbar to do that. Let's actually say here's that counter that session data. And let's run our tests. And we'll see what we can do. And then we'll see what we can do. And then we'll see what we can do. So let's hope that this time they pass. What the heck. It did it again. Torio. All right. We're going to have to fix that later. Let's run our app. Okay. And go visit. And now we can go back to our session data. And we can actually see what we have in the session. So let's go back to account one. Now let's go to howdy and see. Oh, it went to count two. If we reload, it'll keep increasing. So that session data is getting passed around. If we open up the toolbar. Should be seeing session. So on the way in, we're saying, Hey, our current counter value is four. And on the way out, it's five. What happens if we reload? And we're going to see that. And I got to expand this so that we can actually see it. On the second next request. Ingress is five. Egress is six. So that's on the way in and the way out. I love this new feature. God, I love this so much. If you want to be able to debug your sessions. Oh my God. This is so nice. And so anyway, that's, that's just another little. That's just a little bonus. Yeah. As fashion is from some point. Cookies are from some point of view. Old fashioned and old style. How is it about local storage and the more modern technologies. Of the browsers to handle all the data. Is there already some kind of factory. Or. Is there any. Is there any. I don't know if there is. There. Has been class included. There. No, but there's nothing stopping you from using. From creating one. It's. I know what you're talking about. I'm not sure. I might be a really cool feature request. Or if not a feature request, at least. Something that could be added to the pyramid cookbook. A recipe. So we'll take it quick. And then we'll go back to the. The only way you can communicate is with a cookie. That's why we use a cookie for a storage of the session. Right. And you have to be able to. Yeah. I'm not. I haven't really experimented with it. So I'm not super familiar about it, but. I mean, I guess the idea is in your response to the user's browser. You say, hey, store this in local storage. I don't know how to do that. This is not possible because. On the server side, you don't have, you don't have access to the local storage of the browser. Oh, just only in cookies. You don't have access to cookies. So you don't have access to the cookies. Okay. 
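Setting the local-storage question aside for a second, the session wiring in this step boils down to this sketch; the secret is hard-coded only for demonstration, as noted above:

    # tutorial/__init__.py -- sign the session cookie with a secret
    from pyramid.config import Configurator
    from pyramid.session import SignedCookieSessionFactory

    def main(global_config, **settings):
        config = Configurator(settings=settings)
        config.set_session_factory(SignedCookieSessionFactory('itsaseekreet'))
        config.include('pyramid_chameleon')
        config.add_route('home', '/')
        config.add_route('hello', '/howdy')
        config.scan('.views')
        return config.make_wsgi_app()

    # tutorial/views.py -- bump a counter stored in the session on every hit
    from pyramid.view import view_config, view_defaults

    @view_defaults(renderer='home.pt')
    class TutorialViews:
        def __init__(self, request):
            self.request = request
            session = request.session
            if 'counter' in session:
                session['counter'] += 1
            else:
                session['counter'] = 1

        @view_config(route_name='home')
        def home(self):
            return {'name': 'Home View'}

        @view_config(route_name='hello')
        def hello(self):
            return {'name': 'Hello View'}

    # home.pt can show the value with ${request.session['counter']}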
So you don't have access to the cookies. Okay. So you don't have access to the cookies. Okay. That's the. Oh, I see what you're saying. Yes. Yes. Yes. That's why we put. Usually. The cookie storage is something you used to, to take on the, on both sides on the server side and on the client side. Okay. So this is your, what Alex is actually referring to as something that you can use to do the same thing. You can use any front end. You can use any front end. Yeah. So can you use react with pyramid? Yeah. You can use any front end JavaScript framework. Yeah. Obviously I'm not a front end developer. So I don't go there very often. I mean, if you are using a front end framework, if you want to share something with the vacuum, the only way or passing parameters or just using a cookie. To get to visible on the server side and on the client side. Right. Cool. One other thing that is really useful is. We use sessions for what we call flash messages. Flash messages are those types of notes. So. Here's an example of one. Okay. So when I submit this. And it's empty. It'll say, Oh, hey, there's a problem. This is a flash message. And there you go. So. That's that. And so those are the types of things that you can actually pass along in this session as well. And that's super helpful when you do a redirect. After submission is successful. Make sense. What we do in that case is we. We add it in the session instead of like the post response. If that makes sense. Okay. And there's more information about how to use flash messages and pyramid, which is really helpful, but we won't cover that here. Any other questions. Okay. So, I showed you something really cool. I'm the maintainer of D form. And yeah, it's all old school. It's not shiny. It's not the latest. Awesomest thing. But I'll tell you what. I. One of the things that I like about D form is that it's all server side. And so, you know, I'm a person team. So D form has been my savior has been like, it helps me to make money. If you're part of a team, then, you know, you have somebody who just does the front end. You have somebody else who does just the back end. And you know, you have business logic. You have a team. So that's what I'm really looking forward to seeing. And I see people who are specialized and they're really good. Those things. I'm more of a back end type of person. And so that's what I focus on. And D form is one of those things that allows me to create a project without. Without having to learn a whole bunch of new technologies. And it gives me everything I need. It gives me security. It gives me all the flash messages. provides validation. So like in the example that I just showed, one of the things it validated for was that you had checked at least one of the options. There's a lot you can do with it too. I see a hand up. So Alexander, do you have your hand up there or is that just a leftover? Just a leftover from the question before. Okay. Well, just about the form, it's the QVLN to Django forums. I'm sorry, can you repeat your question Jordy? The form, it's the QVLN on Django forums. It's more or less the same, server-set forums. Yes and no. It's kind of like Django forums, but what I like about it is that it's highly customizable. I don't have to use the default, anything in D form. In fact, I know, but that's the same with parameters or so. Yeah. So one of the things that you'll notice that this is using bootstrap 3, I work on the next release of D form 3.0. I switched to bootstrap 4.5. I could use any front end at all. 
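The flash-message pattern demonstrated a moment ago is also session-based; a sketch with made-up route names, since the demo app itself isn't shown here:

    from pyramid.httpexceptions import HTTPFound
    from pyramid.view import view_config

    @view_config(route_name='form_handler', request_method='POST')
    def form_handler(request):
        # ... validate and save the submission here ...
        request.session.flash('Your changes were saved.')
        # Redirect; the message survives in the session until it is popped
        return HTTPFound(location=request.route_url('home'))

    @view_config(route_name='home', renderer='home.pt')
    def home(request):
        # pop_flash() returns the queued messages and clears the queue
        return {'messages': request.session.pop_flash()}

    # home.pt:  <p tal:repeat="msg messages">${msg}</p>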
We've developed things where the default select widget wasn't so hot, so we went with select 2. Oh, that's pretty cool. Maybe we wanted to use tags, multiple tags. So add one, two. So you can just add your own custom widgets like crazy. This is one of the other things about D form that I really like, is that I can customize things so that they do exactly what I need. Pretty cool. Can I generate the form from an SQL optimimodel? There is a package that does that, but I have found that it almost never is direct one-to-one. Does that make sense? Yeah, I know. When you start Django Form, you find that it's not what you want because in the layout page, it's more compa, so you have a light information. It's like, okay. Exactly. Especially when you use a relational database, you may have data from one table and that table and this one over there, all trying to get onto one form. In my pyramid talk this week, I'm actually going to demonstrate a super complicated data object. It's not super complicated, but it's four levels deep. That's really plus. It's calling an API. It's getting data from an API and a database and the user's request all at the same time. You're just like to smoosh this into a single form. Try doing that with Django Rest Framework or Django Forms and you're just going to be like, just shoot me now. Yeah, I know. I'm coming from Django. Yeah. That's why Pyramid has been a godsend to me. I love it because it allows me to pull data from various places. Dform is awesome because it allows me to create whatever the heck I want and style it however the heck I want. I don't have to learn anything more complicated than basic jQuery. Isn't that great if we could just all go back to jQuery and say it's done? Anyway, that's a soapbox I think I'm going to watch. Right now you don't need jQuery. The browser has the same API as the jQuery. It's no longer muted. If you take something like Htm or something like this, you have a super good template on the client side and it makes a lot more sense to come back again to template applications. You know, it's just like this big circle. What goes around comes around it. What is it in the Alice's restaurant? We'll get you the next time it comes around on the guitar. All right, so let's get back to some forms here. Forms, edit, set up pie. Let's go there. I've got to add some deform here. And install that, buddy. Let's go here. And we've got to add some static stuff from deform. So we'll do that in a tutorial in it. We're going to actually add, we're going to have our own wiki. Yippee. Let's change our views. Lots of views here. I'm kind of just glossing over this. Yeah, there's a lot to suck up there just to absorb. And let's get our wiki view template. New million wiki. Wiki.p.t. Our page edit. And actually let's just copy and paste and call this one wiki page view. And copy and paste and update our tests. And run them. Still got those same things. We're going to have to fix those tests. It's annoying. But let's just go ahead and run our app. Okay. And visit the app. What do I do? Let's double check. Check. In it. View. Okay. Should have viewed. What did I do wrong? I hate that. I did set up pie. Okay. I did install. That's good. Let's go. I bet I forgot to copy a line. Okay. It's all there. So that's all there. The views. Double check that I can. Oh, I bet I forgot something right here. Yeah. That's stuff. There we go. Did I do wrong? No. Seems like colander it's not installed. Yep. Let's double check. Check. I mean forms. 
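While that install issue gets sorted out, here is a trimmed sketch of the colander/deform shape this step adds; route and template names follow the tutorial where the transcript mentions them, and the actual storage of the page is elided:

    # tutorial/__init__.py -- deform ships its own CSS/JS; expose it
    #   config.add_static_view('deform_static', 'deform:static/')

    # tutorial/views.py
    import colander
    import deform
    from pyramid.httpexceptions import HTTPFound
    from pyramid.view import view_config

    class WikiPage(colander.MappingSchema):
        title = colander.SchemaNode(colander.String())
        body = colander.SchemaNode(colander.String())

    @view_config(route_name='wikipage_add', renderer='wikipage_addedit.pt')
    def wikipage_add(request):
        form = deform.Form(WikiPage(), buttons=('submit',))
        if 'submit' in request.params:
            controls = request.POST.items()
            try:
                appstruct = form.validate(controls)   # colander validates
            except deform.ValidationFailure as e:
                # Re-render the form with inline error messages
                return {'page': 'Add', 'form': e.render()}
            # ... store appstruct['title'] and appstruct['body'] somewhere ...
            return HTTPFound(location=request.route_url('wiki_view'))
        return {'page': 'Add', 'form': form.render()}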
I think deform should have pulled in colander. All right. So let's just. I guess it needs to be explicit. So we'll put it in there. Okay. What? No model name colander in views. All right. Why is that? What? I must have totally hosed myself here. Import colander. It's right there. Right. And informs. And requires. It should have installed. And I put it in the wrong place. I bet I put it in the wrong place. I put it. See if you dummy. Oh, yeah, I totally put it in the wrong place. That's why. Okay. There we go. Now we got colander and deform. Sorry about that, folks. Okay. So that's, that was tricky. It actually, I shouldn't have just relied on copying that because this actually is a little bit different. So it's that. That tutorial docs. Fix that too. Okay. So let's just run the test and see if we can get them to pass this time. Still getting some field tests to see if we can actually do that. Oops. Okay. Now we got a running app. Let's see if we can get it going. Woohoo. Okay. So we got our default view. Let's see what the forums actually look like. Oh, come on. Why do you hate me? We keep page. Add. Add. Add. Add. Add. Add. Add. Add. Add. Oh, I missed the page. Oops. Look at you. Add edit. Yeah, I did. Oh, actually, I just named it incorrectly. It should have been. Not just. Edit, but add. edit. Okay, now it should work. Oh, look at that. So now we got a deformed thing. So my title and whoop-deew. Form is awesome. And we get all this stuff of, you know, you can, of all this stuff saved in there. Let's submit it. And it returns a response. Let's go back up. Now we have this new page. Oh, well, that's cool. Yay. It's still there. Let's go back up page 100. Oh, yeah. There's content. Let's edit this content and say, you know, we can do a whole bunch of stuff, submit it, and yay, the content is there. Isn't that cool? All right. So that was like, like, sorry for the mistake there on my typing. I'm naming this thing and also putting deform in the wrong block. I should put in the requires. But anyway, those are just mistakes that, you know, every developer makes them. So in this step, basically, what we're showing is that we have this really rich library called deform that allows you to, out of the box, just start using some forms without a whole heck of a lot of effort. And they look pretty decent. You know, you get standard bootstrap layout. We also, we also went into, we didn't really go into it. But the, you are able to, I didn't even go into a lot of this. A lot of this stuff is actually, I think we didn't go much into a database yet, because we're storing all of this stuff, I believe in a session and not anywhere else if I am correct, if I remember this correctly. But I'm not going to spend time going into each of the views. But the only thing that I would have you, where is it? There is one thing here that's kind of important. Yeah. Actually, I don't want to, this will take too much time to go into detail. And I do want to keep moving forward on this one. But anyway, this one is really rich about and how you would actually develop an application. Where one of the things that's really useful is matching on like any kind of arbitrary, matching on any kind of arbitrary key. So for example, we were able to use this as the match ticked item. And based on that UID, we return, or the unique ID, we return a page according to what that was requested. Okay, I know I'm kind of hand-wavy on this one, but for the sake of time, I'm going to have to be hand-wavy on this one. 
Any, I can answer questions on D form without going into the code. Okay. All right, let's move on to the next one. Let's, what? Because the next one is how to use databases. And I think everybody uses a database at some point in their life. So why don't we do that too? Okay. We're going to copy our previous step. Instead of saving data willy-nilly, I'm going to actually copy that whole thing this time so I get the right stuff. We're going to use a couple of things here. Databases and get the setup pie open. A couple of things that we're doing, we're adding three requirements. Pyramid TM is the pyramid transaction manager. I believe that actually depends on the ZOP transaction manager. SQL alchemy, that's the ORM that maps your models to the database. And ZOP SQL alchemy, that's an interface that ties Pyramid to SQL alchemy. Now, how is that a hard time saying that? We also add a new script here to allow us to initialize our database. Okay. Let's add some stuff to our any file. And here, we're basically taking the logger and a bunch of other stuff. But here, we define an SQL alchemy URL, which will be our database. And we have a logger for that. So we'll be able to get some logging statements out of that. That's very useful, of course. We're going to add a step for configuring our application. This is configuring our database so that we can actually use it within our pyramid app. There's some useful stuff here. Basically, we're defining what ORM we're using as the engine. We're binding SQL alchemy to our database session. And then we have a base object that is using that engine. Finally, we configure all this good stuff and let it run. So here we go. We're going to, now, here's the database script. We're going to put that in initialize db.py. I'm going to spell that correctly this time. New Python. Paste that. And now that we changed our setup pie, we're going to reinstall our app. And install SQL alchemy and the other dependencies. So SQL alchemy and Pyramid TM, as well as the tutorial. Our script that we also just created in the previous step refers to model.py. So let's create some models. These are database models. And then let's actually run this script. And that'll create our schema and populate it with data. So it created the table with these columns and their attributes. And it populated. So, and then it committed that. Then it started another transaction by inserting data into the database and committing that. Okay. And by virtue of that, we should see a database pop up. Oh, looky there. Let's take a look at that. There's our schema. There's our wiki pages database. And oh, yay, look. There's our lovely unique ID title and body. Okay. So, yay. We have to update our views. So we'll do that. So we're using a database. I'm going to be super hand way beyond this, too. So now the page is totally gone, but we're actually using a database to get stuff. And we'll also update our tests. And let's run our tests. Okay. They pass. Let's run our app and make sure it still works the way it did before. All right. And let's visit. Yes. So now we didn't have to populate anything except like that one record. If we want to edit it, we can submit it. And let's see if it's updated in the database. It's short in. So that's basically the big difference here between this step and the previous step. So in this step, all the data is now being stored in the database. I see we just got a couple of people who just dropped in. Hi, Seth. Hi, Anand. 
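A condensed sketch of the database wiring in this step; the tutorial splits it across models.py, __init__.py, development.ini, and the initialize_db console script. Import paths match the SQLAlchemy 1.x era the tutorial targets, and the SQLite filename is illustrative.

    # tutorial/models.py
    from sqlalchemy import Column, Integer, Text
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import scoped_session, sessionmaker
    from zope.sqlalchemy import register

    DBSession = scoped_session(sessionmaker())
    register(DBSession)            # join the session to the transaction manager
    Base = declarative_base()

    class Page(Base):
        __tablename__ = 'wikipages'
        uid = Column(Integer, primary_key=True)
        title = Column(Text, unique=True)
        body = Column(Text)

    # tutorial/__init__.py, inside main() -- bind the session to the configured DB
    #   from sqlalchemy import engine_from_config
    #   engine = engine_from_config(settings, 'sqlalchemy.')
    #   DBSession.configure(bind=engine)
    #   config.include('pyramid_tm')          # commit/abort per request

    # development.ini
    #   sqlalchemy.url = sqlite:///%(here)s/sqltutorial.sqlite

    # A view can then simply query:
    #   page = DBSession.query(Page).filter_by(uid=uid).one()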
Do you have any questions on using SQL Alchemy and as a data storage or as an ORM for SQL data storage? Excuse me. In my pie chunk, database inspector, I don't know. Not found schema. How I can find it. I can find database schema in patcharm database inspector. Okay. So there's a couple of things. You might not have initialized your database. So to initialize your database, you would first have to have the script. So once you have that script, there it is. Okay. Yeah. So once you have the script in your project, then run the script. That'll initialize your database. And then you should see this database up here in your project. To inspect that, I just double clicked it or just opened it. And then I'm able to inspect its properties. While PeeCock is backtracking. Oh. I got to trace back here. Yeah. Mm-hmm. Oh, I got to trace back here. Yeah. Mm-hmm. How about if I can stay after and try to review what went wrong on this step for you? I'd be happy to do that. But I do want to allow, let's keep going with this training. And I can come back after the training's over. And we can take a look at what happened in your application to see what missed, what we missed. Will that be okay? Mm-hmm. All right. Yeah, I'll stay after the training to help you through, to give you the help and help you get through. Okay. So, that, yeah. Okay. Let's go on to our next step. And we're still flying through this. Logins with authentication. So here we're just going to introduce some login and log out views to our application. So, to do that. We're going to insert a new requirement, B, Crypt. And that's in authentication. And set up. Okay. B Crypt is now added. Let's install our project in edible mode. And now we have to add to our development.init, send configuration, basically saying, Hey, here's our secret. Again, this is probably not how you want to store your secrets in version control. You're probably going to want to use environment variables, but here you go. And just for demonstration purposes only. We're going to modify our init.py. Again, this time, how you would use the setting secret. And let's add a security module. And then we're going to be defining a bunch of users and the groups. So right now we just have that the editor is going to be what we're saying here is that the editor user is a member of the group editors. The user viewer is not a member of that group. Okay. And in our, when we're doing a group finder, we're going to say, Well, hey, does this user ID exist in the users. And if they do, then they say, Okay, they're in the groups and you say, Okay, is this person in the group allowed to get in. And add a login template. There we go. And a log out on the home. And no testing here, so we're just going to run it. And visit. And let's see how that works. So we're going to go click login. And I don't know what my username and password was. So what was it was it editor. I cannot type for the life of me. There we go. Editor. Editor. Did that work? Yeah. No, don't add that. And it says, Okay, can I edit. No, just as I can visit hello. I'm not logged in. But the thing here is that I am authenticated. I am logged. I was logged in. That's the important thing. So if I visit hello. Yeah, I'm not logged in. But as soon as I log in as editor with the password editor, then I'm logged in. Stop doing that. But that's going to persist throughout my session. So all the magic here is in a couple of things in our tutorial views. 
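Before walking through those pieces, the security module for this step looks roughly like the following; the user names and group mirror what was just described, and bcrypt does the hashing:

    # tutorial/security.py
    import bcrypt

    def hash_password(pw):
        hashed = bcrypt.hashpw(pw.encode('utf8'), bcrypt.gensalt())
        return hashed.decode('utf8')

    def check_password(pw, hashed_pw):
        return bcrypt.checkpw(pw.encode('utf8'), hashed_pw.encode('utf8'))

    # 'editor' belongs to the editors group; 'viewer' belongs to no group
    USERS = {
        'editor': hash_password('editor'),
        'viewer': hash_password('viewer'),
    }
    GROUPS = {'editor': ['group:editors']}

    def groupfinder(userid, request):
        # Return the user's groups if the userid exists, else None
        if userid in USERS:
            return GROUPS.get(userid, [])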
Well, first of all, we had to install some requirements, but then in our security, we had to set up an authentication policy and an authorization policy. We'll refine this in a minute. But here we're just saying, hey, this is what we're going to use to secure the user. So we got that there. And then in our security here shows that we actually hash the password, which is a good thing to do. Everybody should be hashing passwords so that they cannot be decrypted and then authenticating the user from that. And again, I already explained about how a user is determined to be a part of a group using this group finder function. In our views, when somebody is coming in, we have some logic here to make sure that don't use the login form as where somebody came from. And once the form is submitted, we grab the login and the password. We check the hash password. And if the hash password matches, if the hash password is present and we check the password using the check password function, which we define here, if that is OK, if it actually matches, then we set the remember header to remember the request and the user's login and then redirect the user back to the page they came from. If it's not a valid password, then we say failed login. So we can actually do that too. And it says failed login. So a lot going on in that one. I mean, no one kind of like breezing through it. But it really is, it's not too difficult. There's a lot of pieces and a lot of steps involved, but this is a really basic example too. You don't have to even use this as far as your default authentication or authorization policies. You can set up your own. One of the apps I'm using uses OAuth. And that's a really complicated process. You have to send the user to a form, redirect them back and then get another just a lot of back and forth until they finally get authenticated and authorized. So yeah, one more question. Other more complex things like JSON web token, authentication or the other, some kind of larger authentication systems because if you're working in a large environment, those are normally the things you use. Right. So again, you can write your own security policy and you'll have to pretty much. There are, let me go back here to try Pyramid. We do have quite a few things for authentication and authorization. So lots of different tools. Like there's the JSON web tokens that you mentioned. We even have LDAP3 now that's been updated. And other super useful one is, so warehouse. Oops. What? But that's actually a project. Did I misspell warehouse? Oh, I think they call it PyPy. They do that. Yeah, they just, I'm sorry. What? That's weird with this. That's a project that powers PyPy, no? Yeah, it's weird. I thought it would be under. What? All right. That makes me angry. Community. Powered by Pyramid. Yeah. Yeah. All right. That makes me angry. Community. Powered by Pyramid. Warehouse. PyPy. Okay. I was doing PSF. So. PyPy runs on Pyramid. And I'm not sure why. I think I did. I do wanted examples. I don't know why the code isn't showing up. I guess I'm in. Yeah. Minimal mode. So. Look at the source code of warehouse for other examples of. Authentication. I believe that's under accounts. But I can't remember off the top of my head. I don't know. I don't know. I don't know. Pretty much anything. If you need an example for how to do it well. In Pyramid. Check out warehouse as a really good source. It's very, very useful. Okay. I know that. Yeah, you can do any kind of authentication. You can do any kind of authentication policy. 
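In code terms, the pieces just described look roughly like this sketch, using the Pyramid 1.x-style policy pair that 2.0's single security policy will eventually replace; the settings key for the secret and the helper names are assumptions following the step above, and the view_config decorators with the login.pt renderer are omitted for brevity:

    # tutorial/__init__.py, inside main()
    #   from pyramid.authentication import AuthTktAuthenticationPolicy
    #   from pyramid.authorization import ACLAuthorizationPolicy
    #   from .security import groupfinder
    #   authn = AuthTktAuthenticationPolicy(settings['tutorial.secret'],
    #                                       callback=groupfinder,
    #                                       hashalg='sha512')
    #   config.set_authentication_policy(authn)
    #   config.set_authorization_policy(ACLAuthorizationPolicy())

    # tutorial/views.py -- the heart of the login/logout handling
    from pyramid.httpexceptions import HTTPFound
    from pyramid.security import remember, forget
    from .security import USERS, check_password

    def login_post(request):
        login = request.params['login']
        password = request.params['password']
        hashed = USERS.get(login)
        if hashed and check_password(password, hashed):
            headers = remember(request, login)    # set the auth ticket cookie
            return HTTPFound(location=request.params.get('came_from', '/'),
                             headers=headers)
        return {'message': 'Failed login'}        # re-render the login form

    def logout(request):
        headers = forget(request)                 # clear the auth ticket cookie
        return HTTPFound(location=request.route_url('home'), headers=headers)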
And any kind of authorization policy as well; those are available. We've unfortunately run out of time, and the next step would be authorization, which is like, oh yeah, that's the one I wanted. That would be the last step in the tutorial, authorization, and it talks about concepts that are essential for pretty much every application out there, where you want to deny access to people for specific resources. The next thing I would strongly recommend people take a look at is the SQLAlchemy and URL dispatch wiki tutorial. That has a more well-structured project, and it really sets you up to say, okay, now I understand all that stuff, now I'm going to actually put it into a real application that has authentication and authorization; it even talks about how to distribute your application. So that would be the next step for your independent learning. I'm going to have to pause for now and just ask if anyone has any questions, comments, or feedback at this point. Thanks so much for the tutorial; it has covered a lot of things, it has been dense and hard, but very nice. A couple of things about Deform: we will try to use it as soon as possible. You know, I'm very, very close to releasing Deform 2.0.15, and that will include changes that use a new widget. I'm replacing, supplanting, the select widgets with Select2, and the reason is that, even though it doesn't show in their demo, they support Bootstrap 4. All these select menus are like the bane of society. The other feature is that there has always been a problem with rendering read-only forms: what is disabled, what gets passed through, what is actually clickable and what's not. So look for 2.0.15; that'll be the big happy release that finally makes everything clear. The last thing I'd like to point out, as far as what's coming next: if you look at your schedule and just search for Pyramid, you'll see there are two talks, one about Pyramid and a Pyramid application that I've been developing for about two or three years, and then another talk by Matt Wilson about role-based access control. That's going to be a good talk; I really recommend you check that one out. And lastly, we're going to have a Pylons Project sprint, and that'll be on Pyramid, Deform, WebOb, Waitress, and all the other projects under the Pylons Project, so I invite you to join us for those. All right. If there are no other questions, what we'll do now is stop the Zoom session and wrap it up, and we can go over to the Slack channel for the training 2020 Pyramid session to follow up with any other questions or issues you may have had that you want to address with me. And as always, if you need to get hold of me, I sent you an email, so you have my email address, and I welcome your comments, feedback, and any questions.
Thank you, everyone. Thanks so much. Bye-bye. Thank you, enjoy your day. Stay tuned, see you the next day, or at least at the sprint, hopefully. Bye-bye.
|
Chapters: 0:00 Introductions, 14:15 Overview of the Pylons Project, 15:34 Pyramid Support and Resources, 16:54 Quick Tutorial for Pyramid Introduction, 18:53 Create a Pyramid Project with PyCharm Professional, 30:52 Requirements, 35:52 Tutorial Approach, 36:43 Prelude: Quick Project Startup with Cookiecutters, 37:24 01: Single-File Web Applications, 49:20 02: Python Packages for Pyramid Applications, 55:55 03: Application Configuration with .ini Files, 1:06:50 Break 1, 1:12:11 Alternate Application Configuration, 1:13:33 04: Easier Development with debugtoolbar, 1:20:29 05: Unit Tests and pytest, 1:29:42 06: Functional Testing with WebTest, 1:33:05 07: Basic Web Handling With Views, 1:38:42 08: HTML Generation With Templating, 1:46:10 09: Organizing Views With View Classes, 1:49:20 10: Handling Web Requests and Responses, 1:53:12 Break 2, 1:59:20 10: Handling Web Requests and Responses (continued), 2:08:30 11: Dispatching URLs To Views With Routing, 2:15:27 12: Templating With jinja2, 2:21:51 13: CSS/JS/Images Files With Static Assets, 2:30:45 14: AJAX Development With JSON Renderers, 2:38:19 15: More With View Classes and View Predicates, 2:51:40 Break 3, 2:57:56 16: Collecting Application Info With Logging, 3:03:24 17: Transient Data Using Sessions, 3:13:36 18: Forms and Validation with Deform, 3:32:05 19: Databases Using SQLAlchemy, 3:42:43 20: Logins with authentication, 3:54:35 21: Protecting Resources With Authorization, 3:55:25 Next Steps and Conclusion
|
10.5446/55228 (DOI)
|
So this is the last session of the day for the R training. As I said, it will be focused on computing with Cloud-Optimized GeoTIFFs. For those of you that are new to Cloud-Optimized GeoTIFFs, I think it's going to be especially interesting. It looks like it's just another GeoTIFF format, but as you will see, it's magical what you can do, and you will see that the GeoTIFF basically becomes like a spatial database: you can do queries and overlays without downloading terabytes of data. So I'm going to introduce you to that, and we will run a little demo. We'll actually do one from Africa. I also have demos for Europe, but I will just show you where they are so you can play. I think the one from Africa is more interesting, because we modeled the distribution of cropland for Ethiopia, and it's a very nice example that shows a large dataset, 1.5 terabytes, that we subset, load into R, and process. So, Cloud-Optimized GeoTIFFs: how to use them. You can use them in QGIS. This one I just loaded, and I will do it one more time: it's a Landsat image for Europe. You see I can zoom in, and we can see that image and the values changing. And this image is not here locally on the machine; it is on a server, and we can load it and look at it, zoom anywhere around Europe, and also crop just the window we saw. We can run analysis without downloading the data. So that's one way to use a Cloud-Optimized GeoTIFF, directly in QGIS. The other way, which is probably better, is to use R and combine the terra package with GDAL. So you can use GDAL or the terra package, and then you can also do spatial overlay in parallel. I will show you that; it's super, super important. Think about it: when you have, let's say, 1,000 points and the Cloud-Optimized GeoTIFFs are maybe 10 terabytes, you don't want to download 10 terabytes, you just want the values at the 1,000 points, and then you parallelize. I have tested it with up to eight processes, so you can speed up the spatial overlay about eight times. Then you get the values, you do some data mining and modeling, and then you say: from all these bands, I only need these five, so you download only those bands, maybe 10 gigabytes. So it's a very intelligent way to use the data. Last week we had a summer school, and Marius Appel, who made the gdalcubes package, also very fast and efficient, gave a talk and explained Cloud-Optimized GeoTIFFs. There are three things about Cloud-Optimized GeoTIFFs. Number one, they have pyramids (overviews), so they are available at different scales, already pre-cooked and aggregated. Second, they have tiles, so if you want to download, you eventually download just a tile. And third, they are internally compressed, so there is a very low bandwidth requirement, the lowest possible, to pass the data. A Cloud-Optimized GeoTIFF works over HTTP range requests, so basically we use HTTP to access the data like in a geospatial database. And I'm really a raster person, by the way, so I don't believe in vector data except for points and lines, and even lines I will rasterize. I rasterize everything, especially polygons; I don't see any point in using polygons.
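Just to make that concrete before the demos, here is a minimal R sketch of opening a Cloud-Optimized GeoTIFF over HTTP with the terra package; the URL is a hypothetical placeholder, not one of the actual layers used in the training.

library(terra)
cog <- "/vsicurl/https://example.org/eu_landsat_summer_p50_30m.tif"   # hypothetical COG address
r <- rast(cog)    # reads only the file header, no pixel values yet
r                 # shows dimensions, resolution, extent, CRS, band names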
So I'm a really raster person, especially now with the cloud optinGTIF, there's nothing comparable to vector data. They are building some solutions which are called flat geobuff. So they're building some solutions, but it's nothing that is ready as the cloud optinGTIF. So we are going to talk about that. And as I said, you can watch this video, please, by Mario Sape last week. It's on our YouTube channel. We have also this tutorial, geospatial data tutorial. And if you look at that tutorial, you can see there's the example with the satellite data. So we have, as Carmelo mentioned, like 1,800 images, different bands. And these bands, they all have basically an address and also vectors. They are on the web feature service. So you can load all the point data, but you can also load these satellite images. So this is one band, for example, this is the Sentinel S2L2A product 2018. So going from the 25th of June, 2018 to the 12th of September, 2018. So it covers basically autumn, the third summer. It covers summer. So it's a summer band. And I can load it into QGS, as you see here. So I can do it one more time. Wait. I will do it one more time. So I stopped this one. I just do the same, or maybe I can add the other one, the Landsat. This one. So I can go to the layer, odd layer, vector layer, raster layer, sorry. And then I just say instead of file, I say a protocol. So instead of file, I say protocol. And then I can add this Landsat. And then it will take a bit of time when I say add, it takes a bit of time, about two, three seconds, so don't get discouraged with that. And then you can, now it's added. And now it's very dark, you see. It's very dark. So what I have to do, I have to switch the, I have to change the display. And I put usually, for example, two standard deviations. And then you see it goes from minus. So I have to put here five. So I have to do some manual fixing, because these are the actual reverberates. And when I do that, here we go, we get the Landsat image. And you see it's a 30 meter resolution. And I can turn off the Google on the back. So now it's just the image. And we can see actual pixels. And you see these are the, we clean up all the clouds, we clean up all the artifacts and fill in the gaps. All our images are about 98%, at least 19% pixels are available, except for some winter months that we have less than 98%. But otherwise, we guarantee 98% pixels. So these are proper analysis ready data. The analysis ready that many companies call analysis ready, it's not analysis ready, by the way. Analysis ready when you don't have to do any cleaning, which is, and there's no gaps. It's complete consistent. It's ready to go. So we made this data, which we now call analysis ready. And as you see, sometimes when I zoom in, it takes a bit of time. But you see, I don't download the whole image. My session shows that I am downloading data. So there's here. And so I am downloading. So it's something like seven megabyte per second. You see, when I move somewhere. So let me see if I can put this on top. Okay. View. So if I. Let's go here. So now I'm looking at the internet. This is always the top. So if I zoom somewhere here, you will see it will be downloading extra data. There could be sometimes peak. If I zoom out now again, he has to download. Now it downloads something like a 10 megabyte per second. And you see, it takes time till I get the whole coverage. Yes, it takes some time, but it's a question of, let's say two seconds. 
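The Switzerland example from the tutorial can be sketched in R with terra and rnaturalearth; the URL is a placeholder, and note that the country boundary has to be reprojected to the layer's CRS before cropping.

library(rnaturalearth)
library(terra)
ch <- vect(ne_countries(country = "Switzerland", returnclass = "sf"))
r  <- rast("/vsicurl/https://example.org/eu_landsat_summer_p50_30m.tif")   # hypothetical COG
ch <- project(ch, crs(r))      # match the layer's projection before cropping
r_ch <- crop(r, ch)            # only the blocks covering Switzerland are fetched
plot(r_ch)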
So if we had a bigger budget, we could speed it up by putting the data in multiple data centers, and then anybody in the world would get it very quickly, like with Google. But otherwise we are happy: we found a service to host the data at low cost, it looks like it's working, it's good enough, and it allows people to access the data remotely. So that's Cloud-Optimized GeoTIFFs. Who's using them for the first time? How many people? That's great, fantastic. So you see, accessing the data creates a kind of virtual layer. It creates the illusion that we have the data locally on the machine; we load it in QGIS, you see the layer, but we are just downloading a part of the data. First it looks at the scale you are viewing: the image you see is the aggregated value. If I go to QGIS and open the layer's pyramids, you see we have pyramids, so when I zoom in it picks the right scale, and then an HTTP request goes to the Cloud-Optimized GeoTIFF, which is just a single file on a server (you can also split it, but usually it's a single file). It fetches only that scale and only the subset you need, and displays it. So it's a very intelligent system that helps people access data very easily. In the tutorial we also explain how to get the points, and you can take Switzerland, for example: get the boundary of Switzerland, match its bounding box, and crop the image so you only get the data for Switzerland. And you can also use GDAL directly, which is also fantastic. So here, this is the city of Amsterdam. I can go to RStudio, open a project, start a new script, and increase the font size a bit so you can read it. We will start with gdalinfo. Let me just check the version I have: under Windows I have version 3.1.2, and this is compatible with Cloud-Optimized GeoTIFFs; versions below, I think, 3.0 cannot use them efficiently. So I run gdalinfo and point it at this Cloud-Optimized GeoTIFF. What happens here is that I query this layer, which is a virtual layer: it's not local, it's over HTTP, and I use the /vsicurl/ virtual file system prefix. I have to specify that in gdalinfo, otherwise it won't work; that's a little trick with Cloud-Optimized GeoTIFFs. When I query it, it doesn't download the data, it just reads the file header. The output is quite long, but I can see it's a pretty large image, 180,000 by 150,000 pixels, so it's the whole of Europe at 30 meters, the coordinate system is the European one, and the pixel size is 30 meters.
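The same header query can also be run from inside R; a small sketch with terra, again with a hypothetical URL.

library(terra)
cog <- "/vsicurl/https://example.org/eu_landsat_summer_p50_30m.tif"   # hypothetical COG
describe(cog)    # gdalinfo-style output: size, CRS, block size, overviews, bands
# alternatively: sf::gdal_utils("info", cog)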
And you will see the bounding box: it covers the whole of continental Europe. It has blocks of a thousand by a thousand pixels, it has overviews at multiple resolutions, and it has three bands. So I can get all the info I need about the layer. The next thing, I go to some working directory, which I'll just put under temp, and then I can run gdal_translate. Now imagine I create a local TIFF where I just clip Amsterdam: I know the coordinates of the bounding box of Amsterdam, so I basically crop that huge image, which is 3 gigabytes or more, and download only the part I need, just using gdal_translate. And it's very fast: off we go, and we get it really in a second. Then I can go to QGIS and open the file I just subset under temp. Here's the Amsterdam file we produced, and you see it's only three megabytes. I went from a 3 to 4 gigabyte file to just Amsterdam, and now this thing is local. I just make sure I copy the style so I can see Amsterdam; there are some missing values, but more or less I have it. This band is the P50, the 50th percentile. So I got that band downloaded just for Amsterdam, and now it's fast and local: I can load it in QGIS or in R and do some analysis. This is the advantage of Cloud-Optimized GeoTIFFs: it's super fast, and you don't really have to download the data to start with. You can just use QGIS to browse the data; you have thousands of layers, so you can open layers, and eventually, when you say, oh, I like this but I only need it for Amsterdam, you crop it. As you can imagine, if I had 2,000 files I could run gdal_translate in a loop, crop all the files to just Amsterdam, then load them in R and do the analysis. And that's the advantage: you can have a thousand people getting and using the data at the same time. It becomes like a geospatial database; that's really the key thing. And I will show you that we can also do point overlays, and we can parallelize that point overlay. I have these tutorials, including the OpenLandMap one, where you do this parallelization. So let me show you: this is OpenLandMap, and here under global layers there is a nice tutorial where you can test things out. So, point queries. I will just go and define the GeoTIFF; let me call the object tif. Then I load library(terra) and library(rgdal), and I put, for example, this Landsat image URL. So I defined it, and then I create a terra raster object from that URL. This is really magical now in the terra package: you basically define the URL of the Cloud-Optimized GeoTIFF layer, and from there this raster is ready for the terra package to do any spatial modeling. You don't actually have the data, but it's defined in the session.
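The gdal_translate crop can also be driven from R through sf::gdal_utils; a sketch with a made-up window (the URL and the -projwin values below are illustrative only, not the real Amsterdam coordinates).

library(sf)
src <- "/vsicurl/https://example.org/eu_landsat_summer_p50_30m.tif"    # hypothetical COG
# -projwin takes ulx uly lrx lry in the image's own CRS
gdal_utils("translate", src, "amsterdam_p50.tif",
           options = c("-projwin", "3970000", "3266000", "3985000", "3253000",
                       "-co", "COMPRESS=DEFLATE"))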
So it's defined and it's ready to go. So anything you do with it, it will work like the data's been loaded in the memory. You understand? So it's really magical because I can go now to, let's see, I do this overlay. So I can, this is a single point. Let me see. If I do single point, this is a single point, I just define it as x, y. And I do overlay. Now I put a longitude. I have to put some longitude where I have, longitude, longitude where I have values. So let me see here in QGIS, I will have to get these things switched to coordinates to EPSG, 4326, right? EPSG. So that's this one. So I switch to these coordinates. And now I look at some coordinates. So let's say 4.8, 4.81, and 52.339, 4.8, 52.33. So I define this. And now, if I were like, ah, I have the latlone, I have the latlone, but actually I did the wrong way. The layer has the projection system is this one, the European. So I did the wrong way. So sorry about that. So I have to switch back to the coordinate system, this one. No, sorry. Let's see. Let's see EPSG. I think it's 3 or 3.5, yes. That's the one. And now I have to get some coordinates. And the best is I think actually I will just, let me see if I click somewhere. And this gives me the coordinates and I can also, well, I can copy them somehow. Can I copy them? Copy attribute value. So this was the one. So now if we do overlay, we still get, we still have missing values. Let me see. It could be, I think I have to specify this. I think maybe I have to, let me just check that just a second. Maybe I have also wrong version of theta. This is Y, can be matrix. Let me see if I'm mixing the X and Y, no X and Y are correct. So this is the correct projection system. Let me see if I put here. This is the same, no difference. So I'm not getting, I should get these values, 3, 4 and stuff. But for some reason I'm getting all NAs. I'm just looking what did I do wrong. Okay, let's try this. No. So the, the TIF is defined and I have the coordinate system. So let me just try to do this thing. Well, it's a good, it goes, the things don't work also. It's nice so I can do some debugging. So just like I convert this into, so X, Y, and then I copy the coordinates from the TIF. This one. So let me see now. I want to see just in a Tera package it's a bit, Robert, how many different names for the projection system. How do we get projection system project? Yeah. Interesting. There's no way to get projection system. Let me see. How? See you guys. Okay. Thanks. That's the one. This one should work now. Okay, I need to put a point. So this one says, oh, I have to convert to the vector. Okay. It could be, I also have to update the Tera package. No, I'm hopeless. I cannot get the values. So let me see what they did here in this example. So this one obviously worked, but here I used the long lot. Here I used the long lot. And this one worked. Okay, let me, let me, I have to dig into it. Okay, so I tested it with the long lot layers. So let me just test this. If this works exactly in the code, then it should be fine. Okay, this one works. And then this one I just did with this. So I should get exactly the same as here. Yeah, this one works. So I think it's possibly this XY when I just send the coordinates. And I have to do the long, long luck. It's possible I have to see maybe have all version of Tera package also. But this one works if I take another, another layer. Let me open this layer here. This is a global layer, but we can also add it. So let's see. Okay. For some reason I cannot get this one. No. 
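For reference, an all-NA overlay like this is often caused by a CRS mismatch between the point and the layer; here is a minimal sketch of a single-point extraction that reprojects the point explicitly before extracting (hypothetical URL).

library(terra)
r <- rast("/vsicurl/https://example.org/eu_landsat_summer_p50_30m.tif")   # hypothetical COG
p_ll <- vect(cbind(4.81, 52.33), crs = "EPSG:4326")    # the point in longitude/latitude
p <- project(p_ll, crs(r))                             # reproject into the layer's CRS
terra::extract(r, p)                                   # returns one row of band values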
I might, I might be blocked now way by the sobby. So I, I this layer I can access it through. I can access it here. So here it works. I can access it through the same. But if I want to open it in QG as he says, not existing. That's interesting. But okay, I will check why I cannot get the overlay. But what is important is this thing I want to show you if I put multiple, if I put here multiple points. I can create, I don't know 10 points, it could be 1000 points. I could run this overlay operation in parallel. So I will take a 10 course, I don't have 10 cores but let's say I put, or maybe I have 10 cores. So I can do in parallel and I can have a list list of geotifs, which is the list I use here. So here's the list. Now here's the list. So here's the list of geotifs. And then list of points. So I have a list of tips and list of points, and I can run this in parallel, but says on Windows it doesn't work. So, so I have to use one, one, but you can imagine if I have run this in the in the one to machine that I can paralyze. And now it's running in parallel. Well, not in parallel this time but you can see it's downloading. And I can get values for all the pixels for all the rusters. It looks like it has to download the whole ties I think it has to get the tiles to do on overlay. So it does take a bit of moment when I don't do in parallel, when I do this and CL apply, but not in parallel. And you know what, let me let me run this. I'm experiencing some problems. So let me just run this in a buntu to see the difference. That will be interesting to compare. So, again, the same thing. Let's run it now so in a buntu. We have enough memory s I want to take a lot of RAM here also so I slowly start running out of RAM so be careful when it runs out of RAM then it suffocates everything. So I think I have to stop. I will save this one. And I will turn it off. So I can save some Ram. Yes. So I'm just just on the edge, but let me try to do now in our studio. So here's the studio and I run the same code with the let's see. So I use the same code. I have many multiple layers. This one works. Now I have multiple layers and I have multiple points. And now I run it using, let's say, 1010 course. Okay, so now we can compare the. We can compare this is now so running here see this is parallel. And the other one it took some time to run. I put 10 threads but actually I have only six. So I should have reduced. And you see, does it doesn't run in parallel. So I think it takes more time I don't remember taking so much time so I think this one went through. And this one yeah this one is done now. So let me just try for put six, because maybe even slowly down that I put more course. Let me try six. So I run the overlay in parallel for multiple points multiple layers. And if you have like 1000 points. As long as the points they fall in the same. So that really spread equally around it will take a lot of time but if there's class some clustering of points that speeds it up because it only has to get this. It has to get these tiles there where is the values. And then it will take. And then you can do it again. But you could potentially you could do overlay without need to download with the many layers, but it does take I see it takes about about 20 seconds. And then we get this thing. We have values for the 10 points. But I downloaded lots of layers. Yeah, I have 12. I have a 12 layers, and I downloaded 10 points, randomly spread, and you see the values, the difference in values. 
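A stripped-down version of what happens here is one extraction per layer, farmed out over cores with the parallel package; the URLs and point coordinates below are made up, and as noted above mc.cores has to stay at 1 on Windows.

library(terra)
library(parallel)
cog_urls <- paste0("/vsicurl/https://example.org/layer_", 1:12, ".tif")   # hypothetical layer list
pts <- cbind(x = runif(10, 4.0e6, 4.2e6), y = runif(10, 3.1e6, 3.3e6))    # 10 points in the layer CRS
over_one <- function(u) terra::extract(rast(u), pts)    # each worker reads only the needed blocks
res <- mclapply(cog_urls, over_one, mc.cores = 6)       # use mc.cores = 1 on Windows
ov  <- do.call(cbind, res)                              # one block of columns per layer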
Okay, so this is the magic of Cloud-Optimized GeoTIFFs: you can do the overlay in parallel, and you can program with the data without downloading all of it. We'll do another example, for Ethiopia, so for now let me just put this on standby and go back here. This example I also share with you; there is this data cube tutorial, a really nice one, and in it we do multiple things. We also use Cloud-Optimized GeoTIFFs: we specify where the multiple layers are, download them, and load them into R. We can also crop for a farm; we can digitize a farm in Google Earth or something, load it from a KML file, and then crop the Cloud-Optimized GeoTIFFs just for that one farm. And for Ethiopia we can go and calculate the cropland distribution, which is the one I will run. Then I show you that I fit a machine learning model: I take all the pixels and fit a ranger model, and I get a model with an R-squared of about 0.92. You can look at the variable importance, what's most important, and I can then predict the cropland distribution for the future assuming that the rainfall is going to fall, let's say by 20%. I can make a prediction of what will happen with the cropland; of course it drops, it's like desertification, the cropland will shrink. Then you can compare actual cropland to potential cropland using the random forest. So let me run that. I think it's a nice case study; I'll run it step by step, and when you have time you can zoom in and do similar things on your own dataset. So this is the part on building a spatial prediction model. I start by taking the rnaturalearth package to get Ethiopia, because I need the boundaries of Ethiopia and I don't have them locally; I can just get them from Natural Earth. Then I create a raster with the terra package's rast function: for the Ethiopian bounding box I make a raster at one kilometer resolution. And then I specify the Cloud-Optimized GeoTIFFs that I'm interested in, which are all in our repository. Let me see, I just forgot to get that rnaturalearth package. Here's the package, and I need Ethiopia. There's an extra package needed; the Python people don't like this, that you have to go and fix things manually, so I need to install packages. I think this one is available from CRAN; I'm going to try it, but it looks like it needs another package. Not nice. Okay, I maybe made a mistake running this from Windows. So let me run this from Ubuntu instead; if you use Windows and the packages are not installed, you just lose time, so let's do this in Ubuntu. I took the Windows machine on purpose so I could teach on it, because most of you use Windows, but the problem is I didn't check the packages on Windows. I forgot that; in Ubuntu it's just a few lines and poof, it works.
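A minimal sketch of that first step, country boundary plus a roughly 1 km template raster; the resolution in degrees and the WGS84 CRS are assumptions, and the actual notebook may use a different grid.

library(rnaturalearth)
library(terra)
eth  <- vect(ne_countries(country = "Ethiopia", returnclass = "sf"))
tmpl <- rast(ext(eth), resolution = 1/120, crs = "EPSG:4326")   # ~1 km cells over the bounding box
eth_mask <- rasterize(eth, tmpl, field = 1)    # 1 inside Ethiopia, NA outside (the "mask" layer)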
Okay, we're here. Let me start a new project; I'll just put it under the home directory. So here we just copied the whole command line, and we're making an R Markdown. Now that I'm switching to Ubuntu I can get the packages that I was missing. Okay, finally. So I need to load terra, and I need to load rgdal. I forgot to define the projection, so let me do it again; sorry for some delays, but I needed to project this. Okay, so here we create the raster, and now it's 1,200 by 1,600 pixels; that's the size of Ethiopia. Then we get all the Cloud-Optimized GeoTIFFs, and I convert here to a raster. Now I'm trying to do this with GDAL. I put it under data; I can make a folder under documents, Africa, so I need a data folder, and I can see that it's working. There are a lot of warnings, but I take the African GeoTIFFs and I aggregate them to one kilometer. It takes a bit of time: I'm processing from the Cloud-Optimized GeoTIFFs, which are at 30 meters, but I don't want 30 meters, I want one kilometer, so I aggregate. There are a lot of these warning messages from GDAL; I will have to read about what exactly it's saying, I think it's looking for some metadata, but it's working, so I'm resampling. So basically I get all the layers, but just for the Ethiopian bounding box, and I can also reproject them while I run gdalwarp. Once that's finished, I can read everything into R; I read all the layers, and now I have all the data locally in my R session, about one million pixels and 10 layers. The first layer is just a mask, whether it's Ethiopia or not, and the rest are different environmental layers. The next thing I can do is just a plot. I get these plots, and this is the cropland, the distribution of cropland in percent based on a global product produced at 30 meters, but I have it now at one kilometer. You see it's very funny that this one province in Ethiopia, probably a mountain chain, has no cropland at all, and it matches the provincial boundary exactly; the rest is spread-out cropland. Then, once we have all this data locally, I load ranger and define the model: cropland is a function of soils, rainfall, and elevation, so I can put in all these layers. I say cropland is a function of slope, elevation, soil pH, the cation exchange capacity of the soil, and the precipitation of December, January, March, and September. Yes, a very simple model. Then, when I fit the model, I have to take the complete cases, and I fit it; you see it runs in parallel. It takes a bit of time, but the model is trained, and as you see, the model is highly significant. And then we can play with that model.
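The aggregation-and-read step just described can be sketched roughly like this, continuing from the template raster tmpl in the previous sketch; the URL is a placeholder and the "average" resampling is an assumption about what the notebook asks gdalwarp to do.

library(terra)
cog   <- rast("/vsicurl/https://example.org/africa_covariate_30m.tif")   # hypothetical 30 m layer
r1km  <- project(cog, tmpl, method = "average")    # warp and aggregate onto the 1 km template
writeRaster(r1km, "data/africa_covariate_1km.tif", overwrite = TRUE)
stack1km <- rast(list.files("data", pattern = "_1km.tif$", full.names = TRUE))   # read layers back
regm <- as.data.frame(stack1km, xy = TRUE, na.rm = TRUE)    # per-pixel regression matrix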
We can predict assuming for example that I don't know October rainfall drops 20%. What will happen with the cropland. So we can once we build that model and now this model is uses all the pixel so there's not a dozen samples. This is a fully adjusted model. So it's a fully takes all the combinations. There's nothing doesn't leave anything on on accident and you see the ranger actually gives you estimate, it will take two minutes to feed this. So it takes time to feed because I use all the pixels. So it's possible but I have a 1.2 million pixels using some 10 layers. So it takes time to fit a random forest. And here I don't have to do any blocking right because it's a spatial exhaustive. There's no clustering, you know, the older pixels are spread exactly same. Same distance. And so it will take some time. And then I can go and do predictions and that's what I showed you here that I basically then say let me predict now distribution of cropland if I changed, for example, the rainfall by 30% less. So you can compare the next to each other you can see, you can see the cropland currently and cropland with 30% rainfall for one month, or maybe four months. And you can see that these are there will be a drop especially here this areas. Now lots of cropland, but here there will be less. There will still be some cropland but it looks it for this area has more effect than, for example, this area. So, where it's not cropped, how much predicts the range of the crop? Percent, percent inside one kilometer pixel, how much cropland. That's what it predicts. So obviously, you know, if it's zero it means, yeah, there's no cropland and 100% means whole pixel is cropland. So, basically it's a way to also see the effects of global warming, let's say, or global change, you know, but this is just hypothetical I put 30% less rainfall. It will be now nice to pick up actually predicted rainfall in 10 years predicted temperature, you know, and build this model with more covariates, but it's possible. So, you could, you could do this random forest exhaustively on all pixels. And you could do a some ensemble machine learning also pixels you don't have to do blocking them because they're all nicely distributed, but it's also possible to do. You can build these models and you see I made, I'm sorry I had some problems I actually I wanted to show in the windows and that was a mistake because I missed the packages and things, because I don't use this laptop on windows to do computing so soon as I switch back to move to it, run all smooth. So, just to show you with the cloud opt-in is your teeth. You know, once you have a serious data, you can do local analysis and you don't have to download data you just use the data package so you can do cropping you can do, you can download just a piece of data you can do overland parallel. So you can do lots of stuff skies the limit so and, and it allows you to do. You can do test analysis and then it's okay I did a small area and I want to be a real and then you can again download just expand the bounding box. And this is all thanks to the cloud opt-in is your teeth. And this thing I think I have few more slides I want to show you. This was the, this is this was the example. There's a one more thing to talk about that stack. So if you're new to cloud and you if you're also new to stack stack is the way you index cloud opt-in is your teeth. This is at least what the, the OSG community the people that make GDAL they recommend that you stack. And you can basically add some metadata. 
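Before moving on to STAC: the fit-and-scenario step just described might look roughly like this, continuing from the hypothetical regression matrix regm above (the covariate names are made up, not the actual layer names in the notebook).

library(ranger)
regm <- regm[complete.cases(regm), ]
m <- ranger(cropland ~ slope + elevation + soil_ph + cec +
              prec_jan + prec_mar + prec_sep + prec_dec,
            data = regm, num.threads = parallel::detectCores())
m    # prints the out-of-bag prediction error / R-squared
# scenario: reduce one rainfall covariate by 30%, keep everything else fixed
regm_s <- regm
regm_s$prec_sep <- regm_s$prec_sep * 0.7
regm_s$cropland_pred <- predict(m, data = regm_s)$predictions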
And then once you make a STAC for your cloud solution for data, your cloud-optimized solution for geospatial data, there is a STAC index. You can send them an email saying you have data, but I think there's also a way to register yourself, and then your dataset becomes listed on this STAC index; there are many datasets there. Once it's listed, people can use it, and there's also an R package for STAC. Then they can say, in more natural language, "I need data for Africa," and start programming directly: the same things I was doing, but more compact. And in the STAC you have to put the metadata. So, for example, this layer I was showing you is a bit abstract; when you look at the model formula, we use these long layer names, but you will still wonder, what is this layer, I don't understand it. In the STAC you look at the metadata, and it tells you what the target variable is, what coordinate system is used, who made the data, what it's for; you get all these things in the STAC, because it's basically a metadata exchange service. And then, when you program the further analysis, you know exactly which data you're using. I know it by heart because I made this data, but you haven't made it, so you wouldn't know, and that's why it's important. That way you can do predictive modeling, project into the future, and it's also a very interesting way to use all the pixels. Yes, but if the model is fitted with points, you have to overlay the points. And with this model I showed you, be careful if you have larger datasets. I had about 1.2 million pixels, and when I fit the model it took something like three minutes. Imagine if I took the whole of Africa at one kilometer: I could probably still fit a model, but I would need really high-performance computing, and the model would be huge. The model I got here for cropland, let me check the object size, is 1.4 gigabytes. Just that model, 1.4 gigabytes; just to exchange it or write it to disk takes ten or twenty seconds. So that's the thing with machine learning: can I make this model smaller, can I compress it? Yes, you can do feature selection, or you could randomly subset the points, say take 10% of the pixels, and compare the models to see whether that damages the model. That's something that still needs to be explored. Let me see, there are some questions in the chat: what's the advantage of Cloud-Optimized GeoTIFFs over Google Earth Engine? Well, when data is in the Google Earth Engine catalog, it's also kind of available for computing, but the problem is that you have to do authentication: in Google Earth Engine you cannot use it without registering an API key and authenticating. Now the question is whether that's open data. If you need to do authentication and go into some infrastructure, and without authentication you cannot access the data, then it's restricted to people that do the authentication and have the Google API.
And then the question is whether that's open data. I think officially it's not open data; open data is data which has unrestricted access. With our data there's no authentication and no restriction: you just program with it, it's fully unrestricted access. So the European data cube we made, the Africa data, the global data: it's fully unrestricted access. You only need to have a computer to compute, but you don't have to authenticate, whereas with Google Earth Engine you do have to authenticate.
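As a footnote to the STAC point above, there is indeed an R client for STAC; a minimal, hypothetical query against a STAC API endpoint (the URL and collection name are placeholders) could look like this.

library(rstac)
s  <- stac("https://example.org/stac")    # hypothetical STAC API endpoint
it <- get_request(stac_search(s, collections = "landsat-ard",
                              bbox = c(33, 3, 48, 15),            # rough Ethiopian bounding box
                              datetime = "2018-01-01/2018-12-31"))
it   # item metadata, including links to the underlying COG assets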
|
Software requirements: opengeohub/r-geo docker image (R, rgdal, terra, mlr3), QGIS, Google Earth Pro This tutorial is an introduction to the Cloud-Optimized GeoTIFFs, exploring scalable spatial databases, accessing COG files using rgdal and terra packages in R, making functions that run on COG files, spatial overlay with COG files (in parallel), and running spatial analysis and plotting results.
|
10.5446/55221 (DOI)
|
From the program, we are using this virtual box, so I'll be using this virtual box. Don't get confused. I can switch, in my Windows 10 system, I can switch to Ubuntu, so I just press the button and now I'm on Ubuntu. If I do Ctrl-F, then it looks like I'm on Ubuntu computer, but it's running on my local machine. It's really magical. We will be doing this virtual box. Why we do virtual box? Because we want that you can all access exactly the same and we don't lose any time on installing downloading, setting up. Also have in mind that these slides you can make available offline. If I shared all the slides, you can find them in the Metamoss channel. I shared all the slides and you can access the slides. If you want to save bandwidth, then just go and make them available offline. That will be something like here. These ones we are starting. I will just go File, make available offline. This way you save bandwidth. If you don't make it, then switching between the slides can cause bandwidth conjuction. That's why you should try to do it, save things offline. I can start with this first block. The block is one and a half hour. We will leave every block. We will leave about 15 minutes, which we will stop doing YouTube broadcast and then we focus more on your questions. In the meantime, we will answer questions from people that are online. We will have every day, I think we will have maybe 50 to 100 people following online depending on the block. Some people in different times also probably the way to connect later on. Let me start the presentation here. I can just use like this, I think. We have today and tomorrow, I will be the teacher. I will have some help from Carmela. He is not with us now, but Carmela will help with the special temporal ensemble machine learning, especially because he is working with the real data. That's the one in the afternoon. Then also we will do one more technical session working with cloud alternatives. Something maybe new to many of you. We will do quite the gentle steps. We will guide you step by step how you use the cloud alternative and again, especially with the intent to show you how to do it in R. The guys upstairs, they will do practically similar sessions, but more focus on Python. Python and R, of course, are not one-to-one. There's many functionality that is really different in Python and from R. There's many packages that exist only in R. There's many packages that in functionality exist only in Python. Some things computationally, it's more suited for Python. Some things for plotting and statistics is more suited for R. When people ask, okay, which one should I learn? Usual question, my answer is both. It's very simple. You can, we can maybe show that, but in R studio, today you have also functionality to connect to, so if I go to R studio, there is a functionality also to connect to reticulate to connect to the using, so here if I go under, let me see, send the tools, I think, so you can connect to the, we reticulate to Python and then you can do Python programming also. So that's about Python and R going back. So this morning, first block, it's a, let's say, gentle introduction to spatial temporal data, specific to the data. Some things maybe aware of, some things maybe here for the first time. I'm hoping most of things you will hear in this block for the first time. And then after that, after the break, we'll do some modeling exercises and then slowly I will start jumping from R to Python, sorry, from R to slides. 
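As an aside, the reticulate bridge mentioned here is just an R package; a tiny sketch of calling Python from an R session, assuming a Python installation with numpy is available on the machine.

library(reticulate)
np <- import("numpy")            # load a Python module into the R session
np$mean(np$array(c(1, 5, 9)))    # returns 5, computed by numpy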
So you can see me actually doing programming and working with real data. And then this morning, basically it's a kind of warm up for the real session, which is the spatial temporal ensemble machine learning. So and that's something I think possibly new for you. And this is going to be really hands on. But then we just explain you how it's done. And some people tell me, you know, you think like it's simple, but actually this ensemble machine, if you do it for the first time, it's, there's a lot of components and it's really highly complex. So it requires like really, I think gentle introduction and that you feel the data and the code. Everything I will do, it's basically in this notebook. So there's this notebook. So it's here. And you see this notebook is under our training. And I did share it with you over the last week. I don't know if you managed to look at it, but this one, it has all the, it has all the examples and even has text. So you can really follow it. It's really like a computational notebook. So you can really follow it. And at the beginning, there's a bit of introduction, but then very quickly you have, you have functions and you run the functions. And then you can, let's say you can test your own thing. So we will be moving through this tutorial just here in our studio. And we will be looking at different things. And so you see, I can move to some section. I can run a function, I source the function and then I can, I got some warning message. I don't know, but I can compute something and they get the number. So that's this computational notebook. And that's what the guys use in Python upstairs, this Jupyter, whatever. So it's basically looks the same. And we can scroll through the different sections. And then we will test things. And then you can, you can modify that code, by the way. The art studio, you can modify it. You can play with it. It's no problem. If you do something wrong, there is a shortcut. Let me see. So you can do git stash and this will remove all the changes. So it's very, very nice to know the git command. So I can just do git stash. If I go here, sorry. Let's see. So if I do this, I have to get to the folder though first. So that's somewhere under, let me see where am I? I need to remind myself. So that's under the home odsc work directory. Then odsc work directory and then code and then odsc workshop. So I first change here and then I can do this git stash. And this git stash will basically remove all the changes. It will leave all the new files you make. But if you mess up something, I don't know what they did. And then you just do git stash and that will reward it into the original. Okay. So you see I have this cheat sheet. I can share this with you, but more or less it's just a basic survival. If something happens, we have to restart the docker. So that's also survival. But I can share that with you. So that's the plan for the, plan for these days is we use this tutorial and then we will go step by step through different steps. And then you can ask questions and we can slow down and we can zoom in into things. Also, I shared, as I said, I shared the slides with you. So you don't have to write anything down. You don't have to make screenshots. Remember, this is also video recorded. Everything I say I do the code. You can follow after the course. So you can really be relaxed and just really focus on the course and ask questions to see if you can reproduce something on your laptop. Why we like that people use laptops? 
Well, it's less containers for us and obviously you can use different operating systems. But also we wanted to learn how to use software. So as soon as you go back to your office and to your work that you can continue using the software. Also, why we wanted to use laptops is you can interact with us. You can ask questions and the metermost is very important because you can do a screenshot if you have a problem. I say I have a bug or something happening. You can send a screenshot and then we can try to help you. And as you can see, there were many people asking questions about the virtual box and there was screenshots people send. I have this problem. I have this problem. So people were sending. So we were debugging already and helping people. And I hope now you all have that virtual box running. By the way, I'm also new to virtual box. I didn't use it before. You would just connect to my workstation at the office using a no machine or team viewer. But now Landro convince me it is the best way. It's very elegant and you just it's kind of that's why it's called box. So you have everything out of box. You have all the software. Everything is customized. We all see the same and you can easily just turn it off and you can put it in standby. I don't know if you tested it. You put it in standby and and then when you start the machine, you get exactly the same thing where you left. So it's really magical. Okay, so that's the matter most anybody not connected or the art development group, please connect. If you don't see the art development group, you have to go to this plus sign and you say you have to say bros channels and then then pick up the channels you want to see. If you are if you're a physical participant, if you have questions about logistics here in the week, then you should ask it in this group. Don't mix it if it's a general question about methods problems with code. Then you ask in the town square or you ask in our dev. But if you have a question specifically, I have to with Wi-Fi. I have the then you ask it here. So that's and it's a very important to use matter most because it's a it gives a track of everything and you can open lots of discussion groups and each everything you post basically becomes a little discussion group. And if you reply on something as you see here, when you reply, you can see all the the thread. So that's also nice about the matter most now that you can go back to the same issue raised and then you can respond there. And usually if we responded to something that you ask, we will not respond back, but we are going to just send you the link and you can link to any thread or any post in a matter most. It's a it's kind of like in Github and so so you can link any to any post and this way this way we can debug but we cannot if we debug a problem once we don't want to debug it two times. I hope you understand that because that's a waste of time. Okay, everybody looks very serious. There's no exam by the way this course so you can relax. This is a and it's also special times. This is Corona pandemic times and we are really doing our best and improvising. We had to teach ourselves how to do parallel broadcast and you know how to use all the zoom functionality we haven't used it before. And also there were many cancellations by the way for physical attendance. 25 people cancelled so really high cancellation rate. 
But we had the last year exactly here at week we had a summer school and there were more people almost two times more people last year when there was no vaccinations and more chaos. And now that you would think okay there's vaccinations but the delta unfortunately arrived and that made it a bit complicated. But nevertheless we'll keep it in we'll keep everybody safe of course and so we will do our best to follow the Corona rules of course and and we think it's going to be successful event. Nevertheless, when you if you're new to R it's not I don't I cannot promise you you're going to leave this place and you say I went to this training course two days and I learned I'm some ensemble machine learning and R never used R before that will be very difficult. This is quite advanced topic. So you know there are ways to teach somebody in one day things with the right motivation. But I mean imagine this person here you know trying to satisfy Kim you and if he fails something of course it's not a good probably for him. But there's very difficult to teach something very advanced in one day. So what I recommend is that you kind of if you are new to many things open your mind and just make sure you you know take the copies of the slides and remember there's going to be a video published also so you can go back to some of the discussion points and you can see me coding and you can you can teach yourself after the course you can go back and try to learn things. The reason why we are here of course is because we have we have a European funded project which is called actual products called geoharmonizer. But that's a two technical names so we renamed it to open data science EU. And so that's the reason why we have this because we had some budget to organize this type of workshops and to do core development sessions. So this falls under the one of the core development sessions because we also expect to get feedback from you. But so this is our project and you can read on the bottom you see that it's been funded. It used to be called in there innovation networking agency. Now it's called there but you can read about this project here under the link and it's mentioned on the bottom of the page. I can see that some people now connected from US which is really crazy because I think it was probably like 4am or something. But they see lots of people connected now following around Europe and there's some people following looks like from India. So you can read more about the project and we pull posts here if you have the news then we post some articles how we did processing and but they're very short. So we are hoping we're writing our articles to explain how we did the process. So that's thanks to this. And as you see in that project we made this data portal which is under maps open data science. So if I actually here I can go. So under the maps open open data science.eu we made this what we call a European environmental data cube. And this is the data cube here. So we made this data portal and and as you see we are here in the lens and we can zoom in. So let's say we are now here. And so what we did that we created this land cover predictions using ensemble machine learning. So the things we're teaching you this is what we did but we did it with the 7 million points and with about 400 covariates or the regression classification matrix is only on the classification matrix about 3 gigabytes and so we computed all these things and we we also put we can show you we put the points here. 
And so we trained with 7 million points and we made the predictions and you can go and scroll through time and you can see how the land cover changes. And what you can see here is that this for example, Fenendal like this area you can see that there's a growth. So this all these places are new. And we did the proper I think the land cover mapping because we mapped every class. So if you if you look at urban we we mapped every class is a probability. So you can see also probability of each class mapped. And so if I click somewhere here, I will get it will do a query on the geotifs and it will get the values of that probability changing through time. So we can see in which time they started building our estimate. Yes. And also when I say we do things properly, we also put the uncertainty. So if I if I turn this off, we put also the uncertainty and you can see uncertainty per pixel. We estimate which of the pixels if it's a brighter yellow, it means there's a high uncertainty and it will pop up some places you will see a high uncertainty. And it means that you should be careful using that data. So I think it's a let's say a proper framework to do land cover mapping and we are proud that we did it using ensemble machine learning. In this case, this this predictions and the modeling was done in Python actually. But we also have the vegetation map that Carmelo and me went to show this was all done in R. So once you choose our Python, it's not easy to mix, but some projects we chose to use our or some projects we chose to use Python. So yeah, that's this project. And that's what we're going to talk about. And we're very excited that Carmelo, who is PhD student at Open Geohub and working in university, he's going to show you in the afternoon, really, with a really large data set, we're going to show you how we compute space time distribution of forestry species. And we will show you that it really matches the ground data. We also have about one million points, I think training points. So we are very excited to show you that results. This is also first time we're showing to public this results. Okay, space time data. So I personally work a lot with the spatial data, and I did a lot of predictive mapping projects. And I even produce some global data sets, European data sets, Africa, United States, we didn't want some awards for some publications on doing predictive mapping. But I also got interested over the year 2012, I knew that it will be very interesting to do space time, and then I published my first paper, I think on space time in 2012, space time prediction. And so we will use that paper. It's one of the case studies is the mapping the daily temperatures. And then later on somewhere in 2018, 2019, I wanted to switch from doing spatial to space time. And this project is a proof of concept that you could do all the mapping projects you can switch to space time as long as you have enough data. So when you look at the land cover mapping, we don't map a land cover of year 2020, we map a land cover of last 20 years, or maybe we can map 30 years. But so we switch from doing a spatial modeling to space time. But when you look at the space time data, it's not just like if you have a space, you have a 2D, when you look at the space time data, it's not that you just got the third dimension, so you got 2D, 3D, but you have a 2D plus time. And it's a special dimension. And so what's special about it? 
Well, number one is, for example, in a spatial dimension, XY dimension, if you look at left and right, then the left can impact the right and right can impact left. Yeah, it can be connected. There can be a, let's say, the diffusion movement in the space. But when you look at the time, the future cannot impact past, right? So you have the causality, it's only one direction. Then also with the time data, I will show you there's a whole field, there's a whole field of statistics, which is called time series analysis, which looks at the behavior of variables through time. And it's a way more complex, there's just changes of variables through space. So we are going to talk about that. Time dimension, it's easy to plot, remember that, but it's difficult to model, basically. And then what is especially very difficult is to predict the future. And that's what some people in time series analysis, when they blindly use the models, they say, okay, now I predict the future. But they predict the future can be very poor. And so it's very difficult, especially to predict the future. You can predict inside the observed space and time. So if we predict land cover for the last 20 years, then we can do lots of testing. But if I have to predict a land cover for 2022, you understand, it's going to be very difficult because you need something on the level of magic really to do it. But you could, for some variables, you get very close. So you can actually predict future also. If the variables are kind of like the train component, the systematic component, it's let's say you would screw them to be like dominant, then you can also predict some models you can extrapolate in the future. So we're going to talk a bit about time series analysis, visualization of data. And I will point you to many books and references. So you can maybe some things are new, maybe some things you know, but I will point you. Any data that has a so-called special temporal reference is a special temporal data. So obviously you need the coordinates. If you talk about geographical data, the coordinates are a lot to do. Then what is also very useful to know is the location accuracy. So that tells you about this x, y coordinates, whether they should, they relate to something, you know, which is plus minus 100 meter, 500 meter, I don't know. But also there is the size of the block you sample. You know, you could also do block sampling. And you have a, for example, for soils that do block sampling. So they say this is the value that relates to a volume of a soil. It doesn't relate to a point. And then you have the height above surface, the elevation, because this is elevation as of terrain. But you can also have, for example, temperature, the early temperature. Is it the surface temperature? Is it the two meter or is it the 100 meter? You know, so you can have a different vertical dimension. And that's this third dimension. And then we have the time. And time you represent actually with two columns. That's the beginning of the measurement and end of the measurement. I will show you that that's very important because time has also so-called temporal support. So some things we measure like the daily values, monthly values, value at the second. So it also, it's very important. And then you get the spatial temporal data. And once you know, once you know this coordinates of the data, once it's a spatial temporal reference, then you can import it and do spatial temporal modeling. 
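To make the idea of a spatio-temporal reference a bit more concrete, here is a minimal sketch in R of how one observation with coordinates, a vertical position and a begin/end time could be organized and turned into a space-time object. The column names and the example values are just assumptions for illustration; they are not part of the course data.

obs <- data.frame(
  x = 16.0, y = 45.8,               # coordinates (longitude, latitude)
  loc_accuracy = 30,                # location accuracy in meters
  altitude = 2,                     # height above the surface in meters (e.g. 2 m air temperature)
  begin = as.POSIXct("2006-08-15"), # start of the measurement
  end   = as.POSIXct("2006-08-16"), # end of the measurement (daily temporal support)
  tmean = 24.2                      # the measured value
)
library(sp); library(spacetime)
pts <- SpatialPoints(obs[, c("x", "y")],
                     proj4string = CRS("+proj=longlat +datum=WGS84"))
st_obs <- STIDF(pts, time = obs$begin, data = obs["tmean"], endTime = obs$end)

Once the data carry such a reference, they can be imported and used for space-time overlay and modeling, which is exactly what the next steps do.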
So this is actually, whether it said time is just not a spatial dimension. And there are specific methods that develop for space-time data. And when I started in R, when I was using R, I think in 2013, I think then or 2012 already, Edzer and the colleagues, they made this space-time package and it was really a great solution because at that time there was only spatial data, SP. And then he says, okay, he looked at some problems in the data and then he made this space-time package. I don't know if you ever used it, but when you read some of the tutorial, then you actually becomes a bit complex because there's this full spatial stack and irregular data stack. So he made these classes of data. So it's a quite complex to understand. It's a bit abstract. But in principle, the most important is that you have correctly for data, you know, the space-time reference. And then if you have that, then you can import it in R, organize as a space-time class. And then we will show you with some data examples, you can plot the data, you can do a space-time overlay, you can do a space-time modeling. And this is this paper also from Edzer called Space-time. It's published in the Journal of Statistical Software. Have you used Space-time? Do you anybody use Space-time completely new? It's also magical when you think about this. So the space-time data imagine you can have a space-time time series of satellite images and you load them into R. And if you organize them as you say, this is the class space-time regular data frame of space-time full data stack. And then you prepare the point data and this can be irregular. And then you say over the points versus the space-time time series, and it will do a space-time overlay. And then you match the points with the cells in the space-time. And so that's really magical because you then do it in one line. The problem is when you work with the large data, you cannot load it in R. Your RAM is not going to handle it. The R is not so, let's say we are not so efficient with putting things in RAM. So then you have to maybe this space-time package wouldn't work. And I don't use it by the way because we work now exclusively with large data sets. So I will show you we're going to use the Terra extract, which will do then the space-time overlay. But my code, unfortunately, maybe not so easy because this space-time overlay, I will show you just very quickly how the code looks like. So it's a bit longer code. Don't get scared and it runs in parallel. So this is the space-time overlay for meteorological data. We have a time series of modus length surface temperatures, and we'll do a space-time overlay. And so I made this code. So it's not in a package yet, but I should put it in a package. And you see this code will run in parallel. So it runs in parallel. And I made a little function called extractsd, extract space-time. And this thing is a universal function. So this function, if you use this code with your data, it will, I promise, it's the fastest way you can overlay a large amount of points over large stacks of geotifs. And they don't even have to be stacked to exactly the same thing. They can be loose structures. So that's the space-time overlay. And then when you do after the overlay, you will get this so-called space-time regression matrix, and then you can do modeling. So I'm going to do this code today. I'm going to show you how it works. And we're going to go step by step. And then I will show you the data structure, what comes out. 
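Since the full overlay script is a bit long, here is only a stripped-down sketch of the idea. It assumes a data.frame pnts with columns x, y and date (in the same coordinate system as the rasters) and a character vector tifs of GeoTIFF paths whose file names contain the year they refer to; all of these names are assumptions, and the real function in the course materials has more options and error handling.

library(terra); library(parallel)
extract_st <- function(tif, pnts) {
  r   <- rast(tif)
  yr  <- regmatches(tif, regexpr("[0-9]{4}", tif))   # year encoded in the file name
  sel <- format(pnts$date, "%Y") == yr               # keep only points from that year
  if (!any(sel)) return(NULL)
  pv  <- vect(pnts[sel, ], geom = c("x", "y"), crs = crs(r))
  v   <- extract(r, pv)[, -1, drop = FALSE]          # drop the ID column
  names(v) <- "value"
  cbind(pnts[sel, ], v)
}
res   <- mclapply(tifs, extract_st, pnts = pnts, mc.cores = 8)  # forking; use lapply on Windows
rm.st <- do.call(rbind, res)   # the space-time regression matrix

The main trick is simply that each worker opens only the rasters it needs and only the points that fall into that raster's time slice, so nothing large has to be held in RAM at once.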
I will also show you, when I do the computing, how it goes into parallel. We will watch it go parallel here, so that you get a feeling for what is happening behind the scenes. But again, the thing I showed you is built on top of the terra package by Robert Hijmans. That is where the quick overlay comes from: terra's extract function, which my extract_st wrapper calls, is what allows such a fast overlay. It is actually magically fast: we overlay millions of points over something like one terabyte of data and it takes about two minutes. That is mainly because terra is programmed mostly in C++. C++ is another language you could consider, but that is a different story. Okay, going back to the slides. There is also a new book called Spatial Data Science. Have you heard of this book? This is the URL; the URL itself is rather cryptic, a bookdown hash, so do not ask me to read it out, but Edzer is the main author. They are writing it live: you can subscribe and you will get notifications every time they write something new, and you can follow the things they discuss. It is a live book: basically they write the book in front of everyone, and you can see all the mistakes they make and the fixes. It is a new age of writing science, and I think it is the right way, by the way. It is bookdown-based, so you can see all the code behind everything shown. You will see there is a chapter 13 on multivariate and spatio-temporal data. I do not know how much they have, but I think they only cover spatio-temporal geostatistics. So if you go to this book, which will be cutting edge, you will basically not find what we are going to teach you, or very little, because we do machine learning for space-time. So I think we have a lot of original material for you. But maybe they will add it at some point; as I said, it is an open book, so one day they might add a chapter on space-time machine learning for mapping. Okay, this slide just shows a space-time reference. If you use Google Earth or any data in KML, the Keyhole Markup Language, you can see what a space-time reference looks like. You see here I have a name; this is a station that we use in the first exercise. There is a measurement at some point, and the point has coordinates and it has a begin and end time. So all these theoretical concepts I am telling you about are implemented in real software. For me, as I said, space-time modeling and space-time analysis are now mainstream, and any GIS where I cannot set up space-time is cumbersome; we cannot do our work, so we only use GIS where we can do space-time. One of the most popular GIS packages is QGIS. Does it support space-time? Do you know? Pablo, you don't think so? There is a plugin to work around it, but if you just start QGIS and say, here I have a time series, you are not going to get a slider, right? That is one of the reasons why we developed our own interface: you see that almost all of our data has a slider. There are only a few datasets without one; for about 95% of the data, as soon as you open the layer, there is a time slider.
And so when you go to QGS, if I open some project, you know, you say, well, I have a, this is this temperature data. I want to slide through time. You won't get it, but there is a plugin. There's a plugin for time series data. And this is not going to animate changes. You have to play with the animation. But if you move around with the mouse, you will get the curves. You will see the curves of the values. And that's also amazing. Somebody made this plugin. What's the name of the plugin? Do you remember Camilo? No, another value for the time series. I have to also remind myself, but there is a plugin which allows to time series, time series explorer. There's a couple of plugins, but I have to actually remember which one is the one that Landro uses upstairs. Maybe please check with the Landro. So, but there is a plugin and you, there's more plugins, of course, as I said, but there is a planning plugin to put it and then you can see how the values change if you have a time series of values, how does it change through time? At the moment, if I want to see changes to try and I have to do this, I have to click on and off. That's the only way that's QGS. People that are experts in QGS, if they watch this video, they will say, no, no, no, you are doing it the wrong way, you should do like this, but officially, it's not easy to just go, hey, here's the space time data, I want to see changes. It's not easy QGS, but it's possible to set up. So, when you have a moving object, and if you may be following me now, I might sketch it on the screen there. So, when you have like something like a bird, right, bird is very complex, maybe car is better. When you say car, then the car moves to some streets, right, and goes somewhere here, goes from location A to B. But when you put it in, let's say somewhere mapped up, it's just a point moving. But you could also put it as a trajectory. So you put that line of movement, and you can put the trajectory in space time, so you can make a space time Q, something like this. So that's will be X, Y, and then this was the time, and time goes this way. So you should plot it, then you have something like trajectory moving up in time. Have you ever seen this? So I was doing that for two years when I was in Amsterdam, I was plotting this trajectory, it's a bird movement. And it's very funny because you can then in this space time Q, you can see that how this overlaps between species through time, which you wouldn't be able to see if you just have a 2D view. So you can see the density, you can rotate the cube and you can see the density where they overlap in the movement, right. And so this is also another way to look at it. And so now you have this, so you have objects, and objects have trajectories. And then you have also fields. And so what are the fields, regions? So I wrote here what could be the fields, regions, the next slide. Basically, it can be one of the three things. It can be a quantity or density of material. So it's not a physical entity. So that's like molecules, you know, quantitative molecules, let's say, you know, some element in soil. Or if you think about pH of soil, this is quantity of elements. And so these are all density. Then it can be energy flux, energy flux, because energy also moves like if you look at the heat, it moves through space, right. It's not a physical entity. It's a movement of energy. So you can also model that. And then this is where it gets a bit confusing. You can also do modeling probability for currents. 
And so this bird movement, I could also represent it with probability of occurrence, and then represent it as a region. Okay. But actually in the original sense, it's entities, physical entities, but I could also model them as regions. And when you look at this map, I showed you basically on the open data science viewer, this one, it's basically urban fabric. It's a physical entity, so most of land cover is physical entities, but we model that as a probability of occurrence. So if you say theoretically speaking, how could we model this? Well, we could model every house as a unique entity. You understand? If you want the total land cover, GIS, then I will model every tree, every house, everything which is a physical entity, I will model it as a physical entity. And this is maybe in about 10 years, this is how the land cover maps will look like. Because maybe 10 years in the Netherlands already they inventory all the trees, I think. Did you know that if you're not from Netherlands, they have a data set you can access. They map the whole country one to 1000 now, I think, and they mapped basically every tree. And the next thing they would chip every tree. Yes, so they know how the tree is doing, and there's already a concept called tree talker. Have you heard of it? So they put a chip on a tree and they follow the health of the tree every 10 minutes, you get the automated sensor network. And so that will be a land cover in the future, that you wouldn't map the probabilities, but you will take any object that you consider as a land cover, land use, and then you will take it through time. And so when you come to GS like this, maybe if I'm still working in maybe 15 years, we go here's the Netherlands and here are the homes that disappeared and the new homes built and that are the right, that's how the land cover maps will look like. So but this is just talking about concepts. First distinguish that there are models, space time models of objects, physical entities, and there's models of quantities and quantities can be densities or masses of volumes, or it can be energy fluxes or it can be a probability of occurrence. This is the space time cube. This data set you see here, this is this data set we're going to use here, which is this meteorological stations for creation, and you will see that when we make on the end, we will make predictions and predictions will look something like this. Now I have to actually show you that in the, because I don't have it computed, but I have to show you here. So we will make these predictions and they will look something like this. So you will see we will do space time interpolation of daily temperatures, and we will predict for every day, we will predict how the daily temperature changes. And you see these are the stations, and these are the predictions. So this is the same data set. This is this data set is shown in a space time cube. And here is shown on a 2D map. Yeah, it's not easy to match, right? You'd like, well, no, it's not the same, but they are meteorological stations and you have the measurements every day, their measurements. You see some stations, they are not, they are no measurements. So what happened here? How come there are no measurements? They had a day off on the meteorological service. No, so what we did when we first started working with this data, so I said, let's do space time. So we say meteorological data space time is high quality, is lots of data. So let's do meteorological data. 
When we started working with this data, if you look at it, there are something like 160 stations with daily measurements. So you have 365 days times 160 stations; that is a lot. When you load it in R, when I started I thought, whoa, this is not going to work. It looks like it is only 160 points, but when you load all the space-time data, say three years of it, you are suddenly loading hundreds of thousands, even millions, of values. That is the difference between space-time and space: in space-time you get overloaded with data very quickly. So just to make this plot, I had to subset to maybe 2%. What you see here is roughly 2% of the data, because I could not plot it all in R and I needed the figure for the book and for the paper. I just randomly took out some points so you get an idea; for the sake of plotting I may be showing maybe 50,000 points. By the way, be careful when I say "point". In space-time modeling a point is every measurement in time, so one station alone can contribute thousands upon thousands of space-time points. In many GIS, when you say point, people mean just the station location. But in a space-time project, when you say "I have a problem with that point", you mean a problem with this coordinate at this time, on this day, not just with the station coordinates. So, back in 2012, when I said let's work with space-time, we called this 2D plus time: you have two spatial dimensions and you have time. We did not model changes above or below the surface, so it was 2D plus time. At that time I worked a lot with geostatistics, and one of the techniques I used a lot was regression kriging, or universal kriging. So I said, let's apply universal kriging to space-time data, and we managed to publish this paper. You see the predictions are quite smooth. We published that dataset and the predictions, but we did it using universal kriging, which means we basically only used linear statistics. What we are going to do here is the same thing, spatio-temporal interpolation, but using ensemble machine learning. So no kriging; we use ensemble machine learning to do the space-time interpolation, and you will see it is a bit more computational. When I say a bit more, I mean about 10,000 times more computational. But the gain is that you can possibly increase the mapping accuracy, and that is of course why we use it. That is the magic: if machine learning better represents some nonlinear phenomena and you can fine-tune the parameters, then you are motivated to switch to machine learning; that is usually why people switch. We are going to talk about that in the second and third blocks today. So this is what we did in 2012. Fast forward to 2020: last year we started doing this.
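As a small aside, just to put rough numbers on how quickly space-time points add up, here is a tiny sketch using the station count and the subsampling idea mentioned above; the dates are arbitrary.

n_stations <- 160
days       <- seq(as.Date("2006-01-01"), as.Date("2008-12-31"), by = "day")
st_points  <- expand.grid(station = seq_len(n_stations), date = days)
nrow(st_points)    # about 175,000 space-time points for three years of daily data
sub <- st_points[sample(nrow(st_points), round(0.02 * nrow(st_points))), ]  # ~2% subsample for plotting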
We prepared about 10 terabytes of cloud-optimized GeoTIFFs, a Landsat time series. We gap-filled it, removed all the artifacts, and prepared it as a 10-terabyte dataset. Then we do the space-time overlay. The space-time overlay takes about four hours, or longer. Carmelo, do you remember how long it took for the land cover, not the whole workflow, just the overlay? One day, so something like 20 hours. So it takes roughly 20 hours to overlay 7 million points over 10 terabytes of data. You might think, wow, a whole day, but if you can do it faster with any other method, we would very much like to hear from you. This is the fastest we could get it: about one day to overlay, run in parallel, so you basically leave it overnight. You can perhaps parallelize further, but eventually it just takes a long time. So we did the space-time overlay, added some other covariates, and then we fit the model: space-time overlay, fit the space-time machine learning, make the predictions, and that is what you see here. Now, people criticized that some houses appear and disappear: there are places where you have a house one year and no house the next, which is of course impossible. So yes, there is absolutely some noise. But we did some time-series analysis on it, and when you smooth out the noise the accuracy at the second and third class levels comes to about 80%, so it is comparable. If you look at CORINE land cover, currently the state-of-the-art European dataset, here is CORINE and here is what we did. CORINE also has a temporal dimension you can scroll through, but it is only every five years or so, I think, and the polygons are quite large. Our data is yearly and at 30-meter resolution, so much, much higher resolution, and we can update it. Every time they make a new CORINE they more or less have to collect all the data again, while we only fit a single model. Imagine: a single model to map land cover; it is mind-blowing. And for 2022 we do not need any point data at all; we only need the satellite images and the night-light images, and we can make a prediction of land cover. So that is also one of the reasons to use space-time. You can play with all these things in the portal; the swipe functionality is super useful for seeing the differences between datasets, and we will keep adding datasets. Also, if you run a European project and produce a pan-European dataset that matches the specifications we use, we are most happy to host it; the hosting is on us. Let me go further. This is also predictions, just one tile. When we do predictions we split the whole of Europe into tiles, I think about 8,000 tiles of 30 by 30 kilometers, and then we predict per tile. This is one tile in Sweden and you can see the changes. At the start of the project I was thinking, oh my God, land-cover change over 20 years in Europe, maybe we won't see so many changes, maybe it will be difficult to see them.
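Before looking at what came out, here is a heavily simplified sketch of what "fit the space-time machine learning" means in code. It assumes the overlay produced a regression matrix rm.st with a target column tmean and a set of covariate columns; these names are assumptions, and a real ensemble, as covered later in the course, would use stacking with out-of-fold predictions rather than a plain average of two learners.

library(ranger)
covs  <- setdiff(names(rm.st), c("x", "y", "date"))        # assumed layout of the matrix
rm.cc <- rm.st[complete.cases(rm.st[, covs]), covs]
m.rf  <- ranger(tmean ~ ., data = rm.cc, num.trees = 200)  # learner 1: random forest
m.lm  <- lm(tmean ~ ., data = rm.cc)                       # learner 2: multiple linear regression
pred  <- (predict(m.rf, data = rm.cc)$predictions +
          predict(m.lm, newdata = rm.cc)) / 2              # naive ensemble: average of the two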
Then when we got the results, we were surprised actually, lots of Europe is very dynamic. First you have urbanization happening in Sweden. There's a lot of, I think forest management is more dynamic than in Southern Europe. So we saw a lot of, a lot of changes. And now we're actually very excited because we would like to go back. We will go not back to the future. We would like to go back to the past and we would like to map it up to 1990s so that you can even see what was the history of Europe, LandCover wise and environmental data wise going from 1990, even 1985 if possible up to today. All these things I'm telling you, by the way, there is a paper and you can read it. It's not, it's still in review. So it's not accepted yet, so use it with a grain of salt, but more or less it's all described here and you can read. And if you want to cite this, you can, you can over the site. So one thing worth learning is that, and noting is that when you have a 2D data, you know, if you do geostatistics, you have a volume of three parameters and you have NS locations and SS was like station or space. When you have the 3D data, you have a depth, then you have four, when you do 2D plus time, then you have many more measurements. They go exponentially up. And if you do 3D plus time, then it's mind blowing. Then you have not only changes to time, but also true to vertical dimension. And then you have literally can have like a billion, so observations. And there is in our tutorial, there is a one data set also super happy. We published that paper, it's called Cook Farm data set. And this is a four dimensional GIS. So you have measurements with the automated sensor network where the sensors on different depth in soil. And you can do predictions also in 4D. And so I will show you also the here we use this altitude. So we use a depth also as a covariate. And then we model machine learning, machine learning changes using the depth. And here I show you only one prediction, I think I make the one prediction because it's of a depth of 30 centimeter. But I could make predictions through space time for the different times and for different depths. Now, how do you visualize that statically? How do you visualize the 4D when you can only do it in a paper? How do you do it? Well, it's not too bad. Let me show you how we did it. So we did this. So I downloaded all these papers locally, so I don't have to look online. So here's the papers. And this is the 2015 paper. And so how do you visualize the predictions in 4D? You have to make something like this one. So you have the, this is the time. The time. And then you have this direction and then this is the depth. And believe it or not, you can visualize that as GS, the 4D data. You can visualize in GS. In which GS you can visualize 4D data. Which is the GS that supports 4D data? Do you know? It's Google Earth. In Google Earth, you can visualize 4D data. And I will talk about this in the second block. And also tomorrow I have a session on Google Earth. So please come to Google Earth. I will show you that you can actually put these slices and you can visualize them through time. So you can see how variable changes through depth and through time. It's possible. But then you have to move to Google Earth to KML basically. But it's possible to visualize 4D data. I don't know about QGS. Time series. This is a time series example from Wikipedia. I think this is just some noise with the trend. Yes? But so they use it to visualize, this is like time series. 
Time series data can look something like this. It can be very noisy. When you do time, time dimension, then you have to consider I have to understand time series analysis. Time series analysis is a lot of things. I don't have time to talk about all these things. So what I just give you a really, really super short, like a one minute run through time series. So number one about time series is that lots of analysis is focused on so-called decomposition. If you look, this is the original signal, it's obvious when you look at it. So you see there's some trend component and there's some seasonality component. They can be split. You split them. Then you say, this is the trend, this is the reality. And then you have a random component. And this many time series does this decomposition. And then also how you feed this trend, which smoothing you use. How do you separate from what is the noise, what is the malation, what is trend? This is the whole science of time series analysis. One example of the time series data is the coronavirus in the Netherlands. These are the infections. There is a corona dashboard for Netherlands. And so you can go here in this dashboard. It's in Dutch, of course, but it's very well done. I don't know if your country has it, but at any stage it's kind of like in sports. You can follow the scores and tables, but it's a dark humor, of course. It's a serious thing. So you can follow the people getting into hospitals, a number of vaccinations. And these are the positive tests. So we can go. It is also GS. GS is very important, so I can go and find this is Ada. We are here. Ada Rachening. So we have 10 positives on 100,000, so that would be about four. It's about four. So four, yesterday, I don't know, four people had the coronavirus positive. And then you can follow. Unfortunately, they had also a time slider, so you can go through the time. You can slide through the time and see where in space there was more. Now it's very chaotic. It's really like leopard skin, so it's very chaotic. You used to be big cities. They used to be the hotspots, which you will expect. But now even Amsterdam has less infections than some little area in Filsland, I don't know. But this one is a time series. And this plot is, of course, wrong because it looks like the last year in March, there was less infections. So this plot is wrong, by the way. It's bias plot. And why do they give a bias plot? That's another question. I wouldn't have done it. I would have normalized it, so it gives the correct picture. But imagine the correct picture would be here is a peak. Here's a peak that goes down because then there was no testing. The testing just started, I don't know, in April or something. Then when they started testing, then it went down because there was a lockdown. And then there was the, we had the summer school here. Summer school, when we had the summer school, they started going up, but still less than now. And now it's in the, and now this was the vaccination starting somewhere here. From April, vaccination starting, then the numbers dropping, dropping, dropping. And this was the delta. Delta came from, I don't want to put geographical location, but in Netherlands, most likely via England, the delta came and delta was a big problem. But not so scary when you go back to the hospital, people having really problems, they had to go to hospital. Then you see that the delta was not as a big problem as in October last year. So it's about even like seven, eight times less. 
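As a quick aside on the decomposition idea mentioned at the start of this block, here is a minimal sketch in R on a built-in monthly series (atmospheric CO2 concentrations), purely for illustration.

fit <- stl(co2, s.window = "periodic")   # split into seasonal, trend and remainder
plot(fit)                                # four panels: data, seasonal, trend, remainder
head(fit$time.series)                    # the extracted components as a multivariate ts

The same trend plus seasonality plus noise way of thinking applies to a series like the infection counts discussed above.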
So it means the virus is less deadly for us, basically. But you see, this is the correct picture because this is in April last year. So there was probably much more infections, but there was no testing. So they never normalized the data. So that plot is actually, it's a bias plot. It doesn't show you the real picture of what you're interested in. And so you see this was the peak. So Netherlands never went back to that peak from the last year. So we went to some little peaks here and here. So you can say there were three, there were like say three distinct peaks. And then this was like a fourth, fourth milder. And this one, there's a lead, this is delta here, but less deadly. So that's, let's say some good news in a way. The reason why we're doing this workshop is that we knew that it's a less, less deadly, like way less, five times less deadly. So otherwise we would have done everything we're trying. But we said, look, it's getting people get vaccinated. Congratulations to Netherlands. Netherlands is, I think, in the five most vaccinated countries in percent, fully vaccinated. They're top five in the world. From the big, bigger countries than one million, I think there's Canada, Netherlands, Israel, so a couple of countries, they have really high vaccination rates. And I think as far as I know, 90% of people do want to get vaccinated in Netherlands. There's only 10% don't want to get vaccinated. Okay, so you see, this is the time series and you have these components and you have the trend component. So what is this trend component? So we will take a look at that and what is the similarity. So let's look at the temperature. Let's say, doesn't matter if it's surface above two meter, but the world temperature. If you look, this world temperature have these components from time series data. And one interesting component is the long term oscillations of the temperature. And they are connected also with ice ages. So this here was the ice age. Yes, so this is the ice age. Then the ice age finished about 9,000 years ago. And then you have the global mean temperature going up. But it's a gentle change. It's going from minus 0.3 to plus 0.3. So it's a change of 0.6 degrees. And this is how normally the temperature of the planet goes through this interglacial pyrus goes up and down. And there are some oscillations also here. If you want to read about this glaciation and why does the global temperature changes, it's very complex, actually, it's the whole field of physics. But what you have to know is that there is a distinct long term trend and the temperature change. And this is industrial evolution going 150 years ago. And now in this plot, that's why the industrial evolution is scary and global warming is scary. Because in this plot, you see this red line, this is what happened now with temperature. We are going to rise the temperature one and a half degree. It goes beyond this plot. It goes beyond this is a bit older plot. It's only up to year 2000. But we completely disrupted this long term trend in the global temperature. And then when you zoom into years, years and days, then you can see the temperature and also here water content. Temperature is much more regular. This is actual data. And then we just fit a spline. This is the Cook-Far data set. And we fit the spline for different depths. And super interesting. When you look at the soil, soil is like a buffer for temperature. So you see on the surface, you have higher and lower oscillations. 
And as you go deeper in the soil, the temperature stabilizes, oscillates more. And the soil moisture, all the way around, you have actually more smoother oscillations on surface and then more oscillation in the deeper soil. Because the water goes through the soil, it doesn't stay usually on the surface, but it goes through the soil and then accumulates somewhere. So when it accumulates, it becomes more stable. And you see there's this seasonality effect. And this is something that seasonality can happen. There's a daytime seasonality. So you have a day and night oscillations in temperature. And you have the seasonal oscillations. So that's like winter, summer, et cetera. So the temperature still has two cycles. It has the oscillations inside the day and oscillations inside the year. And they're both very systematic. There's always nights are cooler than the day, summers are warmer than the winter, et cetera. So it's always very systematic. OK, so that's the spatial temporal data. How do you visualize spatial temporal data? So you can do static plots. So some of the plots I showed you, this will be a static plot. And you can visualize four-dimensional data. It's possible. And is this objects or is this regions? Am I plotting here objects or regions? Somebody? So obviously, there's no, like, I don't use any vector structure. And we're talking about water content. Right? So water content, what did you say? It's kind of a density. So it's definitely a field. You represent it as a field. So I presented with a grid of data. So I just show what is the percent of the water in soil going from 0.15, so 15% to 60%. So there are some pixels that potentially get to 60% water, depending on the period. So that's a static visualization. And then I can do time slices. Time slices is doing something like here in the QGS. These are the time slices. When I do this, you see these are two days. This is 2006, August 15. And this is 2006, August 18. So when I produce time slices, if I do this, I visualize space time. Yes? So this is visualization using slices. So that's another way to visualize. And then the other thing we can do, we can do animation or interactive plots. And animation interactive plots, this is this thing here that you go and you create animation and you can choose your speed. You can slow down or do it faster. Yes? But you can visualize space time data. So these are the main ways, as far as I know, that you can visualize space time data. The most simple way actually is to have just one static image and then put a point of that time series data and then just show something like this. Then you just show the cross through that moment for that spatial entity. That's the most simple way to visualize. Then you simplify the data from 2D plus time, you simplify it into the time only. And you just have then the time series data. This is kind of the most simple way to visualize. And this is when you go in the data portal here. And if I, for example, click somewhere here, I don't know, then as you see, then I get this thing and this is the most easy way to visualize space time data, just having a one static image and then having a time series of the values for that specific query point. And then you will also visualize space time data, but we only know about the point. Now here's a nice example. This is the package I made years ago, but I'm very sorry. I don't maintain it so well. So I'm not sure if it runs still smooth. What I can do, I can put this data here. I can load the package. 
This is called a foot and mouth disease data. And as you see, I can load the data. It's in the package already. And I can pick up the points, pick up the dates, and I add to the dates. I add a new column, which is the report the day of disease. So foot and mouth is that someone UK where they observed location and time when some cow got the disease. And so that's a space time dataset. And you see I can put the data into the special temporal irregular data frame. SDIDF, this is from the space time package. So I start with having just the dataset looks like this. Let me see. So I have something like this, or it's already converted. Let me see. I have to do like this. So I have it as a simple spatial object first. So this is just a spatial point data frame. So I have the 648 observations. And I have the report the date, but it's just a spatial object. So you see that some of the coordinates, if it was not irregular, if it was a regular, then it would have been repetition in coordinates. But because an irregular, every space time point has a new coordinate. And then I could work that into the space time data frame. And then this one looks like this. So now it's a space time object. And it's a bit more complex structure because it has the data, the values. Then it has this locations and it has the time. You see, and even has some time index, etc. Okay, so it's a different, it's a higher complexity object. In this case, it's a higher complexity object, but it's the most simple extension. And it's called space time irregular data frame. It's a more simple extension because basically it's just a 2d plus time point dataset where time is not kept as a location in the SP in the SP part, but the time is kept in a separate slot, which is the time slot. And if you do it, if you do it like that, when you come to this, when you come to this example, if you use the plot KML package, you can go in and make the plot. You do plot KML. And then we create it. We create a KML file that I can open in Google Earth. And we can now look at it. If the Google Earth starts. No, maybe I'll have to restart it, but we can look at that data, how the values change through space time. Let's see. How do I kill the Google Earth? No, I think I have to check with the land or how was the installation made for Google Earth. I could try to pass this data I made. Let me just see if I can do it. What's here, I made it the dataset. And I think I can pass it to this Windows 10. And it should be visible from my windows. Yes, let's see if I can open in Windows. And I will check, I will check in the break. What's up with the Google Earth under Ubuntu. So here now we're looking at this dataset, foot and mouth disease. And here's the data. I ported it with port KML and purpose I put going from blue color to red color. Blue color is the early cases. Red color is the most recent cases. Have to remove all these other things. And so here's a space time dataset. And imagine now we can visualize it. The guys who make Google Earth, they understand statistics. So you see they have this temporal support. So I can play with the temporal support. And I can scroll to going up or forward and it will go very fast. So why do I use this visualization? Well, it's a complex thing, this foot and mouth disease. And what do you see from that foot and mouth disease? This is so at the beginning there were a couple of let's say outbreaks. So there will be only like three outbreaks and they started spreading. They're spreading all around. 
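In compressed form, the steps just described for building and exporting this space-time object look roughly as follows. The column names and the coordinate system are assumptions; the actual demo code is in the course materials.

library(sp); library(spacetime); library(plotKML)
# assumed: a data.frame fmd.df with columns x, y and ReportedDay
sp.pts <- SpatialPoints(fmd.df[, c("x", "y")],
                        proj4string = CRS("+init=epsg:27700"))  # assuming British National Grid
fmd.st <- STIDF(sp.pts, time = as.POSIXct(fmd.df$ReportedDay),
                data = fmd.df["ReportedDay"])
plotKML(fmd.st)   # writes a KML file that can be opened in Google Earth

Opening the resulting KML in Google Earth gives exactly this kind of animated view of the outbreak.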
And then they spread also further down here. And then again there was just this one, because here they more or less stopped. At the beginning it was very fast, then it stopped, and then there was kind of a second outbreak here. And you see something interesting: this outbreak here happened, but then nothing serious followed, while over here it got bad again. That is the way you can visualize space-time data. Now, is this a region or objects? Come on guys, wake up, it is obvious from the image. Well, what do I use to represent it? I use a vector structure. These are the actual cows that were observed getting sick, so these are objects. But what could I do to convert that into grids? I could calculate a density. The outbreak runs over a couple of months, from March to August or September, so I could take every week and calculate a kernel density of the cases, and then I have a time series of density surfaces; there is a rough sketch of that idea below. Anyway, you see it is a nice thing to be able to go from R directly to Google Earth. You can play with that, and that is something I will talk about tomorrow when I show plotKML. You can easily change the colors, use another color scale, add elevation and so on; you can play with what we call space-time aesthetics and create your own visualizations. The other very nice thing is Google Timelapse. Have you looked at Google Timelapse? You can pick any location in the world. This one is Wageningen, and you can see the Landsat images of the last 40 years played as an animation. I cannot zoom in more because the resolution is limited, but you can see how Wageningen changed going all the way back to the 1980s; it starts around 1985. You can also see Landsat getting better, and you can see the campus growing. It used to be very small; in the 1980s it was maybe two buildings and a farm with cows, and now it is a big campus. You can replay this for any location in the world, and Google also highlights the most interesting ones, big construction sites, tropical forest disappearance and so on. Please play with it; I do not have time for that now, so I will turn it off. Google Timelapse is a history of the planet, except it is a history told through images. There is an excellent book on space-time data by Oscar Perpiñán; he was also at our summer school one year. It has all these worked examples and tutorials, so please take a look if you are interested; the book is just behind the link. A lot of it is available as code, so you can reproduce it with your own data, and you can also make animations with it. If you have never heard of it, it is a fantastic book, slightly outdated now, because there are a lot of new developments. There is the tmap package, for which you can watch the tutorial by Martijn Tennekes from our summer school last week, also an excellent package, and there is animation in that tutorial as well.
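The weekly kernel density idea mentioned above could look roughly like this, again with assumed column names (x, y, ReportedDay) in a data.frame fmd.df; it is only meant to illustrate the object-to-field conversion, not the exact analysis.

library(MASS)
fmd.df$week <- format(as.Date(fmd.df$ReportedDay), "%Y-%U")   # label each case with its week
dens <- lapply(split(fmd.df, fmd.df$week), function(d) {
  if (nrow(d) < 5) return(NULL)        # too few cases for a sensible density
  kde2d(d$x, d$y, n = 100)             # 2D kernel density of the cases in that week
})
dens <- dens[!sapply(dens, is.null)]
image(dens[[1]], main = names(dens)[1])  # density surface for the first week with enough cases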
If I go back for a moment, you can see these animations. There is also a book called Geocomputation with R, a fantastic book by Robin Lovelace and Jakub Nowosad. There you can also read about how to visualize space-time data, for example inside a book: with R you can make an online book or tutorial that contains animations. So instead of a classical book with a static image, you scroll through and the figure itself plays as an animation; look at the chapter on animated maps. It is really magical that one picture in a book is actually an animation. It is not interactive animation yet, but there will soon be books with interactive space-time animation, through leaflet or some other way, so that you can play with the space-time data inside the book, like the things I showed you here. Imagine a book where you could go and play directly with Google Earth or something; that would also be awesome.
There's a space-time grass GS. Very good, Nita, thank you so much. Yes, there is a grass, space-time grass GS, and there is a leader from our summer school from last week, and also Vera is going to talk about space-time grass GS, think tomorrow, and also on Wednesday. So they will be mentioned, it's also a fantastic development that you can do space-time in grass GS. But please, if there are questions, so you people that are online, you can use the chat. If there are any questions, we have about five minutes, and then we're going to make a break half an hour, and then we continue with the modeling, so we'll start doing some modeling, and then we'll do more and more art, and it will be good that you try to follow that in the art markdown. Are there any problems with the virtual box? Do you all have it running? Virtual box, yes? Okay, screenshot, where's the error? You need to screenshot. You can customize, as I said, it's really amazing virtual box, and you see it runs very smooth, I can switch from Ubuntu to Windows, and they're not intermixed. I can exchange data, you see I can exchange data from Ubuntu, I make it in Ubuntu, and I open it in Windows, if I have a problem with Google Earth or something. So also fantastic, you see I look at the same folder, this is the same folder in the virtual box, and this one is in Ubuntu. So that's very nice with this virtual box, so when I turn it off here, I go turn it off, it will say, just save the machine's state, so always do that, don't shut down. Yes, so if I do that, it takes about 10 seconds, it will save everything that I was doing, so when I restart Ubuntu, I get exactly the same, so you lose very little time, so it's actually very robust, it's way more robust than I thought, but there was issues with the first image, first image was working, then we added some software, they didn't have Google Earth and stuff, so we added some software, we met a second image, then we had some problems, and then we resolved it eventually. So the last instruction, let's see, the Leandro, the last instruction said there was all working or functioning, now there's still some issues we will see, but these instructions that I think maybe the Leandro sent it in here, here he sent the last instructions, great news, new version, that's the last one, maybe I copied that to the, no I don't know. So this one was working, and as you see on my machine, it's not no problem to set up, there was only issue with the RAM, I think, so you can customize it a bit, so I went and customize the RAM use, so I increased, I have a 16 gig laptop, so I increased the RAM to 10 gig, and I think with 8 gig you can run everything, no problem, but in my case I increased it to 10 gig, and also I increased, do I have 12 CPUs, and I increased to using 6 CPUs on Ubuntu and 6 on my Windows, so I split the CPUs, that's kind of the good way to do it. And as you see when I start the machine, it will get me back exactly to what I need. 
Let me see, there's some questions, Mark Dev, yes, is the code for your demos in the, yes, so the answer is yes, a question by Nick Ham, so yes, it's all, everything I do, it's in the demo, and the demo, it's loading, the demo is here, under the R training, so everything I will do, it's here, there's no extra code, only this little code I made for the, this is the code for the foot-mouth disease data, so you can play with it, this is by the way, Mark, the MetaMode supports Markdown, so you can, if you make some code, and if you want debugging with the code, then please write it as a code, so this was the code I did to plot, and so I can send a code like this, and if I preview it, it renders the code as a code, okay, so, so you can copy paste that code now with you, and it's going to create that space time point data. Let's see, how is the Python group doing, I think it's going okay, we can watch them on YouTube, if you get bored in my session, you can watch them on the YouTube while they are going upstairs, but yes, questions. No, no, you can ask, they hear you online, they hear, it's no problem, so you can just ask here, please. Sure. Yes, yes, yes, yes, I don't know if you do this integration, so there are two parts, this one is the disaggregation, the second part is that you take all the code and that you take statistical aggregates for that polygons, so you can take a mean value, minimum value, highest value, so you take the statistical aggregates, and then you model it as a space time matrix, and then you just put the results to polygons, that's something very simple that comes to my mind, that's very simple, but they are caveats, so if the polygons are like the large polygons, small polygons, that can have different effects on the predictions, so that increases complexity. The disaggregation is another part, so you try to bring these polygon values, you desegregate them to match the grid values, and then you can use all the pixels, something I will show you today, I use all the pixels then, and I build a space time regression matrix with all pixels, but then you have to be careful about the resolution because if you get, I don't know, one billion pixels, then you cannot model it, so what I do at your P I will show you have a 30 meter resolution daytime, and then I aggregate to one kilometer, and then I bring everything to one kilometer, and then I match on the pixels and I fit machine learning, and that works. So there are two parts, the desegregation and aggregation, but when you do aggregation, then you don't take only the mean value, you should take more, more measures, and in machine learning it's very nice, you can extend, it doesn't have a problem, like in mean statistics you have problem if you have too many covariates, you have a problem if they overlap, multicollinearithmic, in machine learning, no, so you can extend not to infinity, but if it's worth it, you say, well, it helps model, extend, then you take mean, mean, minimum, you know, variation, interquantity range, whatever, you have lots of way you can aggregate. Any more questions? Very interesting question, good. Any more questions? One more? Sure. 
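As a compact illustration of the aggregation step just described, here is a sketch with hypothetical file names; the point is simply to derive several statistical aggregates rather than only the mean.

library(terra)
r30  <- rast("covariate_30m.tif")                     # hypothetical 30 m covariate
r1km <- c(aggregate(r30, fact = 33, fun = "mean"),    # roughly 1 km aggregates
          aggregate(r30, fact = 33, fun = "min"),
          aggregate(r30, fact = 33, fun = "max"))
names(r1km) <- c("mean", "min", "max")
polys <- vect("admin_units.gpkg")                     # hypothetical polygon units
agg   <- cbind(terra::extract(r30, polys, fun = mean, na.rm = TRUE),
               terra::extract(r30, polys, fun = sd,   na.rm = TRUE)[, -1])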
Yes, the process is very modest, then you have one state, and then basically your predictions go to a next state iteration by iteration, this is a process based model, but you implement kind of physical formulas, so you understand the process, you kind of simulate physical behavior, when in data science you don't simulate basic, data science is actually primitive, I mean primitive in a sense, ignores physics more or less, it's just you look for patterns, you have the patterns, you have correlations, you look for repeating things, you look for significance, you want to distinguish between noise and signal, so that's data science, that you don't understand. This thing we did here on the mapping the land cover, this is purely data science, we don't understand what drives the urban growth, we don't understand the processes, it would be very complex to have it for Europe, so we only have these points, and these points allows us to find, because we can find a relationship with the satellite images, and then if we find that relationship then we just apply that relationship to all the pixels, and that's how we can map, but it's data science is quite basically, it's primitive, it's kind of like data mining, looking for data patterns and relationships, but without understanding these patterns, why do I have these patterns? We just detect them, and the hybrid is that you focus the primary is the processes, you model the processes, and then you have the data, so let's say you do have a lot of data, then you re-calibrate anything about the process you do, any parameter you can re-calibrate, so you still data science, use the data science, but for calibration of the process based models, and that's super interesting.
|
The first 30 minutes are dedicated to software/libraries preparations and user support; Software requirements: Python, Jupyter, QGIS, GRASS GIS, R. This tutorial starts with the explanation of general concepts and main advantages of Docker containers, answering questions such as: What is a Docker image and where to find it? Starting with the docker image opengeohub/py-geo, which tag/version should I use? The session also shows how to install new OS and Python packages inside the container, how to share files between the host machine and the container, and OSGeo Live ready to use in VirtualBox.
|
10.5446/55231 (DOI)
|
In this session, we will explain the Cloud Optimized GeoTIFF. So the title of the session, it's really working with Cloud Optimized GeoTIFF in Python. And we will use the notebooks that we provided, and we already used Cloud Optimized GeoTIFF files a bit in the past sessions. And it's important to emphasize that all the raster outputs that we are producing in the context of the Open Data Science Europe project are Cloud Optimized GeoTIFFs. So what we will present here, actually, you can use for all the data that we are generating. In my presentation, I will provide a short introduction to this format, the acronym is COG, and we will access it in QGIS and show how to do it with GDAL. Actually GDAL is the base library that implements the access to this file format, and now it's more like a combination of file format and access protocol. But QGIS, rasterio in Python, all these softwares and frameworks actually use GDAL in the end. And in the Jupyter notebook, we will do an exercise to clip a region of interest and to calculate a time series, so how to access the data, and at the end of the presentation we will discuss STAC. We plan to implement it in the project, and we believe that it's a nice protocol, a nice solution to make all these Cloud Optimized GeoTIFFs available. So what is a Cloud Optimized GeoTIFF? It's a regular GeoTIFF. And if you think about GeoTIFF, actually TIFF is an old format to save raster data, images; I think it was created in the context of image processing and photography. We call it a lossless format: you can compress the data and save it, and later you need to decompress, and you actually get the original data back after the compression, so you don't lose any information because of the file format. And on top of it, if you work with geospatial data, for sure you already saw some GeoTIFF images: on top of this format the geospatial extension was developed, to embed the information about the projection system, the size of the pixel and the spatial resolution. So Cloud Optimized GeoTIFF is a kind of advance, an evolution of the GeoTIFF, and these are the three main aspects that make this format really special and really useful for most applications today: it has lossless compression, it provides tiling and overviews, and you can access it through HTTP range requests. But what does that mean? This format was specifically developed to work in a cloud environment. In the cloud you need to minimize the access to the data, because if you move your data around you lose performance, and if you minimize it you can really optimize your workflow and invest time just in the computation. So the first part of this format is that you have different possibilities of compression. And what does this compression mean? In the last session, when we loaded that tile that I presented, just our pilot tile, you have more data in memory than on disk. It means that when you save the data, that matrix, if you think of raster data it's a matrix that could have multiple bands or not, you save this matrix in a compressed format: you compress it and you save it to disk.
So it will occupy less space than in memory. But you have a cost to do it, you need to run some type of compressor algorithm, and you encode the data and you save it. But when you need to load it again, you need to decompress. So it's like a zip file. So to decompress, we also need to run some decompressor, and it will take some time. So a good format, a good compression format, it's actually, it provides a good balance between of it. Because if you compress too much, probably it will take more time to decompress. And sometimes it's not good for your application. But the most common compression formats that we have available in the GDAO implementation, it is deflate and the ZW. So you can just select this format. And when you save your GeoTIF as using GDAO, and you can see that the file is maybe five times less. The files, you save the file with occupying five times less space. And so this is really important, because of course it will minimize the data transfer between the cloud, in a cloud environment. And the second part, it's the tiling and the overview. So when you have like a big GeoTIF, so imagine our GeoTIFs for Europe, we have more than millions of pixels. But maybe you are just interested in a part of that pixel. So that image, so just part of the chunk of pixels. So if you have just one big chunk of data saved in the disk, you need to load everything, decompress everything. And sometimes it doesn't fit in the memory, in the memory, for example. So the tiling is actually a nice way to split the data in regular grids, as we are doing in our processing workflow. So you can define a tile size, so it could be 1024 by 1024. But it's just a regular grid that you will split the data and you can access parts of this data, just recovering from the disk, just these parts. So but on top of it, specifically, like from the cloud optimized GeoTIF, you have the overview. So considering this tiling system, you can provide like a down sampling representation, down sampled representation of this image, what it means. If you think in the whole image that we have now for Europe, maybe you just want to visualize it. But just to visualize, you need to load everything in memory. So it's possible actually do a type of pre processing to load it and generate like a short version of that image with less spatial resolution. So instead to use all the pixels, you can you need to use all the pixels to generate it to have like a reliable representation. But you just have like now a shorter image with less pixels. So it's a kind of I don't know, instead to have a three and a 30 meter image with 30 meter of spatial resolution. Now you have one kilometer on one kilometer of resolution. So it loads way fast. But all these concepts that I'm explained to you, it's already inside of the format. So GDAO manage everything it for you. And using these concepts, if you put this file, compressed with the tiling system, and with the overviews calculated, you can put in a web service, or a kind of HTTP file service, and you can access it through the network. So and with the network, you can access using this range request. So instead to access the file locally in your disk, you are just accessing like a URL, as we did in the last training session. But now we are, we will do it more and you will see how it works in like in a practical way. So you are this, basically the cloud, cloud optimized the geotip, it's a combination of all these concepts. 
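As a rough illustration of how such a file can be produced (assuming GDAL 3.1 or newer with the COG driver; the file names are placeholders, not from the session), a single gdal_translate call in a Jupyter cell is enough, and the driver takes care of the tiling, the overviews and the compression:

# Jupyter cell: convert a regular GeoTIFF into a Cloud Optimized GeoTIFF
# (input.tif and output_cog.tif are placeholder names)
!gdal_translate input.tif output_cog.tif -of COG -co COMPRESS=DEFLATE -co BLOCKSIZE=1024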
So and more recently, you can use it and host like in service like S3, and from Amazon, or hosting a Google cloud storage. So instead to have it in like a web service or put your own server, you can use like services that are specifically designed to provide like data through the cloud object format. And mainly, the difference between the file storage and cloud storage, so file storage, it's the file system that you have in your computer, each folder that you enter, you will like enter in new folders and you can see the folders, the data, so actually the data is the files, your files are in the last part of this tree structure. But here in the cloud storage, it's a total different concept, you just have a chunk of data, and you put it on the type of service and you can access it through some URL, but mainly you have a unique identifier. So you avoid all this navigation across the file storage. So the best way to use the cloud optimized geotip today, it's really hosting a cloud object service like S3 or Google storage. But of course, you can host it, you can have your own machine and see it, but if you are managing like and dealing with big files, it's better provided in a cloud environment. So and today we are seeing like we are seeing a lot of like the big players are really using this format. So USJS, for example, they provide all the like this book explaining how to access the Landsat data through cloud optimized geotip. And you have this system in the Google Cloud. Now you have available also Landsat data and Sentinel data. So and all these files are available through cloud optimized geotip and in cloud object storage. So there are multiple possibilities to access, for example, the Landsat data. And you have also in the AWS. So it's the same archive. And of course, these cloud providers are really interested in put this data available to allow and other applications to be developing side of their infrastructure. So for example, if you have one processing workflow that use, for example, the original Landsat 8 or the Sentinel, and if you put all your code inside of a server in the AWS, you will minimize all the traffic and you can do a lot of processing and just downloading the output, for example, like a land cover classification or a pre processing for the Sentinel and the Landsat data. So and in the context of the open data science, we also are using like this cloud optimized geotips. And mainly we are processing all this data in different servers like local servers, cloud servers, but the production of the cloud optimized geotip, it's mainly with GDAL. So we all the files that we are providing here in the open data science, they were generated with GDAL. And it's important to emphasize that the GDAL user was the version actually here, it's equal or my bigger or equal 3.1. So it's when the cloud optimized geotip driver was was really implemented and released in the stable GDAL version. And so we generate these files locally using our servers or other servers, but locally in the server and later we send it to there is a self hosted cloud object service. So it's compatible with S3. And it's a kind of S3 that you can self host in your own infrastructure. And we have the wasabi. So it's a cloud based object storage servers. It's really similar to the Amazon, but it's way cheaper and you it's compatible with S3. So right now our data is in there in this wasabi and in our local infrastructure hosted in MIIO. 
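Just as an aside, a sketch with a placeholder URL: once the file sits on such an object storage, you can already inspect it with plain GDAL over HTTP range requests, without downloading it, by prefixing the URL with /vsicurl/:

# Jupyter cell: read only the metadata of a remote COG via HTTP range requests
!gdalinfo /vsicurl/https://example.com/data/layer_cog.tif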
So and what we will do during this training session, it's mainly access the data that it's available in the wasabi. So I will provide some or else and some instructions of how retrieve this data and how work with that and what are the advantages and all these concepts that I'm explaining you will see working in a practical way. So to start, first I will, so this is a screenshot from our web viewer. So the first thing that I would show is if you click in this link, it's the same link that I presented yesterday. So mainly this are the this image here is the Landsat RGB mosaic generated for 2019 for the summer season. So summer it's the best mosaic that we produced because it has less cloud interference and if you play with the opacity, you can just see the other layers here and it's available here. So this is the RGB Landsat. So it's a file with three bands, red, green and blue and you can back in the time and see different year. So each year here it's actually one different file. So we generated this cloud optimized GOT for all the years and actually it's a tree band and specifically this file after all the compression that we generated it's about 10 gigabytes. So it's a big file and to work with that you actually you don't need to download everything in the old-fashioned way. So what I will show now is we have this URL here and you can copy this URL. So the first exercise here will be open it in the outside of the virtual box. I'm just here. Okay, it's here, right? This is the Quantia. So what we will do now is mainly get this URL and open in a Quantia as a service. So because for example in the Quantia you have that plugin to see for example Google Maps, to see Bing Maps, but it's also possible access, okay, good, thank you. It's also possible access cloud optimized GOT tips and one thing that it's important to generate cloud optimized GOT tips, you need to have a new version of GDAL but it's fully compatible with like old versions. So actually to access it you don't need like a really new version because they GOT tips have pretty standard format and it's fully compatible. So what I will do now, so let's get this URL here and we can copy it. So it's a LEN over RGB Lensat blade percentile 50 and we just put the the summer image but this is an image from 2011. So and now here add layer, raster layer and in Quantia you can open like a raster layer locally here but you have this protocol HTTP sCloud so mainly we will use it and again this will be executed actually by GDAL that will access this file and bring the result for us and and don't be confused this is not a download so it's actually like accessing part of the data. You need to remember that this data has 10 gigabytes if we download it maybe it will take the whole session. So actually what we did we did several of pre-processing and using this cloud optimized GDAL if preparing the data to allow the people access just part of it. Just an overview just I don't know some pixels located in varling and these libraries and these software and in some libraries can manage it and access just these parts of the data like in a really seamless way. So if you add it here it will that can take a not too much time but it's not as the same faster as the same in your machine and yes the first thing here I will disable this functionality. 
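For reference, the same remote access can be sketched in Python with rasterio (the URL is a placeholder for one of the layers; GDAL handles the HTTP range requests underneath):

import rasterio
from rasterio.windows import Window

url = 'https://example.com/data/lcv_landsat_rgb_2019_cog.tif'  # placeholder URL

with rasterio.open(url) as src:
    print(src.profile)        # driver, dtype, CRS, transform, block size, ...
    print(src.overviews(1))   # decimation factors of the internal overviews
    # read only a small window (pixel offsets chosen arbitrarily here),
    # instead of pulling the whole multi-gigabyte file
    patch = src.read(1, window=Window(col_off=2048, row_off=2048, width=512, height=512))
    print(patch.shape, patch.dtype)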
The first thing is you can see that it's just a more like a black image but I really don't know about it but and I don't understand but when you load the cloud optimized GDAL for some reason Quan Chi S doesn't enable this contrast enhancement so you need to enable it and you need to select this because it's a nice way to apply like an enhancement and if you and to enhance the data if you need to click apply so now you actually it's basically a histogram and stretch to these values here and it will provide a better contrast to see the image and if you click here you have the same image. So the first time that you load it's not enabled by the full in Quan Chi S maybe in other version it is but the version that I'm using it's not enabled by the full but you can see you just change here properties enable this stretch to min max cumulative count and now you have a nice image so and okay and with this image so this is this is not a label mass service it's not like any of this web GIS servers it's actually real data and you can see that because if you enable this plugin here here it's a you can use this plugin that we set up so here you can actually access the pixel value so and we are talking about three bands here red green and blue and those are the pixel values of these bands so the spectral values rescaled to a byte format so it will the range will be between zero and 255 so this is actually real data and you can zoom in and if you zoom in in some place so you can see it will load so it's almost the same speed of a local image if you have it so sometimes it's almost it's the same speed basically and so and again you can check it's full resolution so this is not like something as google maps service or something like that it's you actually you are accessing the data but what it's happening here like behind of this type of magic each time that you zoom in you actually you are seeing like a part of that layer pre-computed like that pyramid layer so this image here that we are seeing that it's not the full resolution image because it's not necessary bring all the pixels I don't need to have the full resolution of this image to provide the overview in this scale and it is level of zoom so what it's happening here all these pre-visualizations and these overviews this pyramid that pyramid layer that I show in these slides it was pre-computed and now Quan Chi has it's just accessing different levels of these overviews and we will access it in Python so you actually you can see the same thing in the code but here it's happening like by the library like inside of the software and it's why it fast and mainly this concept actually by the way it's the same concept used by the Google Earth engine so when you for example process some if you use Google Earth engine sometimes anytime you can perform like I don't know some NDVI calculation for the whole world but actually you are not processing a full resolution image you are just processing a downscaled image and even if you are working with Landsat data it's just a down sampled version of that Landsat data maybe to one kilometer five kilometers that it's that provides an overview for you and when you zoom in in the map you are actually changing between these overviews so and as we are providing it for in a using a file name convention and like and big format here you can play with this year so I can get the other year here just changing so here it's the same URL I just because Quan Chi has kept it for me and that's nice so and you can see I'm just changing 
the year because the interval it's the same so I have the same interval it's summer in the context of the project and we are accessing now this data it's from 2011 and this now I will access 2019 and again I have an image without contrast but now I will just copy the contrast from other image for the other image so here if you flip with the double with the right bottom of the mouse you can copy style and I will copy to here let's see if it will work okay that's nice because it means that I have two images with the same range of values what makes sense because we are harmonizing we are using a product that it's harmonized and and this it's a very good example because this image it's mainly produced by lens at eight so the pixels here were collected by lens at eight and after a lot of pre-processing because we are using a product derived from the I call it produced by University of Maryland we took this image and we did another level of processing and aggregated it by season and here at the end we have a full coverage for the whole Europe for two different years that actually were acquired for by different satellites so it's it's pretty interesting and and let's zoom in here I will disable it and now you can compare and yeah the image it's harmonized but you can see that the quality of 2019 it's better because satellite it's it's a better has a better sensor and you can go to any place in it but one thing that it's important is remember this is actually this is a this is a access to a real data but here you are quantia as it's playing with these visualizations to provide a a good and fast overview for us but you have a lot of raster processing tools here in the quantia s and I don't know maybe you want to clip raster by extension for example so if you use for example if you provide the extension for this whole area here like including multiple countries like a big area it will take a while because for to perform this operation you need to work with the real data so the quantia s will download it and really will clip this data for you but it will generate a big image and probably the process will uh and frozen so this is nice but you need to be aware that it's it's actually a proper service to access like parts of the data but so I will do something here so I will just get this area here and you can go to raster clip raster by extents and I hope it worked because I didn't test so I will use my map canvas and you can see it's I'm using the right projection so all the metadata properties and everything was derived by this cloud optimized edutif this is a remote file it hosted in our server and this is actually the command that you can see for with the gdalf and at some point here you can probably I will do uh I will enable this compression press compression and let me just check this so in the gdalf so now I will use the gdalf because actually quantia s it's just generating a gdalf command for me and yeah compress so here you can see all the compression algorithms that we were uh that are available and we talk about the lz lzw and the flake so I will use here compress and we'll put this one and this is actually the command so this window it's just to generate this command and you can see that it's using the vscure it's a common url access library but gdalf it's using it to access the file remotely and it will provide the output here and the output it's it's it will be right written in here yeah okay and the important command here param parameter here it's actually this one so this is the window 
so it's the extent of the area that I selected so and now I produced the local data so now I have some local file here and you can see it's possible to see because now the extent it's different the colors are different and I will use the same style and okay this is the local file so what we did here we access it a remote file we navigated across europe and selected the specific area and for this area I downloaded like the data and now I produce a local geotiff and you can see you can see the location here now it's in my machine but it's in the temp folder because one chis managed it for me okay this is nice but uh it's it's not like and do will do it for several images thousands of image so so now we will use the notebook to actually do the same thing and a bit more so I will go to here and I will restart my kernel and clear outputs so and okay and before I continue I would like to show that it's important update the repository because we are testing this code and do some changes and fix so you need to open up terminal and it's odsa work here code odsa workshop and this is a git lab uh repository and you need to do a mid pool it will bring some new files if you don't have yet and for sure for after the the training session we will do we will add for example the presentation so you can do like one last pool and get the whole content that we presented including data and everything so and it's better to use it outside of the virtual box but to practice you can use the virtual box and do it later so now we have the new version of the code so it's updated and if you click in the Jupyter lab you will see and these files here so I will just navigate to open the files that we have in the repository so the asc workshop and python training and this is working with cloud optimized geotips okay and this is our notebook so mainly here we will access different data because we will access the red band so it's just one band it's to demonstrate but it's the same concept so the rgb mosaic it's three bands and it provides a color composition but here it's just one band and and but it's the same concept so let's start with the library so here we are mainly importing the rest radio numpy yeah it's better right here I will move it here yeah so okay and let's use the same code that we are using to do this and repository definition so our data it's here and we will use this as a cog output so the goal here it's really access the data as we did in the quantity s but now saving it like using python so programmatically so we can really do it it's automatic and iterate over several images so but I like to explain I would like to explain first the these overviews so if you so this is the where else so it's it's hosted in the same service and here I'm just passing this where l to the rest radio so and here we are using actually the raster you you could use the read raster to read it the you map package but rest are you'll provide some uh functionality to access the data directly so and in this data here we will get all the overviews and I will show it to you but most important to using these overviews so that the pyramid layer the parts of the data that were pre-processed downscaled to provide the sort of visualization and a preview for us it's not like the original data but was pre-computed we will use it to actually read the data using this function here so I can define an output shape so and internally raster you will understand that I'm trying to access just not the original data but the overview of the data and if 
you do it you can see that here we have these levels of overview and the last one is this so the image will be rescaled like in this proportion and actually here I'm just getting the last overview so the small the small one the smallest one and this is the size of the image so it's actually like a pretty small image just a thumbnail but it provided me a good visualization for the Europe so and what it's nice here is the cloud optimized geotiff the format manage it for you but we actually we pre-processed it so it's it's why we have these overviews calculated so if you generate a cloud optimized geotiff using the gdoll it will take more time than a regular geotiff but it will allow to the users really access this overview and have a thumbnail of the data it doesn't matter if it's it's word or it's the the Europe it's just a matter of to put to run and it will pre-compute everything and prepare in a nice way to be consulted and it's what we have here but maybe this is true course I don't know but you can choose here a different overview level so and I will now get this one so I will move true positions considering the end and I will just print here oops here okay oops I'm using the okay over hold so you can see now I I changed the overview so now I'm using a different level of overview it's not more it's not more the smallest is more one and now you can see that the image it's a bit bigger and yeah I quality it's a bit better and yeah I can play with that but of course if I go to the level minus eight or the first level I will bring much more data and it will take a lot of time because actually needs to download all the data so and what it's nice for example imagine you here it's actually this is actually real pixels it's it's why I can I can do this plot but I I in here it's just like an a numpy shape a shape so we'll just print here so and of course it's starting with zero because it's this this part of Europe but it's it's an umpai so imagine you can you can calculate if you have the red band this level this overview and if you have the near band you can calculate the n dvi in this little image and you can see the result across the whole Europe and check some problems actually it's how Google Earth Engine in n since actually so you can do the same thing and okay so now what we will do is this I will use this ipai leaflet to present this web map for us so this is a nice plugin where you can put some different base layers actually you can put the google maps here here you can insert some draw controls and some layer controls and this is actually a interactive map and you can zoom in and you can move around so yeah it's it's just a map and you can it's it's a pit that you cannot plot directly the the cloud optimized edgative and they are working on it there are some libraries trying to do it also and g a map map leaf leaflet map something like that but maybe for sure in the future they will integrate this functionality but now we are using this uh plugin here just to draw a rectangle so it's just a regular geometry and you can see that I'm I have this draw control here and I created it and I just added to the map and with this draw control it's possible retrieve the bounding box so here it's a it's the bounding box that I created in the inside of this map and now I will use this bounding box and it's important to remember that I can see these projections are not the same one the projection system it's not the same one from the open data science here so you need to convert them so I'm doing 
it here so I will convert from this WGS84 to our projection system and I will generate a proper window a proper extent and using this extent I'm actually generating a window object and this window object it's it's just like a representation of a window over a projected raster so here this call off and this rule off it's just like the offset considering the beginning of the image and here it's the time the size of the data that I will retrieve so it's a nice uh system provided by rasterio and now and you you you need to create it from the uh from the your extent and now what we will do it's actually get the URL so here's the data wrap band we will use this read raster our function to read this cloud optimize a guptif but as we did in quantiis I will just send a window it's possible read the whole data is if I remove this spatial window I will retrieve the whole data but in the most part of the case the users don't want to do it it's it's it you are just interesting some part of the the whole data set and now if you execute it it will get the data and using this is so this is a noon pi array as we are doing the last uh pintal training sessions and now you are plotting it uh here it's just a rescale of the values for between zero and one and you are plotting it using these colors here and you can see that it's the same region that we generated here and this is a workflow that we created here just to enable the users to access like other specific region in Europe without go to the quantiis so you can go to any other part and you can do the same thing now I think I'm in German and you can draw a new window here and if you'll see this function here it will get the last geometry so now I have two geometries in my object and I can repeat the process so I now I have the extent the window and I will generate again and now I have other data and this is just a noon pi array I'm accessing directly and you can play with that now and this is nice because imagine if you you could go to the open data science Europe you can maybe you could you would download the data but you need to imagine that you have a we are providing a nice functionality in the web viewer to download it so you can click you can download it and when you download you need to save it so you'll have the time to download the time to save in the disk and the time to load it again and here everything is happening on the fly so actually you don't have the download you are downloading a file directly to the memory so it's not necessary write anything in the disk and this is a nice advantage and here I just changed the I actually don't know where this image will just put the test and now I'm saving it in the disk so I can save it and you can optimize it and click and get different parts of Europe automatically and so how we are with the time okay so now and we we did see it with the quantia is but I'm using here the GDAO and this explanation point will be formed to Jupyter notebook to run a command like in a terminal mode and you can see here that I'm using the GDAO to and if you use directly the GDAO you need to inform this first part here this prefix before the URL so and with this URL here it's the same one and you use the GDAO and you have like all the parameters for this image and here it's the original size so it's a big image and and you can have it and you can see the projection system this is the original one and we did it in quant with quant to GIS so but here it's just an explanation of how to execute it by yourself so you can send 
again a projection window and a projected window and and you can access the URL and save the output here and so and here it's just like a comparison and it's comparing the two images that were produced by the Python and by the GDAO so okay it's because I changed the the the bounding box so I will not execute it the image maybe I will execute it just to show so you can see that it's the input size so it's a big image but actually the GDAO knows and this is this is the concept that I explained in the slides working here so I have a big image but it's not necessarily downloaded at all to access parts so it's a kind of random access mechanism that allows to go specifically for each part of the rasters that I'm interested and here you can see this is the tiling system so this image this all these pixels here that we can see so it's a big image it's not just one chunk of data actually it it is split and different tiles and each tile has like this regular size so you are in the file disk like in the when the file is saved in the disk you have like a smart mechanism to access just these parts of these tiles considering the way that it was prepared and oh because I moved I changed the name so I think it's test yeah so you can see this is the file that I generated with Python it's for some region in German I think and here is the file that I generated with the GDAO Translate and it has the same projection system the same pixel size of course and and the block it's less than the tile system but has less pixels and because it is for different regions those files so till now we access this data we clip it we save it and most importantly it's possible access directly to the memory but there is other type of analysis that it's that might be interesting also is because you need to remember that these files the Landsat archive that we prepared here they are really temporal data so you have 84 images with this size for the whole Europe for each band and for each percentile so you can have like a red a spectral imagine that you can retrieve for one pixel you can retrieve the pixel for the whole period and see like a time series and it's what we will do here and but we also have an NDI time series so we will access it and actually see the result because the NDI it's more easy to understand the signal and but you can do for any kind of temporal data that we have even for the land cover so you can use the same code to retrieve the same information that we are providing the website you can get one pixel and see how the land cover changed across 20 years but for the land cover it's just one layer per year with the Landsat and DVI we have four layers per year so it's more data and we will use here the same concept but as it is a time series analysis you need to work with points so I will get one point here so in here you can just put one point put here in the vision maybe for this exercise it's good to change to the Google images and so actually let's do it here in the vision I leaflet so base maps so we are using these libraries pretty nice and I will change the base layer to satellite image so and you here you can see all the options that are available so I think the best one needs it's hydra asery let's try this one so you you can just select the my specific base layer here and I will replace the original one this one I'm ethnic it is better so and now I will go through okay not so better okay so yeah there is a limitation if there is a what I has it layered there but it's just to get the pixel in a vegetation 
here so now I have a marker in some vegetation some vegetated area and here I'm using the same approach but now it's a point so it's not the rectangle more anymore and this is a WGS coordinate system and I will use it to again convert to the projection system from the that we are using the open data science Europe and to do it mainly I will open the cloud optimize the geotip in the same way and but instead to retrieve a window I will just get the pixel value so there is a nice functionality to a nice method inside of raster you do a sampling so you need to send like a point of coordinate in the same coordinate system of the image and it will retrieve to you the pixel value so yeah so it's uh using this coordinate the rest are you it's going to the file hosted in the s3 and access just one pixel of this file and this is the value that we are receiving and here you can see I just changed the world because now we are accessing an nvvi data that were rescaled to zero to arrange between zero and two hundred and fifty five to reduce the size of the file and however in this example we need to access the goal here it's really access all the data so we have a time series of nvvi ready to go in this service so you just need to provide like a all the world that you want to access and here I'm using the same concept that we use it till now but now applied to the cloud so I'm creating like a list of 4 else that are sorted by time so and here you can see that I have this base URL and here it's the position of the date field in my in the world and here are the dates that I want to access and of course you need to know these dates and right now we are organizing the system to provide like a comprehensive catalog of all these layers and I will talk about it in the end of my presentation but we are putting this data in these these dates specifically but the data also the URLs in the documentation that we are organizing for the package and in the tutorials of the website and you can see here that it's just a a matter of to change the year and create all the URLs and here it's a example with the URLs that we created and now you can check how many URLs we have here is 80 because actually this is the old version of the NDVI we don't have the 2020 here but we already processed it but now we are updating the service and you can see that that here it's the version 1.0 so some of the tutorials we used the version version 1.1 and and now using this where else and we will create like a we will send it to a function to do this accessing parallel so first thing we have this read pixel function that will receive a URL so and create a coordinate app and a specific for a coordinate system it will open the called optimized geotiff using rasterio it will send the coordinates and select and just retrieve the value and return it but this is a function so it's just a function definition and here are the parameters so and now the parameters it's just a list with two tuples and you can access you can see that this list has the name the URL for the cloud optimized geotiff and the coordinates and here are the coordinates and of course it's the same coordinates but now we have a list of parameters and what we are doing here I will execute it so we are using the parallel helper to call this function in parallel using all the cores that we have available and sending this 80 uh this list with eight uh elements so actually here we are accessing in parallel the cloud optimized geotiff files to retrieve the pixels and we are putting this 
result together in pandas data frame and it's taking a while because now we are accessing all the files but uh it's multiple files but just a small part of the data actually the smallest part of the data that it's a pixel and you can see here so it was not so bad and now so this code here do everything actually so this actually again this is a parallel function that we will call the read pixel so and read pixel it's a function that read just one pixel for one image considering coordinate leaks actually could be more coordinates so this is run in runs in parallel considering all the URLs that we defined using all the cores so if you have a eight eight eight cores it will work using everything and the result is actually like the pixel values here and the rest of the code it's just to prepare the data frame so here we are because we have the file so I'm getting here the name and I know the coordinates and I'm getting the putting the name of the file here and as this file name it provides like the start and the end date and we can put it as separate columns so it's just a matter to split this file and create specific columns for it I'm putting the longitude and latitude that I did consult and here are the values and so and you can see it's one value for each image and it it was produced on the fly so we consult we did consult everything and using this and data frame of course we can plot as we like to see like this is a time series that we retrieve right now and but the values are weird for NDVI so here I'm just in I'm just rescaled the values and you can see the variation here and even for the lens set because it's it's difficult to produce a time series like that for a lens set because it's a 30 meter resolution but it's nice because we have four pixels per year and it's possible capture part of the seasonality inside of each year and sometimes there will be some problems for sure and maybe there is for one specific year and season the gap feeling didn't work well but in the in general we have a like a nice product that it's ready to be accessed using all these different protocols and technology so I will back to my presentation just to summarize yeah this was the yeah what you have here yeah so yeah so now we access the so I presented how to access it in quantias how to access it in python but considering all the layers that we have available in the open data science Europe we are developing a quantias plugin specific to access this data through cloud optimizer dutif and the main advantage of this plugin it's it will like have all it has actually because all the available layers so for example you can check that the land cover it's there which landsat data is there and the ndvi so you can play with dates here but better than that you can present it in the quantias and see like in the the same color scheme in the same style that we are using the open data science web viewer so for example here is the land cover I didn't load it in our previous example but if you load the land cover data it's nice you can load in the quantias but you will have I don't know 33 plus and just to select the colors it will be a pain just to start checking the data so we are providing the styles also so that it's available in the open data in the EU map repository actually it's in the spatial layers I will show it but this plugin do all this work access the data get the style and show it directly in quantias so and it's under development but I don't have the plugin here set up in the you have it here okay you have 
it here okay nice so yeah we have it so that's not my machine so let's see if it's working so okay so you have a problem here yeah we install it but you need to check the the version yeah I will just show the repository and but at the end when we we are really closing and finalizing the first version but the planets really provided in the repositories of quantias so but you can see that it's it will appears here and you can select the layers so but for problem of dependence it's not working properly here and but I would like to show the EU map so here in the the geo harmonizer repository you can see that it's the same group for the workshop repository you can see this spatial layer and here we are providing all the styles to open the data so for example if you load the quantias layer you can download the styles from here and for the land cover it works pretty well and so yeah we are developing this plugin also and my so to close the my presentation I I would like to talk about the stack so we we are working to set up it because now if you think we are organize all this data and this raster files in a cloud optimized geotube and providing it as a cloud object service but you need to enable the users to access this data so it's it's a nice way to access but it's important to provide like a kind of catalog with some metadata so we have the geo network but the geo network it's not specifically for raster data and it's it's it's more like specific for geospatial data but this stack it's it's a it's a specification that provides this structure to put this data available in a in a proper catalog and if you see here you can see that the most part of this big players they are using it so you have data from google earth engine here and microsoft planetary I think so you have sent in our data so this is actually so you have the USGS so these are public catalogs that can be accessed and you can retrieve like you can first you can consult here and access the metadata and find your specifically your image and later you can get like a URL as we are doing and the Python training session and with this URL you can actually access the data so we are working to establish a stack server for for for open data science and the goal it's really improve the access to the data and here are some catalogs that we I presented and yeah as conclusion of this presentation it's it's easy to claim that cloud optimize a geotip it's the state of the art standard to distribute data so and if you use properly it's it's like a geospatial database but for raster and and for me it's a nice combination of accessibility to the data but also to it provides a quick way to visualize it because the most part of I don't know 30 40 percent of the work with raster data it's visualize the data so if you just provide access to a abstract array and you can see for example the data it's it's for me it's not it's not a proper way to work with raster data and so the overview and this pyramid layers structure are really like good and fits really well in a workflow for who works with the geospatial data but on top of it we are aware that we need to create a stack catalog and provide this data also like with some sort of metadata and some we need to improve the way to access this where else because right now it's in our gith lab and and stack it's the right way to do it and yeah that's my last slide and that's all that I have to present let me see here I'm trying to find zoom no no yes it's possible let me that's a nice question so the question was 
if we it's possible construct a time series for multiple points yes you can see here that when you this is the function and this function it actually it works with multiple points so we can try do a risk test here but let's see so but I don't know if the data structure here it will work because if you have multiple points the structure here it will be a bit different so this code need to be changed but the most important the most important is this here so actually you can send multiple coordinates because this t data source sample it actually it access the geotif file but if you have a list of coordinates it will retrieve the both the both values but it need to be changed here in the code maybe we could test with just one point here and see uh offset here list right okay here maybe maybe my offset was not okay so here you can see this function it's I'm calling for two points this point it's just a offset and now I have two values but to produce the time series as we have here we need to change this code because this table for me the right way to create this table it's actually expand the number of lines right so you can if I send two coordinates I would have like 106 lines so it's it's possible
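To make the pattern from this answer concrete, here is a self-contained sketch (not the notebook code): a small function that samples coordinates from a remote COG with rasterio, called in parallel over a list of URLs, with the results collected in a pandas DataFrame. The URLs and coordinates are placeholders, and the coordinates are assumed to already be in the coordinate reference system of the rasters; sample() accepts a list of (x, y) pairs, which is one way the multi-point case could be handled.

from concurrent.futures import ThreadPoolExecutor
import pandas as pd
import rasterio

urls = [
    'https://example.com/data/ndvi_2018_summer_cog.tif',   # placeholder
    'https://example.com/data/ndvi_2019_summer_cog.tif',   # placeholder
]
coords = [(4321000.0, 3210000.0), (4325000.0, 3215000.0)]  # placeholder (x, y)

def read_pixels(url):
    # sample() takes an iterable of (x, y) pairs, so several points
    # can be retrieved from the same file in one call
    with rasterio.open(url) as src:
        values = [v[0] for v in src.sample(coords)]
    return url, values

with ThreadPoolExecutor() as executor:
    results = list(executor.map(read_pixels, urls))

df = pd.DataFrame(
    [(url, x, y, v) for url, vals in results for (x, y), v in zip(coords, vals)],
    columns=['url', 'x', 'y', 'value'],
)
print(df)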
|
Software requirements: opengeohub/py-geo docker image (gdal, rasterio, eumap) Why Cloud Optimized GeoTIFF? This tutorial shows how to generate COG files using GDAL, how to provide COG files through S3 protocol, how to access remote COG files in Python, and how to use the QGIS Eumap Plugin.
|
10.5446/55233 (DOI)
|
So, hello everyone, my name is Luka, I am doing the training session about introduction to spatial and spatiotemporal data in Python. So the outline of this workshop: first, I would like to start with some spatial referencing basics, because I found that a lot of experts that are proficient in GIS actually sometimes forget the basics of coordinate reference systems and what they mean for the data, and even myself, with a background in geodesy, I have to remind myself of this sometimes. Next, we will continue with raster IO and manipulation, and after that with vector IO and manipulation in limited capacity, because the rest of the workshops are mostly focused around points, so we will limit ourselves to that. We will then compute a basic time series and, if there is time, go through some eumap convenience and performance utilities. So to continue, what is actually spatial data? It's data in a spatial context. When we say spatial context, we usually mean the data is placed somewhere on the surface of the earth, but spatial context is more than that, because we build multiple layers of abstraction around the earth's surface. First there is the surface of the earth, then you have a terrestrial reference system, then you have an ellipsoid, and then after all that you have a surface that you project the ellipsoidal coordinates onto. So yeah, we need all of that to put data in a spatial context. Rasters are positioned basically with multiple parameters: you have the coordinates of the upper-left corner of the raster, and you have the size of the pixels, which is actually defined by four parameters, two of which, the B and D rotation terms, are usually zero. And maybe some of you recognize this image below on the left side, it's from the Wikipedia page on Esri world files, which is basically what positions a raster in the spatial context: it's an affine transformation. And vectors are positioned in a bit of a simpler way, which contains more data, but the data itself is written into the vectors: it's just coordinates for each vertex. So yeah, the X and Y coordinates in the context of geospatial data are a reference to the projection surface. This is also true even for 3D data, because elevation data is a thing of its own, basically, usually. So the quality of a coordinate reference system really depends on what you are using it for. Basically for mapping, topographical mapping or web mapping, like navigation maps and that, you would use a conformal projection because it preserves angles, so things retain the shape of how you know them. But for quantifying surface properties, for example, you would use equal-area projections, like we use for Open Data Science, because that's what we're doing here. And why does any of this matter? Because we use data from all kinds of sources and we forget that it might not be compatible with one another. So if you have continuous data, then if you project it into another CRS, everything is probably fine because you interpolated; but if you have nominal data like, for example, land cover, you cannot interpolate it, so you do nearest-neighbour resampling and you get pixels that are of largely different shape, and if you compare them with another, for example, nominal dataset, your results would not really hold water.
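Going back to the affine transformation mentioned above, here is a small sketch with made-up numbers of how the six parameters of a world file or GeoTIFF geotransform map pixel indices to projected coordinates:

# a, e = pixel size (e is negative because rows grow downwards),
# b, d = rotation terms (usually 0), c, f = upper-left corner coordinates
from affine import Affine

transform = Affine(30.0, 0.0, 4500000.0,     # a, b, c
                   0.0, -30.0, 3300000.0)    # d, e, f

# projected coordinates of the upper-left corner of pixel (row=0, col=0)
print(transform * (0, 0))      # -> (4500000.0, 3300000.0)
# pixel (row=100, col=250): the transform expects (col, row) order
print(transform * (250, 100))  # -> (4507500.0, 3297000.0)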
And also with vector data, there's things like old coordinate reference systems, like for example, in Croatia, we have before the current system, we had inherited the system that was from the Austro-Hungarian Empire, which had a local ellipsoid and people are still having trouble to this day converting data from before to our new system because you basically get vectors that are 400 meters away from where they should be and stuff like that. So yeah, all of this being said, you won't have to deal with this during the course of these training sessions because all of the data is in the same system and all of the pixels are aligned in exactly the same way. So yeah. So yeah, let's continue with the practical part. Okay so before we start, if you joined the previous session with Landra, you don't have to do this but if you are just joining us now, please update your notebooks like this. So you go to this directory and just type it pull. So okay, we can begin. So yes, again this session is very introductory. So if any of you are joining this just because you are joining the entire Python track, then sorry if you are not going to see anything useful but yeah, so let's begin. So yeah, we'll cover the basics of accessing basically and manipulating data in Python using the standard tools which are REST, RIO, Numpy, Geopandas and also UMAP which has some performance and convenience oriented functions like Landra already mentioned. So yeah, okay. Let's clear all the content. So let's begin. We will gather some files. I'm not sure if, yeah, Landra already mentioned all of this. So this is, we will use this style, this style ID, this is the data there. We can just go ahead and run this. We'll get a bunch of file paths. Then we can open the second file in the list and then we get the dataset reader. Data set reader is a REST, RIO object that enables us of course to read files and contains file metadata. So if you run the next cell, you see there's a driver, Geotiff. There's the number of bands from the file, the shape of the REST, the coordinate reference system and the transform. No dates of any as well. You can actually see all of this in the REST profile. Maybe don't print it. See the display. No, it's the same, never mind. So now if you read the data, you can put the number, the band index here. Here there's only one band, so it doesn't matter. If you do not put the band index there, it reads all the bands and just if there's only one band, it just adds one more dimension to the array. So we're going to go ahead and do this. So when you read the data, we get an umpire array. And we can do with it whatever we could with other umpire arrays. So we can do this. We can compute with the methods of the array. We can also do some stuff which we'll get to later. We will do that, compute some statistics and then plot with the plotter from your map. So yeah, this is the REST there. I think maybe the picture is a bit big. So if anyone is wondering why the fixed size is only one number, if you're familiar with Matplot Libits because the other axis is computed automatically to preserve aspect ratio of the RESTers. So yeah, here we have a, what was it? It was a blue band, I think. Yeah, blue band. Well, let's make it blue. Okay, so here's the blue band. So now we get to do some stuff with NumPy because NumPy does, enables you to do computation basically parallel under the hood. You don't have to think about parallelization and does it on its own with BLAST libraries. 
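A compressed sketch of the reading steps just described, with a placeholder file name instead of the workshop data:

import rasterio

with rasterio.open('blue_band.tif') as src:   # placeholder path
    print(src.profile)    # driver, dtype, width/height, CRS, transform, ...
    data = src.read(1)    # band index 1 -> 2D NumPy array

print(data.shape, data.dtype)
print('min:', data.min(), 'max:', data.max(), 'mean:', data.mean())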
And one of those things we can index, we can compare arrays to a number in a vectorized manner and then index them with what we get from there. So yeah, here we'll compute the 20th and 80th percentile. If you're concerned with the usability of this, don't be, this is not about doing something particularly useful. It's just to go through some things one can do with RESTers, with RESTorio and NumPy. So yeah, we'll compute the percentiles. We'll compare the data with the high and the low percentile. Then we will combine the index with the bitwise or, so bitwise means you compare basically the indices element by element. So you have two arrays of Boolean data and you compare them element by element so you get a combined index. And then we just, yeah, we can calculate the percent of data outside the bounds by, yeah, this code. So yeah. So this is what we get. The low percentile, oh sorry, this should say 20, not 5. And this should be 80. So the low and high percentiles. And this index is, like I said, an array of Boolean values. So we can then use that array. So we will first create a copy of the data so we don't soil our original array and we will index it with this. And say that all of these pixels that are true here should be no data and then plot the results. Now again, reduce the size a little bit. Yeah, okay. So you see now that a lot of the raster is now no data. We can do something that's usually more useful and actually clip the values to the low and high percentiles like this. So we index first with the low index and say that the values should be low and then with the high index and that they should be high and we can then plot the results again. Yeah, again, let's make this blue. Yeah. Okay, actually let's not make it blue because you can see the difference here. I'm going to go back to the previous image and actually just let's plot them next to each other. Data and new data. Let's add another title here. We don't need the F there. Yeah, okay, you can see the title very well. Let's do this. Yeah, okay, so you can see the reduce of variability in the data because the right one is clipped. So there's something we can notice here as well is that this corner here and this cut out here that were previously no data are now actually clipped because no data of this raster is zero if I remember correctly, which is lower than the low percentile and we did not do anything to account for that. So what we can do is this. Restrao enables us to, we could also compare just the data to the no data value and then produce an index that way, but the more efficient way is to use the read masks from Restrao. So we read masks from the same band. We do this as type of to make it a bullion area because you get you in date, I think. If you don't do that, then we can do this again. So we get only the pixels where there's data. So we can compute only on them. So we get data only. We do again, we do the indices. We compare the data pixels to high and to low. We again clip them. We make another new data array. We fill it all of it with no data and then we can fill only the pixels that under the data mask with what is contained in this, the data only array. And again, let's actually let's plot the old data here as well again. No, sorry. No, wait. So if you can see there, no data is now accounted for. So we have made a new raster and let's now go ahead and write it into a file. So first we will create a directory with this part here. Then define the path of the output file. 
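Before moving on to writing the file, this is roughly what the percentile clipping with the band mask looks like in a few lines (same placeholder file name as above; the 20th and 80th percentiles follow the text):

import numpy as np
import rasterio

with rasterio.open('blue_band.tif') as src:        # placeholder path
    data = src.read(1)
    mask = src.read_masks(1).astype(bool)          # True where there is valid data
    nodata = src.nodata if src.nodata is not None else 0

low, high = np.percentile(data[mask], (20, 80))

new_data = np.full_like(data, nodata)              # start from an all-nodata array
new_data[mask] = np.clip(data[mask], low, high)    # clip only the valid pixels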
Now we can use the rasterio dataset writer as a context manager, just as you would with a regular open() from the Python standard library, but besides the output path and the mode — the write mode here — we need the raster profile. If you remember the dictionary that contains all of the raster metadata, we can just pass that here as keyword arguments, because we haven't changed anything other than the data itself: the data type is the same, the CRS is the same, the raster size is the same, everything is the same except the data. So our new raster has a profile identical to the old one. Let's go ahead and write it — you can see the dataset writer object here, and it wrote to the file. Just to check, we can use the plotter with the file path to see what was written, again reducing the size of the plot. There we have it. Now on to some vector data. For that we will use the geopandas library, which is basically pandas for spatial data: a GeoDataFrame differs from a pandas DataFrame by having a non-optional geometry column on which you can run geometric operations. We will read the land cover samples that Leandro mentioned in the previous session, print a few data points and then look at the points themselves (ignore the runtime warning there). The CRS is the same as the raster data, here is the number of points, and these are our points. We won't actually use any of the attribute data in this data frame — in this session we only use the geometry. It was mentioned that ipyleaflet is useful for visualizing spatial data in Jupyter, but if you need a quick-and-dirty way to compare the extents of datasets, this is one: you import box and MultiPolygon from shapely.geometry — Shapely is the library geopandas uses under the hood; well, nowadays it is either that or PyGEOS, but we won't get into that here. You create a box from the raster extent, which gives you a polygon, and then do the same with the points. The unary union is for this: if you have a relatively small dataset you can turn the entire data frame into a single Shapely geometry, so that it visualizes easily in Jupyter and you can compute bounds on the whole thing. So we do that, we have two polygons, and you can see our raster is here in this little corner, and these are the points; the other points are just here and here. What we will do next is take only the points that intersect the raster. We create an index using the points' intersects method: you pass it another geometry and you get back a boolean array that is True or False depending on whether each geometry intersects the one you passed. Let's print the number of points we get now and also visualize the subset — these are our points now. We can also write vector datasets. Right now we have to do this because there is a bug in the current version of Fiona — if you don't wrap it like this, at least with our point dataset, it produces an error. What matters here, other than the error wrapper, is that you call points_subset.to_file, define an output path, and say which driver to use. You only have to define the driver if you are not writing an Esri shapefile, because that is the default. So we have now written that to a file.
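A sketch of the two writing steps just shown — a raster written back out with the source profile, and the intersecting points written to a GeoPackage; all paths are placeholders:

```python
import os
import rasterio
import geopandas as gpd
from shapely.geometry import box

raster_path = "odse_tile/blue_band.tif"     # hypothetical input raster
points_path = "landcover_samples.gpkg"      # hypothetical land-cover sample points
os.makedirs("out", exist_ok=True)

with rasterio.open(raster_path) as src:
    profile = src.profile                   # unchanged metadata: dtype, nodata, CRS, transform, size
    new_data = src.read(1)                  # stand-in for the clipped array computed earlier
    bounds = src.bounds

with rasterio.open("out/blue_clipped.tif", "w", **profile) as dst:
    dst.write(new_data, 1)

points = gpd.read_file(points_path)
subset = points[points.intersects(box(*bounds))]        # keep only the points inside the raster extent
subset.to_file("out/points_subset.gpkg", driver="GPKG") # driver only needed when not writing a shapefile
```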
So, on to computing raster time series. As Leandro already mentioned, there is a helper in eumap that reads a bunch of rasters in parallel, multithreaded, into a single array, so you don't have to go through all the files manually and then stack the arrays yourself. We will take all of the spring rasters, red and near-infrared, for the tile we have been working on. Here are the shapes — as you can see, these are stacked like a multi-band image rather than having the time axis at the front like a list of raster datasets. These are just arrays, so we can compute with them directly, and this is how you compute an NDVI time series from the time series of red and near-infrared. We can also index one pixel and plot the series for it. We can plot the entire series with plot_rasters as well — I'm not sure we still have to do this, I think Leandro may have patched it last night, but I'll do it anyway: move the axis that contains the bands to the front, so the shape of the array looks like this. Now we plot the rasters — I can increase the size a bit. There we have it; we didn't plot the entire series because there are 20 rasters, but we could try. Now we can compute with the series — for example the difference of each year to the previous one, which is just differencing the series against itself with a shifted index: we take the last 19 images and subtract the first 19 from them. Then we plot the results, and you can see the differences from each year to the previous one. You could instead compute differences from the first year, like this — but let's go back to the previous-year differences. We can use the eumap helper function to save the rasters in parallel: we define the file names here and write the files. We also need to give it a path to a reference raster, which save_rasters takes the profile from — remember the raster profile you need when writing rasters; instead of passing that, you give it a reference raster to copy the profile from. We basically unstack the data again into a multi-band image, because that is what save_rasters takes. We should also define nodata here, because the default is zero and the difference in NDVI from one year to the next can definitely be zero, so we change that. We also change the data type, because the GeoTIFF driver does not accept float16, which is what our data is — we computed it from 8-bit integer values, and it tries to keep the memory usage low by using the smallest possible float. So let's save the files; save_rasters just returns the file names it wrote the data to. What we can do now with these files is use the overlay helpers from eumap to overlay all of the rasters at once with the geometries. This optimizes things a bit, so let's run it. Is it done? Yes. Basically you give this class a bunch of geometries as a GeoDataFrame and a list of the files you want intersected, and it does the overlay for you. So now we have a time series for each of the points — you can see the column names are just named after the years.
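A minimal numpy sketch of the NDVI time series and the year-to-year differencing; the red/NIR stacks here are synthetic stand-ins for the arrays the eumap reader returns (shape (rows, cols, years), time on the last axis):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)                       # synthetic data just so the snippet runs on its own
red = rng.uniform(0.02, 0.20, size=(100, 100, 20)).astype("float32")
nir = rng.uniform(0.20, 0.50, size=(100, 100, 20)).astype("float32")

ndvi = (nir - red) / (nir + red)                     # NDVI per pixel, per year
ndvi_diff = ndvi[:, :, 1:] - ndvi[:, :, :-1]         # each year minus the previous one (19 difference images)

plt.plot(ndvi[50, 50, :])                            # temporal profile of a single pixel
plt.show()
```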
So let's plot the series for a single point. We sort the columns first, because they are not currently sorted by year, then take the series from a single point and plot it — there is the series at this particular point. Now on to some convenience functions from eumap. Rasterio allows windowed reading of rasters: if you define a window, you can read the raster with that window and only the data from the window will be read. You need a column and row offset plus a width and height in pixels, and now we have a 5-by-5 pixel dataset. What matters here for large files is that rasterio has to read an entire block of the raster before it can return that particular window, and if a window intersects multiple blocks, it has to read all of those blocks before giving you the data. If we look at the number of block windows our test raster has — we can call block_windows and expand it into a list, because it is a generator — there are 125 blocks. What we can do with eumap is read a larger raster block by block, not worry about any of that, let it run in the background, and just define what we would like to do with the data from each block as it is read. We import mapping from Shapely and take the URLs for the Landsat red and near-infrared for 2018 — this is the summer composite. We define the file URLs and define the geometry; the geometry has to be given here in the GeoJSON schema, which we get with shapely.geometry.mapping from our Shapely geometry. This is the size of the raster we are going to read now, as opposed to the 1000-by-1000 pixels we were reading before, and this is our geometry. From eumap's parallel blocks module we import the block reader and the block writer, and that's basically it. All we have to do now is define NDVI as the function that will be computed for each block. We initialize the reader with a reference file — all of the files that are read have to have the same transform and identical blocks; this is for reading those kinds of datasets — and we give it the red file as the reference. Then we initialize the block writer with the reader. You can also omit this and just initialize the writer, which will then initialize a reader with the first file, but I do it this way to illustrate that the reader is reusable: it initializes by computing geometries for each block, which can take a while, so if you are going to reuse the reader object you should initialize it on its own. So run that and wait a bit — let's check — oh, it's not done yet; I should run this. OK, now it's done. We defined our NDVI function here, defined an output file, then called the writer's write method with the source paths, the destination path, the geometry, the function we want computed, and of course the nodata value — because again, I think the default is zero — and the dtype. Now let's see if it has written our file. Yes — so this is the same block, but computed from very large files that are hosted in S3 buckets; you do not have to download the entire file to work with it.
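A sketch of the plain rasterio window read and the block layout mentioned above, with a placeholder path:

```python
import rasterio
from rasterio.windows import Window

raster_path = "odse_tile/blue_band.tif"              # hypothetical path

with rasterio.open(raster_path) as src:
    win = Window(col_off=100, row_off=100, width=5, height=5)
    patch = src.read(1, window=win)                  # only this 5x5 window is returned
    blocks = list(src.block_windows(1))              # internal block layout (a generator, hence list())
    print(patch.shape, len(blocks))
```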
For the final part, I would like to talk about some convenience abstractions around the catalog of the data produced — and also used — by Open Data Science Europe. As Leandro mentioned, we have the catalog in GeoNetwork, but from a Python standpoint you would have to use CSW from OWSLib to access that directly. Here we have a Catalog object that we import from the eumap datasets module, and we can search with it. The catalog also ships with a fallback copy of basically the entire catalog as a CSW, distributed with the code, so if you are having connection trouble — and GeoNetwork can also be a bit slow — you can just use that. That is what we are going to do, because it is faster, and the fallback is kept up to date, so there shouldn't be any difference between the two right now. What we get back is a resource collection. The output is grouped by metadata: all of the resources that have the same abstract, theme and title — this one is the quarterly Landsat NDVI — are grouped together, but you can use the results object as you would a list, so we can do something like this to get the first three of them. If you look at just one of these: the result objects are basically strings, so you can use them as URLs for files with rasterio and the eumap helpers, but they do contain additional metadata, so if you want only the metadata objects you can do this — here is the metadata — and there is also the metadata of the collection object. If you want to access the metadata groups of the results: index zero gives us the quarterly Landsat NDVI, index one gives us this, and — oh no, there is no three. This output is a bit long, so we'll scroll down. You can also exclude results in the search, so you don't have to sift through them manually later — you just use the exclude keyword. If we exclude the trend layers we basically get just this; let's check the results — yes, only the first one, again with the long output. We can also search by year: here we get the quarterly Landsat for 2019. If you want to search for multiple years you can pass a range, for example from 2015 to 2020 — this is non-inclusive, so we should be OK — and then the year is added to the metadata and you get groups of resources by year. If we do this, that's just the first group, so only the rasters from 2015. We can now use these resource collections to read directly from the files with read_rasters. read_rasters also takes a window object, so we define a window like this: rasterio's windows submodule has a from_bounds function which takes the bounds, so we give it the bounds we have been using so far — the extent of the test raster that is included in the VM. We also have to define a transform, which we can just read from one of the files: we take the first file and, if you remember, the transform is part of the metadata the rasterio dataset reader contains. So you define a window here and use read_rasters as we did previously, just passing the window through the spatial window keyword argument. And this part here: the catalog result resources should behave like regular strings, but there is currently a bug with that, so we just convert all of them to strings.
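A sketch of building a read window from geographic bounds with rasterio, as used with the spatial-window argument above; the URL and the extent values are placeholders, not real catalog entries:

```python
import rasterio
from rasterio.windows import from_bounds

reference_path = "https://example.com/ndvi_landsat_2015.tif"   # hypothetical URL; a catalog result used as a string
left, bottom, right, top = 4_750_000, 2_640_000, 4_760_000, 2_650_000  # placeholder extent in the data CRS

with rasterio.open(reference_path) as src:
    win = from_bounds(left, bottom, right, top, transform=src.transform)
    data = src.read(1, window=win)                   # only the window is requested from the remote file
```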
So we run this, and then we can plot the results — OK, this is a bit too small, let's just plot it here instead. Sorry. So these are the windows, read directly from the rasters that are hosted in S3 buckets. OK, this is the end of the planned section, so we have some time left if you have any questions, or if you would like to suggest anything we should try. Are there any questions on Zoom? No? OK. Leandro, would you like to use the time left to go over the things from your last session that you had to skip? [Leandro:] I can just explain it. I think that's a good idea. You already covered some parts, but I can open it and explain a bit. Just a short recap: we presented how to access the data, and how to read and write parts of it, using the local data that we have in the VirtualBox machine and some of the remote data as well. This is really the first part — reading the points, reading the raster data — and in the afternoon we will use these same functions, but in the context of spatial and spatiotemporal machine learning. All of the functions we presented mainly implement these input and output operations, dealing with the GeoTIFF files. In my presentation this morning — here you can see the part I didn't present, but it is the same method that we use here to read these images. You can use the same approach; you just need to change the file names, and this utility function of eumap will sort all the files chronologically, because the file name has the date inside it. So this function organizes all the files in chronological order, and you can read them all with read_rasters, which is fully optimized — it reads in parallel and you can control how many threads it uses — and here we read all 84 images. This is the NIR, the near-infrared band of Landsat; it is a fully harmonized product, so you have it for the whole of Europe, and you get a complete time series with four observations per year. We have 20 years — actually 21, because it is 2000 to 2020 — multiplied by four. What is interesting is that this is just a numpy array whose last dimension is time, so it is like a data cube where each slice of that last dimension is one date. With this function here I am plotting a time series: I take a random pixel — one position in this data cube structure — and you can see how its values change over time. Tomorrow, in the cloud-optimized GeoTIFF training session, we will do the same thing with the real data, accessing this temporal response for the NDVI directly through the cloud. Here it is just local data, just a part of Europe that we prepared specifically for the training sessions, but you can use the same approach and the same methods, point them at the real data we are hosting in the cloud, and plot the whole NDVI time series in our app, for example.
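What the eumap multithreaded reader does conceptually — read each single-band file and stack it along a trailing time axis — plus the per-pixel plot and the kind of temporal reduction described in this recap; the file pattern is hypothetical:

```python
import glob
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def read_stack(paths):
    """Read single-band rasters into one (rows, cols, time) cube (no threading here)."""
    arrays = []
    for p in sorted(paths):          # sorting works when the acquisition date is in the file name
        with rasterio.open(p) as src:
            arrays.append(src.read(1))
    return np.stack(arrays, axis=-1)

cube = read_stack(glob.glob("landsat_ard/nir_*.tif"))        # hypothetical file pattern, e.g. 84 seasonal images
plt.plot(cube[500, 500, :])                                  # temporal response of one pixel
plt.show()

median_img = np.nanmedian(cube.astype("float32"), axis=-1)   # reduce over time: median (or nanmax / nanmin)
```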
And here you can compare it across different years. This method is just a helper to plot the data using the plot_rasters plotter that we saw: you pass it two years and you can compare how the image looks — the four seasons — across different years. And here is an operation over these images, a sort of reduction: considering the whole data cube you can do multiple operations, in this case a median, but you could use a maximum or a minimum calculation instead. Basically it is a reduction over this data structure — you have two dimensions for space and 84 for time, each of them specific to one season and one year — and you can derive multiple statistics from it. For example, this code calculates the median value across all of these images, across the whole time series; you could take the maximum or the minimum instead. It is an easy way to work with this data, and here it is just an optimized way to access it, but all of the functions available in numpy are also available to process and deal with this data, because we created these eumap functions just to provide some helpers and to optimize specific parts of the workflow — it is fully compatible with what we already have in Python, numpy and scikit-learn, as you will see this afternoon. That is the last part of the presentation. You can also save the raster files and open them in QGIS, which is nice, because when you save a file — you read data into memory, you have a numpy array, you can process it and do multiple operations, but to save it as an image again you need to set several parameters: the projection system, the data format, and so on. This save_raster method uses the profile of one base image and copies all of those geospatial parameters, and you can save multiple files like that as well. So that is mainly what we wanted to present this morning: what data we are processing, how we read it, how we write it, and how you can access it using Python. It is a preparation for the afternoon, because in the afternoon we will use this data to build a proper machine learning model and classify this Landsat data into different land cover classes. I don't know if you have questions — otherwise we can stop the session, have lunch and prepare for the afternoon. Sorry — OK, let's go back here; if we don't have more questions, I think we could start with — yes, I can actually execute it. I'll open mine. I was rushing earlier, but as we have time now I would like to explain the dataset preparation, because it is nice context for the machine learning model. This is mainly the data we have been working with until now, the Landsat ARD images. There is a nice publication about the dataset, but mainly we download it from the University of Maryland, as I said, and we get all of the 16-day tiles: for the whole world, this product provides one integrated image every 16 days.
By integrated image I mean it puts together all three Landsat satellites that are available — Landsat 5, 7 and 8. It is a nice, harmonized product that aligns all the pixel values, and when you consider that three different satellites are acquiring data from the Earth's surface, you of course have different problems, and it is a real challenge to put all that data together. So it is a harmonized product, which is nice, but even so you still have missing data and you still have clouds. What we do is take this data and organize it by season — we have the date intervals here, so for example between the 21st of March and the 24th of June we will have several images, maybe six or seven; in winter, of course, there will be fewer because of cloud problems, and it changes across the seasons, but for each of these date ranges you will have multiple images. Mainly what we do here is remove all the clouds: there is a mask, an indication of which pixels are cloud and which are not, and we use this mask to clean the data. But of course, if we remove the clouds — this is an optical satellite, it does not see through clouds — you end up with gaps, and missing data is a problem for many applications, so it is better, actually mandatory, to put some value there if you want to use the image. One option is to expand the window: imagine I have cloud here; I can expand my window and take all the images available in, say, 2020, and maybe some of those images have valid pixel values. It is not guaranteed, because it is a combination of when the satellite passes and whether there are clouds — the climate conditions — but you can expand it. Of course, if you do, the final image will have pixels derived from very different observations, which can mess up the data, and that is why it is really important to harmonize all of these observations. In the context of the project we decided on a different approach: we kept the seasons and we gap-fill using different years and neighbouring seasons. I did not prepare a tutorial about it, but it is available in the official documentation — in the eumap documentation we have these tutorials, and here is an exercise on how we implement the gap filling; this is actually the code that we run, and you can see where the original data is and where we gap-filled the observations. Mainly it uses a moving window to take observations from different seasons and different years and fill the gaps. Of course, you need to be aware that you are introducing a new observation there, which in some applications can cause problems — imagine a deforestation case: if you gap-fill, maybe you do not see a real deforestation event, because you are using a value from the past. But at some point we will see it.
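A rough, illustrative sketch of the gap-filling idea just described — borrow the same season from neighbouring years and take the median. The real eumap gap-filler is more elaborate, so treat this only as a reading aid, not as the production algorithm:

```python
import numpy as np

def temporal_fill(cube, nodata=0, n_seasons=4, radius=2):
    """cube: (rows, cols, time) with seasons ordered within each year. Illustrative only."""
    filled = cube.astype("float32")
    filled[cube == nodata] = np.nan
    out = filled.copy()
    n = filled.shape[-1]
    for t in range(n):
        # same season in the `radius` previous and following years
        idx = [t + n_seasons * k for k in range(-radius, radius + 1)
               if k != 0 and 0 <= t + n_seasons * k < n]
        if not idx:
            continue
        neighbour_median = np.nanmedian(filled[..., idx], axis=-1)
        gap = np.isnan(out[..., t])
        out[..., t][gap] = neighbour_median[gap]   # fill only the missing pixels of this date
    return out
```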
At some point the satellite will pass over that part of the Earth again, you will have a real observation of the surface, and you will see it. So it depends on the application, but here we are mainly dealing with data for the past — since 2000, with some observations going forward as well — and what we do is use the median value from past observations and from future observations. For example, to gap-fill 2010 we would use observations from 2012 and from 2008. Of course, we have limitations at the end of the time series, like 2021, and we are preparing a publication to explain it, but in the context of the project — producing land cover maps for the past — this data is prepared for that. And of course we have four seasons, and usually at least one of those seasons has observations without gaps over most of the area; maybe we are gap-filling the winter image, for example, because there are too many clouds in that season, while for the summer image we have observations. Since we use all the seasons within one year to produce the land cover maps, this problem is minimized. Here you have another approach — we actually tested three of them, plus one more that is not implemented here; this one is more like a linear regression across all the available observations in time, but you can see that our approach captures the seasonality better. We see these spikes because we only have four observations per year, but if you consider that this is 30-metre data, prepared for the whole of Europe, with all the clouds removed, it is a nice time series and it allows several applications to be developed on top of it. So let's go back — we will use this data a lot. Just to explain the last part: we did this for three percentiles. Within each season you might have four observations, three observations, maybe just one — that is the case for winter in most of Europe — but we calculate these three percentiles for every season. It is a straightforward way to organize this varying data availability into three ready-to-use images: if you have only one valid pixel, the three percentiles will of course be the same, but in Italy, for example, where you have better observations and better data availability, you get three genuinely different values for the three percentiles, because you combine all of the available pixels. The output depends on the data availability, of course, but it is an approach that allows us to generate this data and prepare it for the machine learning. The assumption is that when we train the model we use points from the whole of Europe, so we will have all of these different situations — points where the three percentiles within a season have constant values, and points where they take different values — and the model will handle it. That was the assumption here. Again, there are multiple ways to deal with this — we could use just one image for the entire year, which is an approach I implemented in the past.
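A minimal sketch of the per-season percentile compositing described above; which three percentiles the production composites actually use is not restated in the talk, so the 25/50/75 below are an assumption:

```python
import numpy as np

# season_obs: (rows, cols, n_obs) -- the cloud-free observations falling inside one season, NaN where masked.
# Synthetic data here just so the snippet runs on its own.
rng = np.random.default_rng(1)
season_obs = rng.uniform(0, 1, size=(100, 100, 3)).astype("float32")

p_low, p_mid, p_high = np.nanpercentile(season_obs, [25, 50, 75], axis=-1)
# With a single valid observation per pixel the three composites are identical;
# with more observations they differ -- exactly the behaviour described above.
```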
Here, though, we really preferred to generate the seasons and work with those; it was more of a methodological decision, and of course different strategies could be implemented. The important part — and I am repeating it because it matters — is that these satellite images, the Landsat data and the Sentinel data as they are distributed, are not ready to use. You need to do a lot of processing: you need to download them, remove the clouds, and do a lot of harmonization to combine observations from different satellites, and there are specific projects dedicated to exactly that. Here we use one of those products that already implemented it, and we build a new, additional product on top of it. If you use the satellite images directly you have a lot of work, and during the machine learning modelling you will probably run into problems, because even within the same satellite you can have different interference, different noise from atmospheric effects and so on. If your focus is really on developing models and using this data, it is important to have clean, harmonized, ready-to-use data; otherwise you need to implement all of that yourself. That is why we are doing it and making all this data available — to allow other people, researchers and organizations to develop applications on top of it. And of course we are improving the product: we are now generating a new version, 1.1. It is still not perfect — you can see some artifacts — but we are really improving it over the year, and we have the capacity to rerun it. While I am talking about it, I will open an RGB image here — we can use the data portal to see it on top of my tutorial, it is the easiest way to check these composites. This is a public URL you can access; this is the RGB data, I will change the opacity and zoom in somewhere. Yes, this is nice. We are here, and you can see this is not a Google Earth image, because each pixel here is 30 by 30 metres, so if you zoom in it will not switch to a better image the way Google Earth does — you have the base map here, just to show the difference — but this imagery is really enough for several environmental applications. What I want to show here is the summer image, the best mosaic we have available because of the data availability, and this is the median value; you are looking at three bands, red, green and blue. If you go back through the years you can see some changes, and you can still see some artifacts, some remaining noise, because of course the processing is not perfect; we try to minimize the number of artifacts, but sometimes there are problems in the cloud mask and it is really difficult to remove all of them. What is important is that we have an optimized workflow to do it and to update it — for example, if we find a better way to identify the clouds, we can run it and reprocess everything using the infrastructure we are presenting here. And what is nice: we are here in Wageningen, and this is Veenendaal.
And if you step through all these images you can clearly see some of the urban expansion here. This is only possible because we have one image per year — for each year we actually have four images, this is just the summer. In Google Earth you have multiple images for some areas, but you can't really do analysis with that. With this data, as we presented today, you can load it into memory, process it, derive statistics, calculate an index like the NDVI — and for the NDVI we put it all in the portal. I will turn it on now: for the NDVI you have the four seasons; I will turn up the opacity, and you can do the same thing and see how the NDVI changed. As we know, this area expanded — there was urban expansion here — and the NDVI changes, because in 2000 and the early years it was probably some kind of crop or pasture area that was later converted to urban, so the vegetation index changes over time. You can check the values here and clearly see them dropping; for this specific pixel the conversion probably occurred here. Again, this is data that we prepared, and we can try to identify it here — yes, you can see that from one year to the next, I will stop it here, there was an expansion. So we used this data and produced — I will show it because it is a nice example — these land cover maps. We prepared the Landsat data, harmonized it, removed the clouds, made it available and calculated the NDVI index, and using this data — it is not really raw data, it is a harmonized, gap-filled, ready-to-use product — but it is still just data, right? We just have a spectral value, a pixel value that represents some response of the Earth's surface; there is no direct meaning attached to that pixel — I don't know whether it is an urban area, a crop, a pasture. Using this data we actually produced, for all of these years, a land cover map: we classified these spectral values — my internet here is not helping — yes, now I have it. So this is the result of the machine learning model; you can see that this area was a pasture, I think. What is interesting is that when you look at the image you can identify it yourself — most of us know what an urban area looks like, we can see the buildings growing across the landscape — but for a machine learning model, and to produce a product on top of this data, we really need to train a model and provide training samples; there is a whole workflow on top of this data. If you click here you can see all the land cover classes and when this was converted to urban area. This is the result of the land cover classification we will do in the afternoon — we will produce something like this as output for the tiles that we have, using the same workflow, and you have the other classes here. That was more of an advertisement, a way to motivate you for the next sessions: in the afternoon we will really process this data and produce a derived product on top of this Landsat data, and you can use it for other
applications as well. Inside the project we use this data to map where the tree species are, for example; you can use it to map soil properties; there are several applications you can build on it. But it is important to emphasize that, depending on the scale and the area of interest — if you plan to do something like this for a whole country, for example — all of this data preparation and harmonization is hard work, and we are providing it as a service for different users, different organizations and possible applications. OK, do we have more questions — any questions, actually? OK. [Question about the NDVI differences: why is it subtracting the previous year here — isn't it the first year?] Yes, I think it is related to yours, I can explain — it is more about the Python syntax. [Chris:] So what I am reading here is that it looks to you like we are subtracting the last year from the first year, right? The Python syntax here means that we are not using the first year as an output — we use every year after the first, because the first year has index zero and the second one has index one, so when you slice from one onwards... yes, you get it now, great. Was that it? Yes — indexing from one would be nicer, but I come from a long line of people working on limited hardware, so... no problem. [Leandro:] Yes, indexes starting from one feel weird to me, but that is how our manager thinks. OK, any more questions? Then let's stop — I think it is time for lunch. Thank you all, and we will see each other in the afternoon.
|
Software requirements: opengeohub/py-geo docker image (gdal, rasterio, geopandas, eumap). This tutorial explores the basics of spatial referencing and reading/writing raster and vector datasets. It also shows how to access the datasets produced by the Geo-harmonizer with eumap, how to work with time-series datasets, and finally how to visualize the results.
|
10.5446/55234 (DOI)
|
Let's get back to the training sessions. This afternoon we will use two slots to present spatiotemporal machine learning in Python and how we implemented it in the context of Open Data Science Europe. The presentation will start with me, and later Chris will join to do some code demonstrations, as we are doing in all the sessions. We will have a break, and after that we will come back and discuss the ensemble machine learning approach that we have now implemented in eumap. Before we go to the code, I prepared some slides to provide some conceptual background about machine learning. It is important to establish a common understanding of what we are doing in a general way, because machine learning is used for very different applications — it is a kind of buzzword now, everybody is talking about it. After this general introduction I will explain how to do it specifically with geospatial data: what the approach is, what the data is, and how to implement a processing workflow for it. Specifically, I will show two demos, two real applications that classify land cover: one that is purely spatial, which I implemented in my PhD, and one that is spatiotemporal, which we are implementing now in Open Data Science Europe. I see it as a kind of evolution from purely spatial to spatiotemporal, and there are some important aspects of that I will explain. After this introduction we will have a hands-on demonstration, and we will actually execute code using mainly the eumap package — there is a class that we created, LandMapper, where you can implement all of this machine learning processing specifically for geospatial data. So let's start. I would like to start with a definition: what is machine learning? In a nutshell, it is essentially a subfield of artificial intelligence — they are not the same thing; AI has other subfields, like robotics and fuzzy logic, and machine learning is one subfield inside it. Most importantly, it uses techniques to learn from data and make accurate predictions. The whole point of machine learning is to train on some specific data and then make predictions on new data — in the field we call this generalization: the model receives data it has never seen and still produces good predictions. There are several ways to implement it, different algorithms, but in essence that is what it is. The other important part is "without being explicitly programmed". You could hard-code all of these rules yourself: as a developer, I could take an image and write a lot of rules to try to identify, for example, where the water is, work with the data and look for threshold values, and code that by hand. But machine learning does this for you: it creates the parameters, analyzes the data and tries to establish those rules automatically — without being explicitly programmed. So within artificial intelligence there are other subfields, and machine learning is one of them; and inside machine learning you have another buzzword of today, deep learning. We will not talk about deep learning here.
We would need another session for that — although we will use TensorFlow, and it is not the case that every time you use TensorFlow you are doing deep learning. Nowadays all of these concepts get mixed: in artificial intelligence you have, for example, the natural language processing subfield, and today there are really accurate deep learning algorithms for that, so some of these subfields are changing a lot. Deep learning is a really hot field in research, with many algorithms and really nice implementations, but it mainly involves GPUs, convolutional neural networks, recurrent neural networks and so on. At the end of this training session we will use a regular artificial neural network together with random forest and gradient boosted trees, so in this session we stay within machine learning, not deep learning as such. Within machine learning there are mainly three categories of algorithms: supervised, unsupervised and reinforcement learning. Supervised learning is when you provide training data for which you already provide the answer. You need raw data — it can be images, text, tables, it doesn't matter — and you need to label it: some human being looks at the data and records the expected result. For satellite images this means indicating, on top of the image, the classes or the values you want to map — for example, you could provide coordinate points on the image saying this is pasture, this is coniferous forest, and so on, or you could provide polygons, but you need to provide it. So you need labelled data for the training, and the machine learning algorithm will analyze all of this data to produce the expected results, which you can then compare against. That is mainly what we will use here. Then there is unsupervised learning, where you do not need training data: you work with the raw data and try to find some hidden structure in it. For spatial classification you could analyze purely the values of the image and try to group similar values — a forest pixel in one part of the image has values similar to, or related to, forest pixels in another part. It is possible, but most of the time the results are not as accurate; accuracy is better when you have labelled data, although there is a lot of research in this field as well. And there is reinforcement learning, where the model learns from its mistakes: you have an agent that tries to do something, you have some way to measure whether that action produced the expected result, and it learns from the response to those different attempts.
I am not so familiar with that approach myself, and again, you can combine these — you can have supervised learning followed by reinforcement learning — but it is important to be aware of these categories and concepts. Here we will work with supervised machine learning, specifically a classification algorithm; we are not doing regression. The difference between classification and regression is that in classification we produce factor values — integer or class values — while in regression the model predicts a continuous value, so it is more about predicting some variable. In our case land cover is a class: we have exactly the same integer value for different pixels of the image. But if, for example, we trained a model to predict precipitation, the output would be continuous — each pixel could have some level of precipitation — and we would need a regression to implement that. The other part is about the parameters. Inside the model you have what we call the parameters: they are how the model implements all the rules learned in training to split and analyze the data and predict the expected result — to predict whether a pixel belongs to a class. I am talking about images, but you can do the same with text: nowadays Google Translate, for example, analyzes the whole structure of a text and predicts the output in another language. It is the same concept — the input dataset is text, and the model predicts text in another language — and if you think about how it was implemented, nobody created those hidden rules by hand; the model learned them from millions of training examples, in the case of Google Translate, and you can have the same for different kinds of applications. One important concept: when we talk about the model, the parameters are what gets trained. A random forest is basically a set of decision trees. The model analyzes the data — which could have eight or ten features or variables; in our case the features are images, we will use 84, but it could be any number of covariates — and it trains a tree, where each split of the decision tree divides the data in order to classify it into pasture, coniferous forest and so on. The way each decision tree is created — those split rules — are the parameters. The hyperparameters, on the other hand, are related to how you control the learning process: for example, whether we use 100 trees or 10 trees in the random forest — the number of trees is a hyperparameter, while inside each of those trees we have parameters that analyze and split the data and generate the output. Those are the parameters of each learner, and different algorithms have different parameters.
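A small scikit-learn sketch of the parameter/hyperparameter distinction just described, with toy data standing in for the overlaid samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 10))                 # 10 covariates per sample (toy stand-in for the pixel values)
y = rng.integers(0, 3, 500)               # 3 land-cover classes (toy labels)

# n_estimators (number of trees) and max_depth are hyperparameters: choices that control the learning process.
model = RandomForestClassifier(n_estimators=100, max_depth=None, n_jobs=-1)
model.fit(X, y)                           # the split rules learned inside each tree are the parameters
```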
But mainly what you need to remember is that the parameters are learned from the data — it is a data-driven process, derived automatically. You do not need to code anything inside the model; you only code around it, to control the process. So this is the expected supervised machine learning workflow: you have some raw data, and from that raw data you derive features — again, this could be anything, tables, text, images — and you have labels, which are the expected result for that raw data. Of course, from the raw data you may derive only some features; you can have a lot of raw data and use only part of it for machine learning. Then you have the model training and the model evaluation, and after training you have a trained model, which you can save and load, and which, most importantly, you can use to make predictions on new data. All that effort of creating the labels and the expected result currently needs to be done by someone — a human being — and then you can use the trained model to generate predictions. That is the textbook workflow, but for real-world applications it looks more like this — not so pretty. In real applications the raw data is much more complex than the benchmark datasets. By benchmark datasets I mean things like the digit images you get when you download the TensorFlow examples: they are organized, pretty and ready to go — you just take the data and make predictions, and you already have the labels. Most real applications do not have data organized like that: you need to prepare the input dataset, prepare the labels and the expected result, and do all the modelling work, but at the end you need to make predictions — otherwise it is not very interesting to create a model and use it to predict only a small portion of the data. What is the end goal of a real application? In our case the end goal is really to map the land cover classification for the continent of Europe, so we did all this effort, but at the end we need to predict: we take the model, really scale it up, and predict every pixel of Europe. There is much more complexity than the simple workflow suggests, but if I had to pick the two most important aspects: you need to prepare the data before you really start the modelling, and you need to be aware of what you plan to do with the final model — what your prediction strategy will be. You can put it behind a service that users access, as with Google Translate, which is a service in front of a machine learning model where anybody can paste some text and have it translated. For a real application, you need to be aware of how you will serve your predictions.
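A sketch of the "a trained model is a file you can save, reload and predict with" point, assuming the toy model and data from the previous snippet:

```python
import joblib

joblib.dump(model, "landcover_rf.joblib")    # persist the trained model as a file
model = joblib.load("landcover_rf.joblib")   # later: reload it and skip the training step entirely
predictions = model.predict(X)               # predict over new (analysis-ready) data
```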
You can do all the predictions in-house, as we are doing — generate the maps and make them available to the people who use them — or you can really serve the model so that other users can run new predictions themselves. There are several possibilities, and it is important to be aware of them. But let's go back to our simple example and expand it for geospatial data. In the context of geospatial data it looks more or less like this. You have the raw Earth observation data — as I explained this morning, Landsat data, Sentinel data; there are a lot of satellite images available, but you need to download them, organize them, remove the clouds, and do several preprocessing steps to produce analysis-ready Earth observation data. Using this analysis-ready data, you then need to provide the labels: what do you want to classify in this image — soil organic carbon, land cover, maybe just one class such as pasture, or water? You need to provide the labels because the best-performing models available today work with supervised classification, so you need to provide the samples, the training samples. The samples can be points, lines or polygons, it does not matter; a sample is just a vector that you overlay with the data to extract the pixels and put them into the machine learning. In this training session we will show an overlay of data — Leandro talked about it — and in eumap you have functions to take, for example, points that you collected in the field and overlay those points over different raster layers. In these training sessions we already did that, so we will work directly with the overlaid point data, where for each point we extracted all the pixel values. With that we will train models, and we need to evaluate those models — there are some strategies to do that — and after that we will have a trained model. When I say trained model, I mean a ready-to-go model: it is a file, a model that is able to make predictions without doing the training again. We will show this in the code. When you do the training you get a model, and this model knows how to process new data and predict, for example, the land cover. It learned only from the samples, but you can use it to make predictions on different parts of your data — there are some limitations with that, but the goal is really to train a model that can predict, for example, the whole year, as we are doing here, and then you can load this model, skip the whole training process, and just predict. Yes? [Audience question, roughly: is the idea that the trained model is applied to analysis-ready data rather than the raw data?] That's a good question — yes, actually you need this arrow here: you need to read from the analysis-ready data. I will fix that in these slides, thank you for noticing, because the training is really sensitive.
If you change the pixels even a bit, you change the predictions completely. There are some techniques — you can simulate noise in the analysis-ready data to try to predict on raw data — but that is a different type of research; here we are predicting analysis-ready data. So, one example. I did this in my PhD: I worked in Brazil using Landsat data, and we had more than 30,000 points for each year. These points were visually interpreted by three specialists, who classified them as pasture, forest and so on, and you can see the points here — they are spread all over the country. In my PhD I was part of a GIS and remote sensing laboratory in Brazil, and we actually generated this data from scratch: we developed a sampling design, spread the points over the country, and built a tool to help the specialists check the pixel values and decide whether each point was pasture, crop and so on. In that context it is quite specific, but what is interesting is that we had samples for the whole country and for all the years — and that is not common in most applications. So, considering that we had points for all years and the whole country: these little squares here are the Landsat scenes — 308 scenes to cover the entire country — and we have 34 years. What we did was train one model for each scene, so the training process happened per Landsat scene and per year; in the end we trained almost 13,000 random forest models, and we mapped each year individually. But given that we have all this data, it would be possible to train one single model — I could take all the points from all the years and the whole country and put them into a single machine learning model and train it. That would be millions of points, and maybe it could produce a better result. There are mainly two problems with that: you need the capacity, software and hardware, to do it — it is not easy to train a machine learning model with millions of points — and you need harmonized data. If you remember from this morning, if you work only with the raw data, Landsat has three sensors and they are different: an image from 1985 will not have the same pixel values, the same reflectance values, as an image from 2020 acquired by Landsat 8. So you need to work with harmonized Earth observation data, and there are two main products that do this — at least two, probably more: the GLAD Landsat ARD, which I talked about this morning, and another product that combines Landsat and Sentinel-2 into one harmonized data cube, but only with Landsat 8, so only since 2013 — I think they actually processed it from 2015. This is basically mandatory if you want one single model to classify, for example, multiple years and different parts of Europe. In Open Data Science Europe, that is exactly what we are doing.
So we took all the Lucas points and that it's a basically a well-known field survey here in Europe. And we combine it with some generated points from the old from from the other land cover maps Korean land cover classification. And mainly we are using all these points to train one single model, but it's important to emphasize that we have the capacity to do it. And we also have like a harmonized earth observation data as the glad Landsat R&D. And without gaps and all these parts that I was explained during the morning. So at the end, we are predicting dominant class probabilities and uncertainty. So in the end of this training session parts to you will produce it also in the Python and notebook. But we are also using more than one learner. So random forest, it's already a sample learners because inside of a random forest, we have several trees. So decision trees and it's why they call it like a forest. It's multiple trees. But we are also using the gradient boosted trees and the artificial neural network. So we will show how to do it in Python for the tiles that we prepared. And yeah, now it's time to go to the code and start to really do all classifying the land cover, considering the data that we prepared. So just to overview of what we will do mainly, we will train a regular random forest, pretty straightforward without any type of hyperparameter optimization. It will be just a purely random forest and we will predict it to see how it works and to understand how we map it's doing it internally. Later the second example will be the random forest with hyperparameter optimization. So you can test different combinations of hyperparameters to really predict it and benchmark all these different parameters and find what is the best one considering some evaluation metric. And the second part, we will train the random forest with probabilities. So if you think in the random forest, you have multiple trees. So each tree will produce like an output and you can use this output to derive like a kind of a probability. So and this probability, it will, it changes completely the output of the algorithm. So we will see it, but instead to just have a hard class, a dominant class, so it's pastury, forest, you can have like multiple probabilities and these probabilities will show, for example, maybe there are some, there is an area where you have like a not a high probability, you can have a high probability just for one class. So you have sure that the model is pretty sure that this is, for example, coniferal forest, but you can have like other pixel where the coniferal forest is 60% and the broadleaf forest is 40%, 30%. So there is some confusion there. And if you think in the image, man, it kind of makes sense because you had some limitations with the data that we are working. It's not possible really get one pixel and have around 100% sure that inside of that pixel we have just one single length over class because we can have in a transition zone. So there are several complications of it. And at the end, we will expand this and put more learners, so more three learners and explain how we are deriving the uncertainty and how we can improve this result. And this is actually our current workflow. The code that we will show here is actually the same code that we are running in our servers to produce the length over map available in the open data science Europe. So now I will pass the floor to Chris. I don't know if there is any question for this introduction. No. Okay. Should I take the microphone? 
Yes, please. Hi. Thank you. All right, let's look. Okay, we've got the virtual box to work. Oh, am I unmuted? Can you hear me? Oh, good. We're all set. Okay, thanks Leandro. So I will show you a little bit of the code and we'll see if you can also replicate some of this. Who is able to also join me with the code in the machine? In the virtual machine? One, two. Two people. Okay, so for the rest I will just show the code and then in your own time you can have a look if it's also applicable to your projects. So this is the Jupyter Notebook that we will look at today. And then here we see the example of the interface, the online interface that we use to also publicly make it publicly available. And here we have the reference to, because much of this work is from Martijn's work that's also here, which will soon be publishing on this as well. So that's also interesting, I think. So first of all, we load the main packages that we need. I will reset the kernel so you can see how it's working. It should be working. Let me also clear the outputs. Is there anyone here who worked with SK Learn libraries before? No one. I think I know some people who did. That's okay. Okay, so you'll be recognizing some of this code here. So we will be loading the EU Map package, which is also, as Salerno mentioned, the main package that we, the current workflow that we use for the data that we provide. And we use some standard libraries, something to find our files in the system, and to load these files into the environment. So let's load these packages. Then I will define the tile that we use, so we provide two example tiles that you can try. And this, I just run this to be able to find the data on my drive. So here we have the file system, and I put the directory of the data. So that's this directory. Okay, so let's run that. Then I have some training data that is here. It's a geo package. So that's basically a lot of these points that are mapped that are we know, okay, this is this land cover type that is all put into this database. And then we will load it using the geo pandas package. So now this data is loaded into the variable called points. So here we see just a regular table with Yeah, all the information. So here we see for this, for this point, we know the class is 112. And then this is the value that the distance to the coast. This is the weighted distance to the coast, I think that Martin developed. But also if we look further into the data, then we see the land set values, and also some the names and the survey year. And because this is a year, this is a, yeah, this is your reference data. So we here see geometry. And the nice thing for the geo pandas package is that we can also see what what what reference system is used. So this is just the, this is a geo pandas data, data frame that has this, these functions to to find all this information. And actually see what my reference system is for these points. Because if I if I look at these values, it's not is not the regular coordinate so here you can see how how should I translate this if I want to plot it in another kind of software. And I want to get some information about my data. So for instance, so I just make a kind of a bar bar bar graph about some of my values. So here for instance I have the class name and the survey year, and then I want to see how many points do I have in my training data just to get an idea of what, what I have here. So I have a lot of coniferous forest points and more San Heathland and agriculture with natural vegetation. 
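A short sketch of the loading and inspection steps just shown; the file name and column names are assumptions, not the exact tutorial data:

import geopandas as gpd

points = gpd.read_file("land_cover_samples.gpkg")   # hypothetical geopackage of training points

print(points.crs)                                   # reference system, needed to plot in other software
print(points["class_name"].value_counts())          # samples per land cover class
print(points["survey_year"].value_counts())         # samples per survey year
points["class_name"].value_counts().plot(kind="bar")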
And here I see when these points were measured so I have most data in 2018. And this we can, you know, I generally I would do a more in depth look at these data but for the, for this presentation I will just go ahead with looking at the random forest. And then I will, I will focus on doing just the spatial and then the under a will later also include the spatial temporal so then we will see. I will be focusing all the years. Okay, so I will look at the spatial temporal data actually. So here I am also including some, some escalern functions. So this is actually to do. I think we use. Yeah. So this is for a bit later where we use the hyper parameter optimization, I'll come back to that. Then the random forest layer classifier that's one of the classifiers that's included in the escalern package. And there's many, many types of classification algorithms, and they're all kind of standardized so we try to also enable you to use all these different classifiers which makes it really versatile in how you can use it also on your spatial data so I think that's that's a very cool feature of this package. So the K for this to do, yeah, K, K amount of faults on your data to get a, an interesting evaluation metric. So here I will initiate the estimator the random forest estimator with five estimators. I'm going to select points just from this, the tile that has my tile ID that I have defined before, and I will use a five fold cross validation for the training. And then I define all the, all the variables that I want to use for my training. So in, in my data set, all the columns that we saw, I will put the prefixes for all these columns here. And then I will set up the, that I know which, which variables I'm using to train my classifier. So this is something that, that was written into this package so you can really be quite sure about what you train on and how you, and it's easily configurable. So I have my target column, that's our land use class. So I will load the land mapper class with all these configurations. So I have my points data for my data frame with all the training data. I have the variables that I want to use so it will be selected from this data. And then I have the target. So what should be trained on and then the estimator, the, yeah, the class that will handle the faults and another parameter where I just select classes that have enough training points. So I'm just going to look quickly at this one so it will select only the classes that have the highest amounts of, of training samples because for these ones we cannot reasonably believe. I think that we can make a good prediction because we don't have a lot of training points. Can I just explain it because we included this parameter because when you do a cross validation, you need. You want to have a look. Okay, so when you, when you use like this, but this parameter, it's important because when you do a cross validation. Mainly you will divide like imagine you have all these points you divide in five regular groups, but you need to have enough validation points in each of these groups so I don't know maybe you have just one or two points for some class. And this point will be present only in one of these folds in one of these regular groups and sometimes some algorithms that doesn't manage well with that. So, you have like, you need to provide it because it depends off the, the, the algorithm you can classify with less amount of samples. But for example, I know that SVM has problems with that. XGBoost also has problems with that. 
So there, there is a limitation with the way that the validation, the cross validation it's implemented considering this regular fault because you need to have all the possible classes in each of these folds to do a proper validation. Okay. Thanks. So, again, we, so we also focus on that it's usable in a production environment. So maybe sometimes you don't want all the, all the outputs to be printed to the, to the, to the terminal, or whatever and then you can also put, you can switch off all the outputs. So that's also useful feature I think. So let me see if I run this, I run this. I run this block of code. So I get some information about what the land map is doing. So it's removing some samples because it doesn't, it doesn't comply to this condition that I specified as they under just explained. You can just explain that this is actually the percentage right. The percentage percentage. Yes. So it's the percentage for each class. Yeah, just just back to that logic just because I think it's easy to this one. Yes, just, yeah. So it's, if they are less than 5% of the total of the samples they will not be selected. Yeah. So they, these are less than 5% of the total of all the samples that we have. So these are the original classes, the numbers and then they are remapped to this because some of the packages in as killer and they need us to have, they cannot handle these kinds of numbers to do classifications. So then we will train train this one. I'm putting this number so that we have the same selection of folds for our training. So we can have a bit similar results if you run it on your own machine. Then I can look at the accuracy. So here we can see all the accuracy metrics. So we get a 56.6% overall accuracy. And then for each individual class. So if you want to know which one it is precisely you can have a look here, how they are mapped. And we will also print a confusion metric metrics that will show us some in a bit more visually appealing way. So here I am just creating because the other names are in the, are in the data that we provided. So I'm just creating a dictionary to translate the number that's mapped to the actual name so it's still understandable to us. So here we can see the true classes and then how the model is predicting them. So for instance, let's see, sports and leisure facilities are confused with broadleafed forests sometimes. Or yeah, that's happening sometimes. So this, this you can use to get some information about how your model is performing and maybe where you need to see if you can do some feature engineering to increase your accuracy for specific classes. Let's see. So then you can also access this in in a tabled way. So what you expected the class to be for every prediction that we did. So then you can generate any type of accuracy metrics that you would like to do with your own functions. So after training, then we can save this model so we can use it in a production environment or something somewhere else, where we don't have to retrain it every time. So I'm just going to save it to this, this file. So we can find it here so the, it outputs where the location is. So I'll show you where it is. So this is actually my model now so I can actually just put it on in somewhere send it to someone and he can also try to predictions with the same thing using the EU map package. And now I'm just going to reset the kernel to see if I can actually load and do predictions again with the same, same model. 
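A rough plain-scikit-learn equivalent of what the LandMapper did in this block: drop classes below a minimum share of the samples, get out-of-fold predictions for an honest evaluation, inspect the confusion matrix, and save the fitted model for reuse. The column names and file names are hypothetical; the 5% threshold mirrors the tutorial:

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import KFold, cross_val_predict

df = pd.read_csv("training_table.csv")                      # hypothetical overlaid samples
feature_cols = [c for c in df.columns if c.startswith("landsat_")]

# keep only classes holding at least 5% of the samples
share = df["lc_class"].value_counts(normalize=True)
df = df[df["lc_class"].isin(share[share >= 0.05].index)]

X, y = df[feature_cols], df["lc_class"]
rf = RandomForestClassifier(n_estimators=100, random_state=0)

# out-of-fold predictions for the evaluation metrics
y_pred = cross_val_predict(rf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(classification_report(y, y_pred))
print(confusion_matrix(y, y_pred))

# fit on all samples and persist the model so it can be reloaded later
rf.fit(X, y)
joblib.dump(rf, "rf_landcover.joblib")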
So I have to reload all these packages because I just killed the whole instance. And then I'm going to define the parameters again, because everything is gone. And then here I'm going to load my model again. So this into this variable. So, we prepared all the data into these directories here that Leon would just talked about so some timeless layers and also the, I won't open it because it's many files so it will take some time. But there's a lot of lens at all the lens at bands that learn would just explained for every season for every year they're all in here. And for all the quantiles, but these two functions they will find all these the past for a specific year that I want to predict so now I'm just going to predict one year. So I will find all these layers here, I just print five of them so for one year I have. Let's see. I put them together so I can import them into into my class into the landmapper. So, so I have 86 variables for this. For this prediction that I need. So that's just, I'm just, I have all the past to all the variables that I need into the in this variable. And then I need to rename them because the original ones they didn't have a year there. So now I have in every name in every data that has a year component I have the year also in there. So I want to only get the 2018 data. So, yeah, replace the year with 2018. So I can rename all these files so my, so the model can find the right data to predict on. So it loads the right files to do the prediction. So here I put all these, all these parameters into the model line. I don't, I'm not interested in the output. We just looked at it. So we're not interested in in how the renaming works, because it's a long list of names that will be replaced so it's a bit. That's a good point. So let's see what happens if we show it makes more sense actually so you see what. So here we see that the original name that I had in my model for this variable it was this one so it has no year components, but I want to find this one where it says 2017. So that's what this function did so I make sure that the names of the files are right. So this. Yeah, it's a general thing in, in I think machine learning to, you know, you have a lot of preparation on your data so this is how we help handle the preparation of your data and keep it kind of comprehensible. So let me do that one more time without. So we have one output. I think so now it's doing a prediction. So took about one second. So here's my output. So I can plot that output with also one of the plotting functions that we provide. So here we see what the classification found based on our training data and the model. Let's see if you can find that one. If it makes sense to us. So here it is in on a map. So here we have the Netherlands we are now right here. So here we are we here. I think this is us, we are here. Right here. Okay, so let's see what what what is it. So we are value zero. Seven. Okay, so we're seven. So let's see what it is. If that makes sense. Pastures. Well, we tried this was our first try. Okay, so let's see what this this is zero. So that's more than Heathland. So this can help you kind of see, you know, what's happening with my model. You have a question. Just about the credit sample. Yeah, we have it. But actually we, we have like, we have like, you know, we have all these package relies on the functionalities of signkit learn inside of this signkit learn. And you'll have a validation strategy that's used like a crew by being to split the data. So you have this C.V. group. 
And column. So what actually what we do in the, in our production work, we did that. So all the tiles that I presented in the morning. So they, they are like one. And then we have the C.V. group. So you have this C.V. group. And then we have the tiles that I presented in the morning. So they, they are like one 30 kilometers by 30. So we use it like the tile ID as the group column. So to do like this special, this the cross validation, it will be like sort of spatial presentation that will use this ID. So you can avoid, for example, train a model with samples from one tile and validate with samples of the same time. So, and, but it's, it's just like, you need to change the CV to group and fold. So it's, it's a, it's implemented, it is implemented in the site kit learn. What we did was just like a integrated integrated to this functionality. So you can use this one, right? Yes. So you can use this one. So now I use here somewhere. Let's see. You just use the, yeah, because here we are just using, for example, the, it's just one pile. So we could split this point in like soup tiles and create this blocking structure. But for, for this example, probably it will work, but we need to really define this in the data frame. So that's a, that was a good question. Yeah, good question. Thanks. Let's see. Yeah, so in here, I think you can see, yeah, now we select on the tile ID, but you can then use it to do a spatial cross validation. And the next thing is that so all these, all these things you can use them within the package, right? So you can, now we use a K fold, but now we spoke about this group K fold, but you can apply different kinds of strategies here. Does that make sense? Yeah, great. Okay. The models. Let's see. I think. Where are they? Let me look further. This one. Yeah, so you have, because this, this, this libraries is, they create like this base class, this base estimator. So all the algorithms that are implemented here, they have kind of the same methods. So our package actually use all these methods training predict and things like that. So for example, you have like the SVM so if you replace the rental forest for SVM, you will have the same structure that we are showing here, but with a different model, the different learner. I think on top and bottom, you can see that there. So you can use like a linear regression. So we will use more learners, but it's fully compatible what we have here in the. Yeah. And I think you will show an example where we use multiple learners to do a ensemble machine learning. Okay, so we have now looked at this example so our first try we didn't do any hyper parameter optimization we just train a very simple model on our training data. And then now we will try and do some hyper parameter optimization and see if our results are actually better. Yeah, we already saw this one so this allows is also as a learn functionality that we will also use. Then the rest, this is kind of similar to what we did before. But here we define our grid search so we will run over the different parameters that we want to test. We want to test maximum depth and the maximum features and the number of estimators. So we want to test all these combinations of parameters and the rest and we use F one score that's also a standard scoring in in the S K learn package. But here we can also define our own. I think we will do it a bit later. We just input the estimator. 
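To make the spatial cross-validation idea above concrete, here is a minimal sketch with plain scikit-learn; in the notebook this is done by handing the LandMapper a group-aware splitter and a group column, while here "tile_id" and the df/X/y variables are assumptions carried over from the earlier sketch:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

groups = df["tile_id"]   # hypothetical column holding the 30 km tile ID of each sample

rf = RandomForestClassifier(n_estimators=60, random_state=0)
scores = cross_val_score(rf, X, y, groups=groups,
                         cv=GroupKFold(n_splits=5), scoring="f1_weighted")
print(scores)            # no fold mixes training and validation samples from the same tile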
And then the only thing we need to do here is just add the high parameter parameter and that will allow us to actually do this. And then we will create search. Let's see if we can do this. So we get the similar outputs. And then now we will train. We will actually train the model 90 times and see which one is best. So that might take a little bit of time. Let's see. What the computer is working on it. Some nice green bars. So it's still running. So now it's almost done. I think it's done. Yeah. So now we can see if the prediction actually better than before. So we have. What is it about a 6% increase, I think, in accuracy. So it's, it's, it's something. It's an increase. So that's good. Then we will do the same. We will, we can plot the. Yeah, the, the methods to do some invitation of our results. And we can look again at our results. We can save the model again and load it again. And I think, did we get the output file. Or not. No, okay, we skip the prediction. Okay, but we can do a prediction model. Well, let's have a look first at the probabilities and then I will, we can have a look at also these files. Now I'm using log loss score. So this is an example of a custom loss function that we make. We also use the escalator and implementation of this, adapted to our, our case. And we do the same thing here, but here we also tell our class that we want probabilities. So because we use this random force with a lot of different predictors we can see if they are corresponding to each other or not and if they are all pretty sure about the class then the probability is high and otherwise, the probability is low, or could be lower. So we make prediction again. And let's see if there's different results. We need to train it of course. So now we use a different loss function so we will. It's a bit different to compare it. So here we have now our accuracy metrics but then get towards the last metric that we defined for each class again. So that's, so we get from the model we get different types of accuracy metrics that we can all access through through these probability metrics variable. And then we can also print them. All right. And then we can see if this loss function actually gives us a better results. By looking at the, by looking at the prediction results. So now I'm applying the same function so to predict with the layers. But here I'm putting all the years. No. So now we're still looking at just 2018. So here's the same function to find the right data and to put all that together. And then we're loading it into the landmapper class and producing a prediction but now not just the class but also the probabilities for the different classes. And we also provide the plot roster function to actually have a look at the results. So, yeah, it's of course, very rough interpretation but if you look at this one compared to the other one. Let's see where it is. You see it's a bit cleaner. It's not sure if that's better but at least you can visually appreciate the difference in the result. So if it's better that's of course matter of further interpretation and looking at the data. But now we have a bit more more at hand to actually do these interpretations. So we can now also have a look at the, at the probabilities. So I think these are related with. So one is related with zero in this case. Yeah, okay. So I'm not trying to visualize the probabilities for the Morrison heat length that we just produced using the landmapper class. So, here we see the probabilities for this area. 
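The notebook plugs a custom log-loss scorer into the LandMapper; with plain scikit-learn a comparable probability-oriented setup might look like the sketch below, using the built-in "neg_log_loss" scoring and asking the tuned model for per-class probabilities instead of hard classes (X, y and the grid values are illustrative assumptions):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold

param_grid = {"max_depth": [5, None], "n_estimators": [60, 100]}   # illustrative values
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="neg_log_loss",                 # built-in probability-based scorer
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)
search.fit(X, y)

proba = search.predict_proba(X)             # one probability column per class
print(search.best_params_, proba.shape)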
So again this is where we are now. Let's see. So this is the bargaining area this is bargaining town and here is the conference center. So now I can have a look where it's likely to have more in heat length. Yeah, let's look what's this. So this actually airports. So at least now we can see what what what areas have impact on our, on our predictions. And I should actually what I like to do is, because it's still the easiest way to to act to have compared these things to go to our viewer and see, you can also see the Lancet data over time. So that actually gives us a lot of information on what it is we actually see in our training data directly and then you can. Yeah, it's not. You can probably also load it in GIS but this one is like really, I think really user friendly to go to different years. We have to. Okay. This one. Yeah, yeah. That's it. So here we can have a look again. And now we have layer coniferous forest. So let's have a look. If this is corresponding to what we found. So here we are. It's a tricky class, because it's very much like I think many other classes. But hopefully our model also picks up these other classes and actually produce higher probabilities for these so that even though we see high probability here, maybe other classes still have higher probabilities there. So let's have a look. We now created our own prediction here. Let's see. Let's stay close to where we know where we are. So this patch for instance. Yeah, that's different. So now I'm observing that our predictor kind of finds a patch of heat land here or at least a higher probability than that we have found in the prediction codes that we used for the Yeah, that we used for this. So that's interesting I think you know this is. Yeah, just some interpretation of the results that you can do and I think all these tools they really help you with. Yeah, visually interpreting also your results and see what what you find in your training data. So that's very interesting. Let's see where we're, and you can do this for all these classes so this class five for instance that's going to first forest. So this makes a lot of sense if I look at this because if you are here, Wageningen then this is the veil with that's the biggest force in the Netherlands. So for me, I think okay this is at least there is quite some some truth in this prediction because I know this is forced area. Of course, there are many much more research to be done to confirm this. And then I think the under all you want to show the people the ensemble machine learning. Yeah. Yeah, okay. Thank you. So I'm not sure any questions about this part. Please. No. Okay. Yeah, so. And now we have like the, the probability results. And I would like just to. So, please show how to execute it. And I would like to explain this report here. And the metrics that we generated so. Here you can see that this is, you can see this report because Chris instantiated the landmapper with the verbals through so even for example for for us that we are developing this this API and we are using like on a daily base. It's really important to follow the process so follow how what it part of the, the, how which part of the workflow it's executing. So these verbals it will like explain everything that happens in the landmapper. So, the main, the main points here it's really, you can use all the class that are available in the signkit learn. 
So, random forest for learners but you can also use SVM you can use a Keras classifier that it's a implementation compatible with signkit learn. So I will use it, but you also can use it individually. And you can use different types of validation strategies. So here we are just using mainly like cross validation so it's, it's, there are a lot of discussion if the cross validation it's, it's the right way to evaluate it. But what we decided when we were implementing the library is inside of the library will mainly use cross validation. So if you want to validate your result you can do it outside so it's just a matter of have independent points and do the predictions using like the predict points method. So, Chris presented the, the interface here so you can see that you have these predict points. So if you have like independent points you can manage it outside. So, and, for example, you here we are using these Lucas points, combined with Korean Korean land cover, and we are creating our data set. And for the production work actually we use it almost 6 million of points so it's much more points that we are presenting here in this training because it here it's just a matter of how we demonstrate these functionalities. But, for example, we have the model train it now our production model like to produce the maps that are available now. And we can conduct, for example, I independent validation strategy and we can spread points all over the Europe and expected it by visual interpretation or even go to the field if we have money and the context of the project, collect the actual class of it and just use these points to do another prediction so it's a completely independent approach that we, it's not really related with the training and how we are implementing these inside of the model inside of the land map. So, you have this, this possibility. But what it's important here is, we are using the cross validation, because it's a kind of, it's a kind of metric that allows you to evaluate your training process, how you are capturing the variability in the class how your features are really like correlated with the, the expected variable and you can use this, like a reliable way to see if the model it's doing a good job, but to provide like a real validation of your result. And you have like independent data set and it's better provide like a proper sampling design so spread points considering like all the assumptions of the statistics and like a random sign or stratified design to do it but of course, sometimes you are limited, you don't have like budget enough to collect points to produce new data. So you need to work just for example with cross validation but in the literature now you have this kind of discussion and it's, and I personally I don't believe that a cross validation it's able to provide like a real and a proper validation for your prediction so it's a nice way to understand if your model it's doing a good job. It's a nice way if you, to understand if you are optimizing it, but to provide the real validation it's like independent data set so it's, it's, it's what we presented in the beginning of the presentation so you actually you need to have no new data so and, and you can try to split it, but for for the context of the land cover it's it's, it's how it's the state of the art, sampling design to provide a better validation and a proper validation for your product. 
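A sketch of the independent validation just described: load the saved model and score it on a point set that was never used for training. File names and column names are hypothetical, and this uses the plain scikit-learn model from the earlier sketch rather than the LandMapper's own predict-points method:

import geopandas as gpd
import joblib
from sklearn.metrics import accuracy_score, classification_report

model = joblib.load("rf_landcover.joblib")            # the model saved earlier
indep = gpd.read_file("independent_points.gpkg")      # hypothetical, independently collected labels

X_indep = indep[feature_cols]                         # same covariate columns used for training
y_indep = indep["lc_class"]

pred = model.predict(X_indep)
print(accuracy_score(y_indep, pred))
print(classification_report(y_indep, pred))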
So, you, you have it here, not all these aspects but you have the possibility to use what we train it now to predict new points, not just the, the, the, the, raster raster files. But if you back here. So you can see that considering this cross validation strategy that we are using here it's a five fold cross validation, the landmapper, it's actually dual optimization approach to find what are the best hyper parameters. And it's find the best, it's something like vague. So basically what it's doing, it's calculating some currents metric. And here you can see that Chris explained it. So you have here it's a F1 score weighted and it needs to be weighted because you have, it's a multi-class classification. But for example, if you click here, you can see that all those metrics are really available in the signkit learn. So we could optimize also by the currency or by the precision and by the recall that are actually in the context of remote sensing are like user and producer currency. So it's possible do it. But so it's really like it's a modular approach. So you have like validation strategy defined, you have which parameters that you want to test. So all the parameters that I put here, it will be a combination of all these parameters. And you have different strategies also. So I always prefer to find the parameters because it's sometimes you really need to think about what not all hyper parameters needs to be optimized. And so, and it depends of each algorithm. So define these values actually force you to understand a bit more about these learners, about these models and select actually what are like the best, not the best, but the proper parameters that you need to optimize. So in here, we are optimizing the number of trees. So it's something that it's a common standard in the most part of machine learn applied to land cover classification and other domains. But if you provide like, I don't know, 60, 80 trees, so you should have a good result. So considering like dominant class, hard class. So it's a kind of standard value. But of course, if you put more trees, you will provide, you will have a higher computation time because each tree needs to be computed, need to be trained, need to be predicted. So, and this number, it's actually, it's really high correlated with the performance of the model. So for example, if you have 40 trees and you feel you have 40 course, that's not our case here, definitely, we just have four. But if you have like 40 course, each of these trees will run in parallel. So the process will be way faster. So it's this hyperparameter optimization, you want to optimize to find the best model, but there is a plateau. So you need to be aware that sometimes you are just adding computational time and it will increase, I don't know, 0.5% in your records. But for some applications, maybe this 0.5, it's really important. So if we are doing like with, for example, medical applications, it's life. So you need to optimize at most, but it depends off the way and the domain and the application that you are trying to do. So you have this, for example, flexibility here and even for land cover and for geo-spatial classification, maybe you need to really improve the accuracy at most and process more with a high computational cost. So it really depends. And here you have like these other three parameters that are actually the, what is the maximum depth of the each tree? 
So if you think that these three will like be trained like you will split the data set on each node of these three and this maximum depth will be actually, the default parameter here, it's non. So it will grows till like separate all the samples in the training set. But sometimes you, if you reduce this depth, you will have like a shorter trees and maybe it could help for the probabilities because you will have like more fuzzy output. And if you combine all these trees, maybe it will help. But it depends off your model and the way that you are training. Not the way that you are training, but your training data set. And the maximum feature, it's actually how much features it will be analyzed for each split. So for example, if you put here 0.5, we have 86 features. So it will use half of it to define the best variable to separate the training samples. So, and for example, sometimes I use it like one here. It's totally random. So the random forest will be trained like for each tree and each node, each part of these three, it will select randomly one variable and it will try like split. And maybe this variable, it's meaningless. So maybe this variable, it's not really performing well to separate your point samples, but it will create this node past to another level and select other variables. So maybe you could have like a really deeper trees, but it could help. So it's difficult to predict it. So you need to really think in some parameters, think about what are your computational capacity and think in how you will do the predictions. Because I don't know, maybe we can have a random forest with 1,000 trees, but how we will run it. We have a server to do it or maybe it could produce a very long computation time. So you need to balance all this to define these grids or CV. So it's why I really like to not use them, there is one strategy that you just put some random value between like one range. But I think it's nice to, if you think about it and really try to put some values that are specific for your domain, for your problem and for your application. So mainly here, what we are seeing is all the so you can see that there are 18 candidates. So it's mainly three multiplied by three. And so yes, three, three whites should be less. So 18, yeah, three. Yeah, three by three, nine. Yeah, multiplied by two, 18. So actually, if you think in all these combinations, you have 18 combinations of hyperparameters. So it grows fast. So I don't know if I put like more two parameters here in each of these options, it will be more than 100 combinations and you can quickly blows the process. Not here in our case because it's a small dataset, but mainly for all these 18 candidates, we are using a five-fold cross validation strategy. And so actually we are training 90 models. So, and here you can see the result of the F1 score, weighted F1 score for each of these combinations. And here we are optimizing it by the higher value. So, and the best parameter is this one. So maybe we could optimize a bit more, but for this dataset and for the random forest, we can put more trees if that was the, maybe that could help, but maybe not too much. So, yeah, I think this, yeah, I think here we stop for a break. And I will back for the second part and we can analyze better the probabilities and predict other years. So, and there is, I don't know if there is questions, there are questions. Nothing to chat. So, okay. So let's have a break and we back in half an hour, right?
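A small sketch of how quickly a grid grows, matching the arithmetic above: three values for the depth, three for the feature fraction and two for the number of trees give 18 candidate settings, and with a five-fold cross-validation that means 90 model fits before the final refit. The specific values below are illustrative, not the tutorial's exact grid:

from sklearn.model_selection import ParameterGrid

param_grid = {
    "max_depth": [5, 10, None],
    "max_features": [0.5, "sqrt", 1.0],
    "n_estimators": [60, 80],
}
n_candidates = len(ParameterGrid(param_grid))
print(n_candidates)        # 3 * 3 * 2 = 18 candidate settings
print(n_candidates * 5)    # 90 model fits with 5-fold cross-validation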
|
Software requirements: opengeohub/py-geo docker image (gdal, rasterio, eumap, scikit-learn). This tutorial covers the theoretical background for machine learning and Python implementations, as well as integrating raster data with scikit-learn models. Why use pyeumap.LandMapper? The tutorial shows how to prepare the training samples with spatial overlay, how to evaluate the ML model performance with spatial cross-validation, how to tune the ML model with hyperparameter optimization, how to get the final ML model, and finally how to generate spatial predictions using the fitted model.
|
10.5446/55235 (DOI)
|
Hi everyone, we are back. So I will do a short recap of what we are doing here before to proceed with the new ways to implement it. And now we have more time to go deeper in the code and see and interpret the result. And we can discuss some other aspects. So mainly we did everything here, we did using the random forest. So we implemented a hyperparameter optimization and later we used the hyperparameter optimization to find the best model and classify the probabilities. So to produce the actual probabilities. So before the break I was explaining that here it's mainly the different combinations of hyperparameters that you can set here. The best one and it's actually used to calculate the evaluation metrics and the random forest. So one important aspect here, so you can see that mainly there are three processes here. One process is derive what, find what are the best hyperparameters and how using this evaluation metric. And here again this is the F1 score in this case. And so using this best parameter it calculates the evaluation metrics. So now considering this it will run again the five fold cross validation. It will split the points in five different groups and it will use each of these groups as validation and train with the other four groups. So at the end you will have like five models and these models will be used to predict different parts of the data. And then you can have like a result for your classification, currency and you can have like all these metrics. I will explain a bit about it. So and mainly you can see that this is to calculate the evaluation metric. So it's mainly execution of the cross-val predict method from the Signkit Learn. So you send the data and you receive like I expected value for all your samples and these values were predicted using like the out of fold. So it's a proper way to generate predict to generate like predictions and you can compare it. So you can generate metrics about it. So and I will show to you specifically the method that we are using here. So mainly we are using this method here so it runs inside of LandMapper. You don't really need to worry about it but it's nice to know because in the user guide you have like a good explanation of how it runs. So it's mainly like a cross-validation strategy that will predict all your samples with a valid value for derive your currency metrics. And so and after that after the evaluation you can see that it's it trains a random forest using all the samples. And I it is important it's mainly because for the evaluation metric you have like the LandMapper will train five random forests because we need to have five models to predict different parts of data and to provide like a proper cross-validation. But at the end we actually we want to have like the most accurate and precise model so we are using all the samples to train like the random forest. So all these approaches implemented here so first there is a valuation the calculation of the evaluation metrics and later it will train like the final model. And this model it's actually what you use safe and and it's it's what you use to predict because the assumption here is while as you are using all the points to train one model it will be like more precise or it will have a better than the models that you use to calculate the evaluation strategy. So and and this this everything it's all this workflow it's implemented here inside. 
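A sketch of the two-stage logic just described, using plain scikit-learn: the cross-validation produces one fitted model per fold, used only to score the held-out samples, while the model you actually keep is refit on all samples. X and y are the feature matrix and labels assumed from the earlier sketches:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_validate

rf = RandomForestClassifier(n_estimators=60, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

res = cross_validate(rf, X, y, cv=cv, scoring="f1_weighted", return_estimator=True)
print(res["test_score"])      # one evaluation score per fold
print(len(res["estimator"]))  # the five per-fold models, used only for the metrics

final_model = rf.fit(X, y)    # the model kept for predictions is refit on all samples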
So what I will show now is the confusion matrix, because all of these metrics are actually derived from it. If you have the confusion matrix, so the expected and the predicted results, you can derive all of these metrics. The overall accuracy here is for the hard classification, the dominant class, and it mainly tells you how well the model is predicting across all the classes. Here, for example, this is mixed forest: this is the expected result and this is the predicted result, and out of the points available, 37 were predicted correctly. If you sum the values on the diagonal and divide by the total number of points, you get the overall accuracy. But that is just an overall metric, more to get a general view of the performance of the classification; it is better to really understand each metric. If you look here, for example, you can see that we have around 63% overall accuracy, but some classes are performing really poorly, as you can see here. So the per-class values provide much more information about which classes are doing better or worse. Let me go back to the metrics here; there is a link to the scoring metrics, and for each metric you can see how it is calculated. Precision, for example, is the rate of true positives divided by true positives plus false positives (recall divides by true positives plus false negatives). These are really standard metrics in machine learning, and I think the best way to understand them is to look at the confusion matrix and calculate them over it. It is a nice way to understand what they mean, because they are related to the errors of commission and omission, so how much the classification sees more than is really there. For example, if a pixel is not pasture and you classify it as pasture, that is a commission error; if a pixel is pasture in the reference data and you classify it as some other class, that is an omission error. You can work with these per-class metrics to understand the performance of each class better. But here we are only talking about the dominant land cover class, because it is an exclusive type of classification: in a hard classification a pixel can only be pasture or forest, it cannot be both at the same time. When you work with probabilities, these metrics kind of stop making sense, because they were conceived with that assumption in mind; with a probability result a pixel can have both classes at the same time. To illustrate it, I will open the probability results that you just produced, and I will open them directly in QGIS because it is easier there. Here you have the hard class layer; let me sort the layers, put some nice colours on them and set the colour range. So now you can see how the probability changes across the area.
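Coming back to the confusion matrix for a moment: the metrics mentioned above can all be derived from the matrix itself. A minimal numpy sketch, reusing the y and y_pred variables assumed from the earlier evaluation sketch:

import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y, y_pred)              # rows = reference classes, columns = predictions

overall_accuracy = np.trace(cm) / cm.sum()    # correctly classified / all samples
precision = np.diag(cm) / cm.sum(axis=0)      # per class: TP / (TP + FP), the "user's accuracy"
recall = np.diag(cm) / cm.sum(axis=1)         # per class: TP / (TP + FN), the "producer's accuracy"

print(overall_accuracy)
print(precision, recall)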
But what I would like to show is this: let's pick a region where we can have multiple values, and let's check the classes. There is something wrong with this list here, because it should be ordered: actually this is the order, so zero is the first one, then the second, and so on; that is the expected order. So here we have all the unique classes, but inside the LandMapper we also have the target classes and the labels. I will just get it from the documentation: here, at the beginning, we have the target transformation, so we have the same object with the original classes and the remapped ones, and you can see these are the classes from the original dataset. So what I will do here is take our points for the tile, get all the values of this column and sort them; these are the labels, these are the classes, and let me just match the order. For the sake of time I will just use this; we are only trying to match it with the right order, because you can see that the current one is not right. The right order is actually this one: this is the first class, this is the second one. The LandMapper actually follows the order of the original classes, so it is just a matter of remapping them, but here there is a small bug in how they are presented; it only affects the presentation, not the prediction. Now these are the right classes: the first one is the industrial area, then the second, third, fourth, fifth, sixth and seventh. These are the classes that we have. Again, the LandMapper actually starts counting from zero, because XGBoost, which we will present later, needs the classes to start from zero. I personally prefer to start mapping the classes from one, but when I was preparing the training session I ran into this problem and modified the LandMapper, and that is why you see this mismatch: here it says band one, because it is the first band of the raster, but the predicted value is actually this one here. So, to provide a better way to analyse the result, I will rename the layers. This first one, you can see, is industrial and commercial units, so it is related to the urban area. Is it visible? Yeah, it's working, you can see it here. This is the second class, this is the third. Maybe a nice functionality would be to write the files using the class description column that we have inside the data frame; that could be a nice improvement here. And broadleaf forest, yes. So now you can see: this is the coniferous forest, the pasture areas, the industrial and commercial units related to the urban areas, and the broadleaf forest. And if you take this pixel here, this is non-irrigated arable land, and you can see that for this pixel there is a bit of confusion: if you check it here, you have fifty-three percent
probability of being non-irrigated arable land, but thirty-one percent of being pasture. So in this case you can really have multiple classes in the same pixel, and depending on your application you can play with these probabilities. Maybe, if I am interested in pasture, that probability is already enough for me and I can classify this pixel as pasture. The nice thing about the probabilities is that you keep the flexibility to work with them later: the decision about which value you use to separate and define the dominant class can be made afterwards, and it can be made specifically for one class. Maybe you are only interested in forest, so you can just use the probabilities available for forest, and so on. Because of this, thinking in terms of accuracy does not make much sense here, since you are working in the probability space, and that is why we used the log loss metric. You can see here how it is calculated, but it is mainly the difference, in the probability space, between the expected and the predicted result. If you think about it, our training samples are hard classes: a pixel in the training set is just pasture, or just non-irrigated arable land, or just forest. Inside the LandMapper we use a class called LabelBinarizer, which converts this hard data into probabilities: 100% for the class that the training sample has, and zero for the others. The log loss then measures the difference against this expected result: if the model predicts 100% for that pixel, it is a perfect prediction, but that is not the case for most pixels, because we have confusion and the model needs to generalize; and if you are predicting everything perfectly, your model is probably overfitted. So there will be a difference, and a lower log loss means a better prediction, with less confusion between the classes. For example, if you have around 20% in all the classes you will have a higher log loss than with a confident, correct prediction. We use it because it is a nice way to evaluate predictions in the probability space; that is the main reason, and that is why we implemented it here and passed it to the LandMapper to do the hyperparameter optimization. You can see that the values are different now, because we changed the metric, and here the lower value is the best one. And you end up with different parameters: if you go back to the earlier random forest, the maximum depth there was None, so the trees grow as deep as needed to separate all the samples, while here we use trees with just five levels of depth. That is nice, because we did the same thing, the same approach, and we just changed the metric to one that is proper for a probability classification, and mainly we end up with a different set of parameters.
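A toy sketch of the log loss idea described above: the hard reference labels are expanded to "100% for the true class, 0% for the rest" (as the LabelBinarizer does) and compared against the predicted probabilities; confident, correct probabilities give a low loss, while spread-out probabilities give a higher one. The class names and numbers below are made up for illustration:

import numpy as np
from sklearn.metrics import log_loss
from sklearn.preprocessing import LabelBinarizer

y_true = ["arable", "forest", "pasture", "pasture"]      # toy reference labels

expected = LabelBinarizer().fit_transform(y_true)        # 100% / 0% per class, in sorted class order
print(expected)

confident = np.array([[0.90, 0.05, 0.05],                # columns: arable, forest, pasture
                      [0.05, 0.90, 0.05],
                      [0.05, 0.05, 0.90],
                      [0.10, 0.20, 0.70]])
uniform = np.full((4, 3), 1 / 3)                         # probabilities spread over all classes

labels = ["arable", "forest", "pasture"]
print(log_loss(y_true, confident, labels=labels))        # low: little confusion
print(log_loss(y_true, uniform, labels=labels))          # higher: the model is unsure everywhere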
Now, about the probability space: if you look at each of these maps here in QGIS, you can see the classes, and the values will sum up to 100 percent. Sometimes it does not look exactly like that, because we are writing the data in byte format, between zero and 100, so in some cases you can see a small mismatch from the floating-point rounding, but in general the probabilities will sum to 100 percent, because that is how the random forest produces this type of output. For your kind of application you could also take just one map. I will take, for example, just the forest one and close the others. I have full flexibility to define where my forest would be, because it is a probability layer, so I can come here and just hide all the values below 50 percent. So now I am only seeing pixels where the probability is 50 percent or more; maybe I was too strict, but you can play with these probability values, and you have full flexibility to define your thresholds and work with the data in the way that you prefer for your domain or application. And there is a nice metric, specific for the probabilities, with which you can estimate the best probability threshold considering all the possible thresholds: if I set 30 percent as the threshold I will see more forest than with 40 percent, so what this metric does is test different thresholds and use them to estimate the best cutting point. Here you can see that the cutting point is this optimal probability, and if you use it you will achieve these two values of precision and recall. You can see that it is a balance: your map will have no bias, so you will not see much more pasture than exists, and you will not overestimate or underestimate these classes. It is one way to use it that we implemented here, but of course it depends on your application. The main message is that you have full flexibility, and that is why it is important to provide these probabilities. So, what we will do now: until now we only worked with the random forest, and now we will really use more than one learner and implement this workflow. The LandMapper supports a list of learners and a list of hyperparameter optimizations, so we will train three learners: one is the random forest, as we have been doing, another is the gradient boosted trees, and the other is a regular artificial neural network. We could use other learners here, but these three were the best that we found in our internal benchmarks and tests. We could use SVM, for example, but that learner is computationally heavy, so for doing predictions it is not really an operational approach for us. All of these learners will predict probabilities, so we will generate probabilities using the gradient boosted trees and the artificial neural network as well, and we will combine all these probabilities to generate what we are calling meta-features.
We will use the probability to train a second level a second layer of machine learning and as it is a. Second level so you you can think that the original features their landsat data that we are using they are the input data here and we are generating the probabilities combine them. And training one more learner so it's a kind of meta features so it's the probabilities are actually here. A derived product of the original features and we use just a logistic regression approach here and we will provide the same output the probabilities. The dominant class and the uncertainty and if we think in the uncertainty it's a pretty nice way to generate it because now we have random forest. A gradient boost in tree and artificial narrowing that are so each pixel we have multiple predictions at least three and. The uncertain here it's mainly the standard deviation of the probability so if you think that maybe one pixel were was classified like with 80% of probability in all the three learners so we will have like a lower standard deviation so it means that. The three learners are agreeing did agree with did agree with the did agree with this prediction with the output of the class but if you have like random forest predict 80% and the other learners predicting like. 30% so we will have a higher standard deviation so it means that that pixel it's more tricky because the other learners are not able to deal with that the problem is you really need to use like comparable and strong learners here so for example if I put like a poor model maybe it will. Over shoot or it will mess up with my uncertainty because I can use like. I don't know just as a linear regression or. Some. Like weak learner that doesn't provide a good occurrence. My asserted will be a compromise so it's really important to find what are the best learners available and in our results we. Found it so gradient boost it's what the literature says artificial neural network and the random forest. So and. Implemented in Python and there are different strategies to combine it so you can put like multiple learners you can create several you can combine all these probabilities in different ways. But mainly here in the map we didn't use it I'm just put here because maybe it's a nice way to understand better this concept and how it's implemented in Python. But I didn't use we didn't use here because it's really this this framework it it was optimized to. Like to not preserve all the intermediate results and we would like to have access to the probability so I will show to you. After the predictions you can have the probabilities of the random forest the probability of the gradient boost and trees and the probability of the artificial neural network so we can derive each probability separately. And inside of the land map it's important has these probabilities because we will calculate the uncertainty over these probabilities so here we just use it like a regular cross validation prediction in the in the map. So. So. It's it's a robust approach if you think and you can. Use more. You can provide for example feature selection we will not cover here but in the map you have a tutorial to explain it. So it's a nice way to you can use you have like really flexibility to implement different pipelines and different workflows for your application. So here we will create the random forest using the same parameters. Here we will create the. XG boost so and you can see for the XG boost time I'm mainly using these two parameters so. 
Each of these learners has its own parameters, and you really need to understand them to optimize the model properly; it is not trivial, you need to read about the method and the algorithm. For XGBoost, for example, there is a lot of documentation about gradient boosted trees, and the library is fully compatible with scikit-learn.

For the artificial neural network we are using the KerasClassifier, which is a TensorFlow class. We use it to build a network with some fully connected layers; in the tutorial we call them dense layers, because it is simply the Dense layer from TensorFlow. If you look in eumap, the helper method stacks a dense layer, a dropout layer and a batch normalization layer, and it is just several repetitions of that combination.

You can also see that I am using a scikit-learn Pipeline, which lets you chain different operations: I first normalize all the data, which helps the Keras training, and then I call the KerasClassifier, passing it the function that builds the network together with its parameters. Here it is only five epochs, where one epoch passes once over the whole training set; if you put 30 or 50 it will take more time, of course. The other parameter is the number of layers, where each "layer" is actually one dense plus dropout plus batch-normalization block, so here we have four of those blocks. Again, this is just a helper to create a regular neural network with the KerasClassifier.

Then we have the same GridSearchCV object, but with one difference: the parameter names are prefixed with the estimator step name and a double underscore, which tells scikit-learn which step of the pipeline those parameters belong to. I will test networks with a dropout rate of 0 and of 0.15, and with two and with four layer blocks, so the grid search calls the build method with different structures for the network. It is a nice way to test different architectures inside TensorFlow. But again, this is a regular fully connected network, not deep learning, not a convolutional neural network.

So far we have created these three objects: the random forest, the gradient boosted trees and the artificial neural network. Now we create the last learner, which is actually a meta-learner; this model receives the probability values as input. In the documentation there are several options for the solver, which controls how the linear model is fit inside the implementation; we use the saga solver because it was the fastest one we tested, although that may depend on the probability values and on the data you are working with. And again, these parameters can take different values; here we just test a few specific ones.
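To illustrate the double-underscore convention outside the tutorial notebook, here is a hedged, self-contained sketch; scikit-learn's MLPClassifier stands in for the KerasClassifier (the naming rule is the same: pipeline step name, then `__`, then the parameter), and the grid values are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

# 'estimator' is the step name; 'estimator__<param>' tells GridSearchCV which step to tune
pipe = Pipeline([("scaler", StandardScaler()),
                 ("estimator", MLPClassifier(max_iter=300, random_state=0))])

grid = GridSearchCV(pipe,
                    param_grid={"estimator__hidden_layer_sizes": [(32,), (32, 32)],
                                "estimator__alpha": [1e-4, 1e-2]},
                    scoring="neg_log_loss", cv=3)
grid.fit(X, y)
print(grid.best_params_)
```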
These values control how the linear model is trained and fit inside scikit-learn. One thing I forgot to mention: if you open this hyperparameter optimization, you can see we are defining fit_intercept and C, and these are exactly the same parameters you find in the learner's documentation. You can optimize any parameter listed there; for example, if you want to optimize class_weight you can just copy it in and define which values to test. That of course increases the number of possibilities, so you need to be aware and careful, because it can increase the computational time a lot.

Now we keep the same structure: the same points, the same cross-validation strategy, and the same feature column prefix, meaning all the columns we use to train the model. What is different is the estimator list and the hyperparameter selection list: now you are sending LandMapper a list of learners and a list of hyperparameter selections, and of course the order of the selections has to match the order of the learners. So instead of just one estimator, one learner, it will use several, and this whole workflow runs inside LandMapper.

Now we do the training. The first part optimizes the hyperparameters of the random forest, showing all the combinations as we already saw. Then it optimizes the XGBoost classifier, so it tests a different learner with its own hyperparameters and again finds the best ones. The evaluation metric here is the log loss; since we are aiming to predict probabilities, it is the best metric to drive the hyperparameter optimization. Now it is running the TensorFlow part, the KerasClassifier, which is actually a TensorFlow class; I am almost out of RAM here. The whole workflow takes about two or three minutes.

You can see it tested these four combinations. This is just a demonstration; for production work you need more epochs, which is really important for artificial neural networks. The log loss for the random forest and for XGBoost, the gradient boosted trees, are close, but for TensorFlow it is not so good; remember that a higher log loss means a worse classification.

Now, if we go back to the legend, we actually have eight classes (not six, sorry), and each class gets a probability value from each learner. So we have eight probabilities per learner, and with three learners we put together the probabilities of the random forest, the gradient boosted trees and the KerasClassifier, the artificial neural network, and create 24 meta-features.
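The next step fits a logistic regression on those meta-features. As a rough sketch of tuning that kind of meta-learner with the saga solver and the C / fit_intercept / class_weight grid mentioned above (synthetic 24-column input standing in for the stacked probabilities; not the eumap code itself):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# 24 synthetic columns standing in for 3 learners x 8 class probabilities
X_meta, y = make_classification(n_samples=400, n_features=24, n_informative=12,
                                n_classes=4, random_state=0)

meta = LogisticRegression(solver="saga", max_iter=5000)
search = GridSearchCV(meta,
                      param_grid={"C": [0.1, 1.0, 10.0],
                                  "fit_intercept": [True, False],
                                  "class_weight": [None, "balanced"]},
                      scoring="neg_log_loss", cv=3)
search.fit(X_meta, y)
print(search.best_params_, -search.best_score_)
```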
They are derived features: they are actually the probabilities, but serving as input for a new learner. We call this approach ensemble machine learning; in the literature some works refer to it as a super learner. So we train a new model that uses the probabilities as input to improve the accuracy of the result, and you can see it is doing a good job: it is just a logistic regression, basically a linear model, and its log loss is lower than all the individual learners, which is what we expect, because it is a new learner trained on those probabilities.

Now LandMapper has calculated the evaluation metrics, and then it trains all three models again, but now using all the data. The full history of the process is here: it trains the random forest, optimizing the hyperparameters, and this is the combination in our list that achieves the lowest log loss; LandMapper does the same for the gradient boosted trees, again with the lowest log loss shown, and the same for the KerasClassifier. So the first part is really the hyperparameter optimization of all three learners; then it puts the probabilities together as meta-features, 24 values, three learners multiplied by the number of classes, and fits a logistic regression on them. The last part just retrains the random forest, the gradient boosted trees and the artificial neural network on all samples, because all the evaluations and the hyperparameter optimization are already done, and trains the meta-estimator with all the samples. The complete workflow worked, and as output you can see the better log loss.

That was just the training, so now we are ready to do the predictions. The approach is the same one Chris explained: you need to retrieve all the raster layers used in the training, so the DTM layer and all the Landsat bands. This piece of code just accesses the data in the folder we are providing and matches the names, removing the year, because the training data does not carry the year since the points come from multiple years. Then you run the prediction, the same approach, just with a different output name here. First it executes the random forest, then the gradient boosted trees, and last what we call the pipeline, which is actually two operations: the normalization of the data and the KerasClassifier. Let's see how long the processing takes; it should not take too long, although maybe I need to free some memory. Okay, now we have it: the same bands were produced, the hard class, and now we also have the uncertainty layers. The uncertainty layer is just the standard deviation of the probabilities predicted by each learner. Now let's open the data.
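For readers who want the bare mechanics of that super-learner step, here is a hedged sketch with two scikit-learn base learners standing in for the RF / XGBoost / Keras trio; eumap's internals differ, but the idea is the same: out-of-fold probabilities stacked as meta-features for a logistic regression, then base learners refit on all samples.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

base_learners = [RandomForestClassifier(n_estimators=100, random_state=0),
                 GradientBoostingClassifier(random_state=0)]

# Out-of-fold class probabilities from each base learner become the meta-features
meta_X = np.hstack([cross_val_predict(m, X, y, cv=5, method="predict_proba")
                    for m in base_learners])

# The meta-learner (a plain logistic regression) is fit on those probabilities
meta_model = LogisticRegression(max_iter=2000).fit(meta_X, y)

# For final use, the base learners are refit on all samples as well
for m in base_learners:
    m.fit(X, y)
```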
I think it is a good exercise to compare the outputs. Let me organize the layers and use human-readable class names. Now we can compare: this is the pasture area, and I will copy the styles; here it is more restricted, and this data is from another year. We can also compare the hard class; for the hard class I will just use some colours to help us. And I will open the uncertainty layer as well: you can check the values, and this is essentially the standard deviation of the probability values. That is one of the main advantages of this approach: you can compare, and you can generate this uncertainty using different models. Adding more models would be better, but again it depends on your goal and your application domain, because you need a model that can actually be used for real predictions. And you can save the model, so, as we presented earlier, you can really reuse it to do predictions in the future without retraining.

So we executed the four examples we planned. We have time now at the end of the session, so we can revisit some of the aspects we presented, test the other tile, or increase the number of points; but I think we can use this time to discuss what we have been talking about and the examples we presented. Any questions? Nothing in the chat.

"What do you think would be the next thing to add, specifically to LandMapper?" Yeah, I think we can definitely improve the output. Even here during the training session it is difficult to match the probability layers, because you get values that do not directly match the original class values, so we could add functionality to improve the way we generate this output. Maybe we could also support other topologies for the super learner, because now we just have two levels; I do not see much value in that at the moment, because adding a third level would not add much to the classification. The rest is already there: for example, if you want to do feature selection, you just implement a pipeline and send it to LandMapper. It is fully compatible with scikit-learn, so you can use different models and approaches; it is just a matter of using the pipeline, which is a nice way to integrate all these workflows. Of course it would be nice to have more users applying this code to other applications, because we have been developing it over the last months, fixing it and adding new functionality, and for other applications there may be features that are not important for our case right now. But the short answer: it is definitely the way we generate the output; we are producing too many layers and it could be better explained to the final user. Do you have any more questions?
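On saving the trained model for later predictions: eumap/LandMapper may provide its own persistence helper, so treat this only as the generic scikit-learn-style pattern with joblib, using a plain classifier as a stand-in for the trained LandMapper object.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

joblib.dump(model, "trained_model.joblib")       # persist once, right after training
reloaded = joblib.load("trained_model.joblib")   # later: predict without retraining
print(reloaded.predict(X[:3]))
```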
"You mean doing some post-processing after the classification?" Yeah, that is a good point; that could be a nice functionality. Of course you can do it outside, but we could implement some post-processing methods inside LandMapper; right now we do not have any. I have already used this one: in SciPy, in scipy.ndimage, there is a median filter that works on multi-dimensional data, not just images. It is nice because you can create a cube kernel, say 3 by 3 in space and a 5-step window in time, and replace the centre value with the median of the whole cube (a small sketch of this idea follows below). That could be one way to do post-processing; I did it for the pasture maps I showed this morning. You can also work only in space, for example running a smoothing filter over the probabilities; SciPy also has a Savitzky-Golay filter, and you can apply it to the probabilities because they are just NumPy arrays. So that could definitely be a nice functionality inside the package.

But there is a catch: once you filter, there is a possibility of affecting your validation. Yeah, and for me that points to the right way to do it, because now we just provide a cross-validation strategy, and if you change the pixel values you get a different map. For the cross-validation, I do not know whether it would be possible to apply the filter before calculating the metrics, probably it would be, but for me a proper validation really needs an independent dataset, as you mentioned. I agree with you, but you need to find that independent dataset, independent validation samples. The best way, if you look at the literature, is to create the map and then, using the predicted classes, build a sampling design, a stratified random sampling: you spread points inside each class and inspect them. Since we are talking about land cover, if Landsat data is available you can check each point and say this is forest, this is not forest; some classes are tricky, like mixed forest versus coniferous forest, so it also depends on your legend. After that, with this independent validation, you go back to the map and do a proper validation. The reference here is the capability of the specialist to check the imagery; that works for land cover, but for soil organic carbon it does not. You can do the predictions and provide the maps, but for a proper validation with an independent dataset you may need to go to the field, and it is difficult to find the budget and the time for that, although it is for sure more reliable.
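As promised, a minimal SciPy sketch of that cube-kernel median filter; the stack of yearly probability maps is synthetic, and the kernel size (5 years by 3 by 3 pixels) is just the example mentioned in the talk, not necessarily what was used for the production maps.

```python
import numpy as np
from scipy.ndimage import median_filter

# Fake stack of yearly forest-probability maps: (n_years, rows, cols)
rng = np.random.default_rng(0)
stack = rng.random((5, 60, 60)).astype("float32")

# Cube kernel: 5-year temporal window combined with a 3x3 spatial window;
# each value is replaced by the median of its spatio-temporal neighbourhood.
smoothed = median_filter(stack, size=(5, 3, 3))
print(smoothed.shape, smoothed.dtype)
```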
"I'm talking from personal experience: a model gives me 95 percent accuracy, but when I compute the accuracy against independent data it falls to sometimes 70 percent." Yeah. "And how are you getting the independent dataset?" "You can always go back to imagery with a higher resolution than the data and use that, if you are not working at a regional scale." Yeah, that would be nice for some of these tricky classes. You can actually build the sampling design that way: imagine, we are producing a land cover product for 20 years, since 2000, so if we spread, say, 500 or 1000 points all over Europe we can get a pretty good accuracy assessment, a real validation, and we can analyze those points over the Google Earth imagery. It would be easy to identify most of the classes, like vineyards, mixed forest and things like that. And of course, in general you need better data to do the validation than to do the mapping, and that is the case here.

Okay, let's check the chat. There is a question about the hyperparameter optimization: yes, it is possible to swap out the grid search. I do not know all the options in scikit-learn, I have only tested a few, but there are RandomizedSearchCV and HalvingGridSearchCV; I am not very familiar with them, but since we mainly call scikit-learn's cross_val_predict, and I did test RandomizedSearchCV and it works, you should be able to use any of these scikit-learn implementations here. You just import it; of course you need to define the parameters differently, because here it is a param_grid, while with the random search you define a range for each parameter, and then you can send it to LandMapper, so it should work with the current implementation.

There are different ways to implement hyperparameter optimization, and it can be tricky, sometimes you need strategies to make it efficient. But personally I prefer the grid search, simply because it forces me to think about what the best values could be; I like to try to understand the algorithm and reason about which hyperparameters fit my problem. A good example is the probabilities: if you think about how a random forest produces probabilities, it mainly derives them from the leaf nodes, the end nodes of each tree. So with shorter, less deep trees you can probably get better probabilities, because if you grow every tree to the end, each leaf holds only a few samples. It is worth trying to optimize the maximum depth and see what the best values are, and of course you can do the same exercise with the other hyperparameters.
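A short sketch of the randomized alternative discussed above, with parameter ranges instead of fixed grid values; the data is synthetic, and whether LandMapper accepts this object directly is something the speaker only says "should work", so verify against the current eumap release.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=400, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"max_depth": randint(3, 20),      # ranges, not fixed values
                         "n_estimators": randint(50, 300)},
    n_iter=10, scoring="neg_log_loss", cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```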
Or, for this maximum depth, instead of fixed values I could use some heuristics or a random approach; either way, inside LandMapper it should work fine.

The other question was about spatial cross-validation. In LandMapper you can find the group CV column argument; I think Chris talked about it. It is the column that will be used to do the spatial cross-validation, and we are using it in the project. For example, if you split this tile into a grid of cells and overlay it with the points, you can assign one tile ID, a region or blocking ID, to each sample; you just pass that column to LandMapper and it will work. The downside is that you need to build the blocking tiles outside: you need a vector of blocks, compute the IDs by hand and use them as the tile ID. That is probably something we could implement inside: given an area, LandMapper could automatically create the blocking vectors, overlay them with the sample points and use that for a proper spatial cross-validation. So it is supported now, but you manage the blocking, the overlay and the block size outside of LandMapper, as we do for example with our own tiles in the project.

Okay, let me see if we have more questions in the chat. The last thing I would really like to show is this: the package is really under development, and the goal is to make available all the tools, functions and approaches we implemented in Python in the context of open data science. We worked a lot on it in the last months and fixed some bugs, but for sure there are still problems, and we are working to optimize it and build a better library. So if you really plan to use it and you find a problem, please create a new issue and describe it: explain which code you executed and what problem you found, and we can fix it. The plan is really to maintain it, keep adding new functionality and provide better ways to manage and deal with the data. For example, we implemented a new gap-filling approach, and in the API reference there is a class that automatically accesses the GLAD Landsat product from the University of Maryland; it works if you have the username and the password, so it is a nice way to retrieve the raw product. We already did that ourselves, and in eumap we provide the composites per season, but maybe you are interested in working with the 16-day composites and prefer to process them yourself for a specific area; then you can use this functionality to download the data and read it directly as a NumPy array.
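For the spatial cross-validation idea, here is a hedged scikit-learn sketch of what that group column amounts to conceptually: a blocking-tile ID per sample fed to a group-aware splitter, so samples from the same tile never appear in both train and test folds. The tile IDs and data are made up, and LandMapper's own implementation may differ in detail.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_predict

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
tile_id = np.repeat(np.arange(10), 30)   # hypothetical blocking-tile ID per sample

cv = GroupKFold(n_splits=5)
# Out-of-fold probabilities with spatial blocking: whole tiles are held out together
oof_prob = cross_val_predict(RandomForestClassifier(random_state=0),
                             X, y, cv=cv, groups=tile_id, method="predict_proba")
print(oof_prob.shape)
```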
We will not show that in the next sessions, where the focus will be more on accessing our data, but the functionality is there. You can also see the percentile function, which is actually the same function we used to produce the mosaics per season. The plan is to keep adding new classes to access this kind of data, not specifically this product but also the harmonized Landsat and Sentinel product, and to keep growing the library and adding new functionality. So if you start using it, please help us improve it by creating new issues, also when you have an idea for a feature; that is the whole point of presenting it to you.

And that is all for today. I would like to thank everyone who followed us here in person and online. We will see each other tomorrow: there will be more sessions about how to access the data, and in the afternoon we will have the GRASS GIS workshop as well. Thank you very much.
|
Software requirements: opengeohub/py-geo docker image (gdal, rasterio, eumap, scikit-learn). This tutorial covers the theoretical background of Ensemble ML and its Python implementation, exploring the general concepts and main advantages of spatiotemporal machine learning, and explains why to use LandMapper. The tutorial also shows how to prepare the training samples via spacetime overlay, how to evaluate EML model performance via spacetime cross-validation, how to tune the EML model via hyperparameter optimization, and finally how to fit the final EML model.
|
10.5446/55236 (DOI)
|
So welcome everyone, this is day two of the training sessions. We are still doing R in the morning, and in the second block you will get a lot of Python; if you have never used Python I advise you to stay, it is good to get an idea and to see the differences between R and Python. Let me set this to run and share my screen. I also sent you some example code, so maybe we start with that.

If you remember, yesterday we had three very nice case studies. Every case study comes with a publication, so they are well documented, quality datasets, and you can really test your machine-learning skills with them. But it is highly complex data, it goes up to four dimensions; the Cook farm data is 3D plus time. Complex, but high quality and published.

One of the datasets we use in this project is the European forest tree species data; Carmelo explained yesterday all the work we did. It took a long time, I think a year from the points and the models until we finally had the maps. Here we look at one case study, modelling Fagus sylvatica, the beech, across Europe. We have the cleaned points; there are maybe some gaps in the Balkans, but otherwise it is a clean dataset. When we do a space-time overlay, so we also take the year, we fit the model and we can predict how this species is changing through time.

If we look at the variable importance, the model is quite okay. I do not remember the exact R-square, but Carmelo showed that the errors in probability space are around plus or minus 10 percent or even less, so the model is quite significant and we can predict the distribution of this species really nicely. What is also very nice is that all the Landsat data we prepared correlates: the Landsat red band, the lower 25 percent quantile, is the most important, then the Proba-V FAPAR, which is a kind of measure of biomass, and then the green band comes up. So it is mainly Earth observation data that helps us map this species, with some other vegetation-related covariates further down. The good news is that the Earth observation data can basically be used directly to map the species.

Then we make predictions, only three years here, although I could add extra years and go back, and we make a little plot like this. Now we can do some diagnostics. The first thing is to open it in a GIS, so we could open it in QGIS, of course; let me try to start QGIS, and once it opens we have to customize it a bit.
We can add a legend, add a satellite image in the background, and then look at the differences between years: turn one year on and off and see whether the changes correlate with particular locations in the landscape. Let me see, oh yes, I have a project prepared. Here is QGIS, actually loaded with the older points used for training, the 700,000 points, so it is a bit larger; I will turn those off. Here is the Google satellite layer, here are the points, and since we made the predictions I can now drop in the prediction files; in the output folder you can see the prediction and error layers. I put one image on top and zoom in; at first I just see black and white, so I customize the symbology and pick a colour ramp, maybe something similar to the one I used before, let's say magma. Now it goes from 0 to 100 percent density of Fagus sylvatica, and this is one date, 2015. Let me add 2019 and copy the colour ramp, so now I have two dates and we can look at how much things change between years. You can also install the Value Tool plugin, which is very nice: it shows the value of every layer for the pixel under the cursor. If I zoom in I can see the difference, something like 39 percent in 2015 and 88 percent in 2018, so there is an increase for this pixel, and it should also be visible here; you can see the increase. There is also a transparency plugin: I select a layer, play with the transparency, and if I zoom in I can see the 100 by 100 m pixels and what lies underneath them in the background imagery. It looks like these predictions match the distribution of forest reasonably well, although here there is a bit more forest and the predictions do not cover the whole area. So that is how you can visualize the predictions in QGIS, though it is a lot of pointing and clicking. There is also a package called RQGIS, so from R you could send the files over and it is a bit easier to open them.

An alternative way to visualize is to use plotKML. Here I made a little piece of code that loops year by year: I take the prediction object, say which variable to use as the colour, define the output file name, and loop over the years, making a separate raster for each year. I also clip the values from 0 to 100 so they are all on the same scale, and I set a begin date and an end date, the beginning and the end of each year.
When I run this loop it creates the KML files: each raster is reprojected to latitude-longitude and a PNG is written, and then I can open it in Google Earth. Now we can look at it three-dimensionally and rotate; you could even simulate the position of the sun, although it looks like it is still dark there because I did not set a different date, so you can play with that. If I move around I now have the three maps and can see how things change, so I can animate the change through time and see where the changes are the biggest. In 3D you can also see where in the landscape they occur; let me add a bit of transparency. You can see this is the foot of the hill, there is the ridge, and on this side of the hill there are changes while on the other side there are not. So you can see really distinct changes, and you can zoom in and try to see the trees; if there is a road you can even open Street View, all the things we showed yesterday. It really lets you almost jump into the field, and for me that is much more powerful than doing it in QGIS. There are also photos; let's look at this one, mainly coniferous forest, so I can see what the landscape looks like. This way you get space-time data into Google Earth and you can visualize it, and if you put the PNGs on a server, you just send the KML file to colleagues, and when they open it they can see everything and interact with it without any GIS knowledge. That is very powerful.

Of course, in this code I could have made predictions for 20 years, which would be more interesting, written some description text so people know what they are clicking on, added the training points on top, and built a whole little project. With KML you can put in so many things; you can even zip it into a KMZ and send it as an out-of-the-box project that tells a story, with slides and videos, so the sky really is the limit. I think it is a good way to engage people with geography and to help them understand how things are connected in the landscape. Here is a nice example; Carmelo, look at this: a fairly steep area where there is a difference between years. I do not know if it is real, because obviously trees cannot disappear and come back within two years; there is also some noise in the predictions, coming from the Landsat images.
So the predictions can go up and down a bit without any real change; that is noise, not an actual change, and then you have to look at the uncertainty for those pixels and decide whether it is something to worry about. But if, for multiple years in a row, the trees suddenly disappear, it is most likely a clear-cut, a forest fire or a change of land use; when there is a really distinct drop in the values, it is most likely not noise but a real change in forest cover. And as I said, with our data portal you can now interact: you can open all the Landsat images we prepared, go to the same area and scroll through them to see whether the Landsat images themselves have issues, for example with quality or with the gap filling. That would be something interesting to check; I do not have time now, I am just giving the example.

It is very interesting to look at the changes, and when you do space-time predictions properly and plug them into the app, people can animate them and see the changes. As I said, in Europe the changes are not so drastic. We discovered by accident a place in Italy, L'Aquila; Carmelo, do you remember, was it this one? I think there was an earthquake or some fires there. You can see how the forest disappears; if I navigate around, there is really this patch of forest that suddenly vanishes, so this is not noise. If you want to check, you can open a high-resolution satellite image, but that only shows the recent situation; we see there is less vegetation now, so something happened here. Did you ever find out what it was? It overlaps with the earthquake at the time, 2007, no? My guess is that it was probably a big fire or something like that.

Okay, let's check Ireland. Any specific place, or all of Ireland? Kilkenny; I do not know how to spell it, one L or two. Let's zoom in and take a look: this forest patch looks pretty stable... but here, you are right, it comes back. You know, we made so much data that we do not have time to check it all visually; that is why we built a viewer and want to enable other people to do it, which is one of the reasons we run this workshop, and on Thursday you will see that WorldWind is integrated now.
So it is also three-dimensional and you can look around like in Google Earth, but it does not use Google software; we use NASA WorldWind, and we also added Cesium, so you can visualize the data in 3D there as well. You can see this landscape and the really distinct changes, and if I click I can also trace back in which year they happen, 2012 or 2013, although this particular jump is noise between years. If you smooth the noise over three or four years, though, the difference you see is not noise any more; that is basically time-series analysis. So thanks for pointing to that. Please use the viewer, use the search if you are not sure what you are looking for, and we will keep adding layers, Corine and many other datasets, so you will be able to follow that.

So that was about Google Earth. It is very powerful to be able to switch from R to Google Earth, and it is possible because I made those little functions to plot directly from R: the plotKML package. I also work with gdal2tiles now. plotKML is for small, sample-size datasets like this tutorial data; but imagine an image of 20,000 by 20,000 pixels. You would not load that into R, you would need RAM to infinity, yet you still want to see it the way Google Earth displays things, with pyramids and tiles. For that you can use gdal2tiles: it takes a bit of time to process, say ten minutes, but then you have a KML ground overlay and you can visualize even a very large GeoTIFF. So those are the two things we recommend: for small data use plotKML, and for large raster ground overlays use gdal2tiles; it only works with raster overlays and there are a few tricks to it.

You can read more about plotKML in the paper published in the Journal of Statistical Software. The paper also ships as the package vignette; when you publish in JSS your paper becomes the vignette, and every time I release a new version of the package I have to rebuild it, so it is the ultimate test that my code still runs. The paper is updated together with the package, so people can download the journal copy or take the PDF from the package, and if something changes in the code they will see where the change is; a few things have changed, but it still compiles.
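Since the code examples in this document are in Python, here is a hedged sketch of driving gdal2tiles from Python rather than from the shell; the file names are hypothetical, and the flags shown (-p for the tile profile, -z for zoom levels, -k to also write KML for Google Earth) are the commonly documented ones, but check your GDAL version's help before relying on them.

```python
import subprocess

# Hypothetical paths; the GeoTIFF is assumed to already be in EPSG:4326 so that
# the 'geodetic' profile (which pairs well with Google Earth KML overlays) applies.
subprocess.run(
    [
        "gdal2tiles.py",
        "-p", "geodetic",        # tile profile suitable for KML super-overlays
        "-z", "5-12",            # zoom levels to render
        "-k",                    # also generate KML files for Google Earth
        "predictions_2019.tif",  # large raster that would not fit comfortably in RAM
        "tiles_2019/",           # output directory: tile pyramid plus KML
    ],
    check=True,
)
```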
When we designed plotKML we took an object-based view: a point structure gives you 2D spatial points, a grid structure gives you spatial pixels or grids, areas give you spatial polygons; in 3D you can have voxels and monoliths, then irregular voxels; with space-time you have the same set again, and finally 3D plus time. We looked at all these cases and said, let's try to visualize them all. Well, the voxels never quite worked out; there is a way to do a kind of voxel display, like Minecraft-style objects, but it is still tricky and I am not sure how I would do it, although it is theoretically possible in Google Earth to show voxels rather than just pixels, because you can build COLLADA-type objects and display them, like all these buildings.

This is just a simple example. We also played a lot with what we call aesthetics; I think the tmap package explains this very well too. You use visual variables to display a value. In this case we have point samples and their values, I think it is sand content, so how many visual variables do you see? Visual variables are things like colour to emphasize differences, the size of the bubble, and here also the text, so three: you can see the differences by the text, by the size of the bubble and by the colour. Why use three? Maybe the visualization is a bit inefficient, but using three is like underlining and highlighting at the same time: you make sure people do not miss it, you emphasize, maybe you make it more dramatic. And it is nice to add the text, because people not only see roughly where the higher values are, they see the actual values.

Then there is a comparison of the same aesthetics in lattice, and today people would use ggplot, versus Google Earth: the same aesthetics, except without the numbers, and obviously as soon as you get something into Google Earth it is much nicer to interact with and to explore. You can go a step further: shape, colour, labels, altitude are all aesthetics, and you can add elevation proportional to the value, so where you have high values the point sticks up high; it is like acupuncture applied to the Earth. It is very simple, the altitude is just the value multiplied by ten to emphasize it even more, and you get this effect. Sometimes it helps people see the trend, where the high and low values are, and sometimes you see high values right next to low values, like a bridge of values. And we also made lower-level functions, so once you figure out what you want, you can create your own visualization functions: you can do kml_open,
open the KML file for writing, then add multiple layers, and then kml_close it. You can also add a logo as a screen overlay; you can do lots of things. That is exactly what I do in this example; this is lower-level programming, so I do not use kml_open and kml_close here, I just use the kml() call, but I define a lot of parameters, probably too many. If you look at the documentation and the examples you will see you can change many of them, and by changing them you get something like this. How much time would it take you to make something like this by hand, including the legend in the plot? Quite some time, so this saves you a lot of work. Even the PNGs I had to figure out: PNGs usually do not display so well in Google Earth, so I had to work around that, with built-in transparency and a way to render the PNG so you can see the actual pixels.

Eventually the effect is very good, because if you have a space-time object you can see the changes in the distribution of the species, and you can also switch to the time slider and the historical imagery in Google Earth, which is very nice because you can really do interpretation. You can go back in time: for this area there is no high-resolution imagery before 2006, but from 2006 on I can scroll and see whether there are changes, and there are some. This patch, for example, used to be a bit more open and a couple of years later it has filled in. Do you see the difference? Something is changing; everything is dynamic. The only Greek philosopher whose theories are still largely standing, while the others turned out to be too naive, is the one who said that the only thing that is certain, the only thing that is stable, is change.

So this is how you can do visual interpretation. There are a lot of photographs, but honestly they are scattered across different systems, which is a real pity; the iNaturalist photos, for example. It is a pity we cannot easily add them in Google Earth, it depends on which licences Google accepts, but it is very nice to look at a photo and see what an area looks like even where there is no Street View. That is why I think this package is still useful; I am not maintaining plotKML so actively any more, I have not touched it for a couple of years, but it works, and thanks to Andrea it now also works with sf classes, so you can pass sf objects. Just be careful: you cannot plot large rasters with it, that is not going to work. Also with rasters, sometimes it is a good idea to plot them differently:
if I want to see the effect of the individual pixels, a good way is to convert the pixels to polygons. You can do that, and you get a much bigger file, but why convert pixels to polygons? Because when you are dealing with aggregation and scale issues, you really want to see what falls inside one pixel, and with polygons the pixels are defined precisely. If I export a PNG, Google Earth compresses it and renders it in a way that loses the edges; but when you convert the grid to polygons, it is a single line of code, grid2poly, which I wrote, and every pixel becomes a polygon. The polygon file is maybe ten times bigger than the raster, because a raster is a super simple structure while a polygon has to store every corner, but it is very precise, and you can see that inside one pixel there was a bit of this and a bit of that. You see the pixel complexity; once you zoom out the pixels look homogeneous, but they are not, there is still quite some diversity. (A Python sketch of this pixel-to-polygon idea follows at the end of this part.) Today, with Sentinel-2, the cutting-edge resolution for the world is 10 m, and the first global products at 10 m are popping up, like the 10 m global land cover; beyond 10 m you can see objects smaller than that, and eventually individual trees. Whether that matters depends, of course, on the purpose of the data and the analysis.

That is another thing about plotKML: I maybe did not spend so much time optimizing it, but I did spend a lot of time thinking about the diversity of objects. The tutorial is a visual tutorial: you see a visual example, then you see the code, so you can plug in your own data and you should get a similar kind of view. That was the idea, a visual tutorial, and I put in a lot of examples: terrain modelling and hydrology with contours, overlays, the grid-to-polygon conversion I was just showing, and also some space-time examples. Let's look at the space-time one; I already opened it in Google Earth. We also created this dataset, and by the way we use it in modelling: it is a time series of MODIS, and we can scroll through time and see the values changing; I think I only included a few months.
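As promised above, here is a rough Python analogue of that pixel-to-polygon idea (the talk itself uses plotKML's grid2poly in R); the raster values, pixel size and origin are made up, and each cell is traced as its own polygon so the pixel edges stay exact when exported and styled.

```python
import numpy as np
from affine import Affine
from shapely.geometry import box

# Tiny fake class raster; in practice you would read it with rasterio
data = np.array([[1, 1, 2],
                 [1, 2, 2],
                 [3, 3, 2]], dtype="uint8")
transform = Affine(100.0, 0.0, 500000.0,
                   0.0, -100.0, 4650000.0)   # hypothetical 100 m pixels and origin

pixel_polys = []
for row in range(data.shape[0]):
    for col in range(data.shape[1]):
        x_min, y_min = transform * (col, row + 1)   # lower-left corner of the cell
        x_max, y_max = transform * (col + 1, row)   # upper-right corner of the cell
        pixel_polys.append((box(x_min, y_min, x_max, y_max), int(data[row, col])))

# Each (polygon, value) pair can then be written out (e.g. with geopandas or fiona)
# so that every pixel keeps crisp edges when viewed in Google Earth.
print(len(pixel_polys), pixel_polys[0])
```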
I also added this layer with the stations. Remember, this is a way to visualize changes: for each station I make a plot in R; if I had a thousand stations I would make a thousand plots, and then I attach these PNGs with plotKML as the point descriptions. When you click on a station point you see, for that meteorological station and that year, the observed values, including the missing values; this is the actual measured data. Then you compare it with the satellite images, the 1 km MODIS, and you can see how things change; it also shows that the layers have different missing values, which is simply the reality of this MODIS data. So that is the space-time dataset. I had the time slider turned on, which is why I could not see it at first; you have to learn how to play with Google Earth. It renders very fast with smaller PNGs: anything up to maybe 10 MB per PNG will be fast and will not eat much RAM. So you can really visualize things and see the differences, and if you combine it with plotting in R you get a kind of scientific visualization where people interact with the map, click on stations and get the curves. That is how you visualize space-time data.

Then there is the Cook farm data, which is also in the tutorial; a small dataset, but it is 3D plus time. What I created here, I think for one month of soil moisture, does not look like much, but it is about 2000 PNGs, because you have every day times five depths. Now we can zoom in somewhere and animate what happens with the soil moisture; sometimes the deeper soil holds the higher moisture, there are places where the water accumulates, and we discovered it is usually the soil texture, a more clayey soil or a Bt horizon, that keeps the moisture. So this is a full-fledged 4D GIS. Have you ever seen a 4D GIS, maybe in a computer game? This is a real scientific 4D GIS, and at the moment it is only possible in Google Earth as far as I know; maybe in GRASS GIS too. Veronica will talk today about GRASS GIS and its spatio-temporal capabilities, although I do not know about full 3D plus time there. These are the depths, 30, 60, 90, 120 and 150 centimetres, and the trick I use is to stack the layers above each other in the sky, so I simulate the soil layers above the ground, because there is no way in Google Earth to go below the ground. Have you tried? There is no way. But have you tried going into the ocean? You can zoom into the ocean, you will sink, and eventually you go under it.
Have you tried going into the ocean? There are some places where they have put underground models and things, so you can even see those. But in the ocean you can go inside, and here in the ocean there is a point, I don't know, I think they put some photography there. So now we are on the bottom of the ocean and we can see; have you ever looked at that? You can see the mountains under the water; now I am going below. People often do not even know about a lot of this functionality, and the people who made Google Earth, and the people who use it, probably never imagined that somebody would figure out how to visualize underground structures and processes by putting them above the ground. But the camera is very flexible, and when I started using it I said, wow, the sky is really the limit. If you are creative and if you know the code, you can really create these scientific visualizations, especially for geography. Then you just send people a KML file and they open it, and if you put most of the data on the web, the KML file just calls the data from the server, so the KML file itself can be one kilobyte. You send them one kilobyte, and you can even tell a story: you can put in slides and videos, you can even animate it, you can record how the story should go, first show this, then show that. So it is really an out-of-the-box solution, and I was thinking that in the future we would deliver whole projects by just giving people a KML file. But it is not so simple. We started with OpenLandMap, where we also had the ambition to put everything into Google Earth; there is a button there, "Google Earth". But it was not simple, it took us a long time to convert things, and I don't know if the result is good; I can show you that now. All the layers that are in GeoServer can also be added in Google Earth, and then you can pick the layer you want, so there are all the spatial layers. Let's say this is the topographic wetness index; let's turn it on. So now it should load, and I have to turn off the time somewhere. It was still loading, so it was not easy, and we still do not have the best effect; this one does not come through, it is still being generated on the GeoServer side. Let me try something else, northness; let's see if this works. No, it does not work, we have to update it, it is not simple. It is supposed to call the PNGs directly from WMS, and this is the formula, but it could be that the URL changed. Yes, I think we changed the URL, so it can keep trying forever, it is not going to work. I will have to update this, but that was the idea: to put everything into Google Earth. I think that is going to be the way to do projects, but unfortunately it is not so simple, and this plotKML has the potential, but I will have to reprogram it, I think from scratch. At that time I did not even know how to program in R properly, I just did the best I could; now I understand the logic, so I will reprogram it. I was hoping somebody else would take it over, but unfortunately that does not happen so easily. So that was plotKML. Let me go back to the slides. If you have never used it, remember that the easiest way is to go to the gallery.
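Before moving on, to make the one-kilobyte-KML point from above concrete, here is a tiny sketch written from R: the whole file is just a NetworkLink pointing at data hosted somewhere else (the URL is a placeholder).

```r
# A KML that only references remote content; Google Earth fetches the data
# from the server each time the file is opened.
writeLines(c(
  '<?xml version="1.0" encoding="UTF-8"?>',
  '<kml xmlns="http://www.opengis.net/kml/2.2">',
  '  <NetworkLink>',
  '    <name>Forest predictions</name>',
  '    <Link><href>https://example.org/data/forest_predictions.kml</href></Link>',
  '  </NetworkLink>',
  '</kml>'), "project.kml")
```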
In the gallery you say: I would like a visualization something like this, and then you see the sample code. This one is the distribution of the Yeti in the US; it is actual data, by the way, there is a paper published about it. In the US lots of people report seeing Yetis and aliens, there is an organization that collects this data, and then somebody built a species distribution model for the Yeti and published the paper. I reconstructed it: I took the data from the paper and I prepared this little Yeti logo I found somewhere. What you see in the back is the code, and the paper; where is the paper, maybe I did not put it here. So you go to the dataset, the dataset name, and then you can reproduce it. I will download it now; let's take a look, bigfoot. You can see these are the predictions; there is something with these pixels, they overlap, let me see. And this is the model; I can also show the model exactly for the Yeti, this is the variable importance, what is most important, and that is the probability. You can see the matches. They discovered it is highly correlated with the brown bear, I think, so they kind of wanted to prove that what people report is actually the brown bear. But there are a lot of observations. Let me check; let me search for bigfoot. Bigfoot, yes. There is a help text, and there is also an article in a journal, yes, the Bigfoot Field Researchers Organization, BFRO; I have heard of it. And you see here the code showing how to reconstruct it; it is a couple of steps, not much: you fit the model, then you make the predictions and plot the predictions. And here is the paper: "Predicting the distribution of Sasquatch in western North America: anything goes with ecological niche modelling". That is the name of the paper; I will send it to you through Mattermost. Sometimes a nice way to popularize science is to publish papers like this. So that is also plotKML: you can do many things, and you can pack the results of an analysis with screen overlays, ground overlays, points and polygons, and you can play with the styling. And once you visualize the data and then share it, people can actually see it, they can set the transparency and they can see where exactly, and why, there are changes. That is quite powerful, because when you zoom in and display it in 3D you can see where in the landscape things happen. A lot of it, as you can see here, usually comes out on slopes, in sloping areas. Is that correct, that slope comes out as a good predictor? Well, yes, most of the forests are still on the steep areas, which are difficult to access. Thank you. So that is plotKML. There are other ways to visualize data; one of them is to use the Google Earth Engine map package, but that one is in Python, it is a Python package, and you basically do it by uploading data to Google Earth Engine and then visualizing it there. Yes, Pablo? The slides? Oh yes, sure.
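As a rough sketch of that idea of packing results into one KML with plotKML (treat it only as a starting point: the objects are placeholders and the exact argument names differ between the kml_layer methods, so check the package help):

```r
library(sp)
library(plotKML)

# Assumed inputs: 'pred' is a SpatialPixelsDataFrame with a column 'prob'
# (the prediction surface) and 'occ' is a SpatialPointsDataFrame of occurrences.
kml_open("bigfoot_results.kml")
kml_layer(pred, colour = prob)                          # predictions as a ground overlay
kml_layer(occ, colour = prob,
          shape = "http://example.com/yeti_icon.png")   # observations as placemarks with a custom icon
kml_close("bigfoot_results.kml")
```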
You can use this package for Google Earth Engine maps. Oh, and by the way, Google does not make many changes to Google Earth anymore. For me it was amazing software ten years ago; I thought, finally, this is the proper GIS, especially when you do KML and mash up all the photographs. I said, this is it, and it is fast and scalable. And now I notice they make very few changes, very little has changed in the last five years, and for a corporation like Google that means they are not interested. Then they started with earth.google.com, which is a web version, and it has some extra things. However, if I upload my KML file, it does not support it properly. So I do not know what they want to do with this; it almost seems you can exclusively just watch what they have. Because if I go and say I want to upload something, it is even difficult to remember where that is. I can do projects, I think: create project, create project in Google Drive, I can do that. So I can call it, say, "test", and then I say new feature. So now I think I can upload, and I can share with people. But how do I upload a placemark? I can just add this one. But if I want to upload a KML file, I am not sure how I do that. Have you maybe tried using this? It looks quite difficult, and it almost seems to me that there is really no way to put in a KML; I can definitely not drag and drop it. Here I have some Voyager, "I'm feeling lucky", these are the built-in project-style things, but I am not sure how I upload a KML file. Have you tried playing with this? It creates something on my Google Drive. I wish I could just put a full-screen slide; there is draw line, placemark, it has very little functionality. I can export, I could do stuff manually, but I do not know if I can do anything serious, like uploading a lot of data, and I would like that a lot, of course, because it would be nice to be able to put the data here and then just send the URL to people so that they can see it. I am looking at, where, which one? You have it, I do not have it. Yeah, I do not know; let me reload. And you get some button to add. OK, let me try to make a new project. Open, create project, import KML, KML from computer. OK, let's do this. I go to the ones I prepared; I literally want the last ones I prepared. Let's take a look. Even if I put just something like this, these are just points; if I put this, it seems to work, yes, so I can share the points. This is exactly the same as I created them in Google Earth. But if I want to add a ground overlay, say these forest predictions I made (which one is the forest; sorry, I am in the wrong folder; here are the forest predictions I made), I cannot upload both, it does not work. And of course if I just put this one, I get no ground overlay. So how do I get the ground overlay, that is the question. I could put a picture here, maybe. OK, so now I could maybe link it, select from your device. OK, let's take a look; that was the 2000 one, it does not matter, I put this one; still not what we want to show. So yeah, it does not look simple, it is definitely not simple. Now look, it shows here in the preview.
I think it attached it, probably, as a description; it is just added to the description, I think. Let me remove that, but I cannot add the ground overlay itself. So maybe I am doing it the wrong way, but I tried a few times and I gave up; I do not have the time. I think they do not support ground overlays hosted on some third party; it has to be all in Google Drive or something, I do not know. But let's say it is possible; then, once you finish the project, you can share that project somehow. You can go and share it, not as HTML, but there should be a way to share just a URL. Again, I did not play too much with that, so I do not know how to do it, but if you figure it out, that would be a way. Otherwise you can always use the desktop Google Earth, of course, which works with no problem. And maybe it is better to use that, because it has a cache; in the Google Earth options you can increase the cache, so you can say I want a larger cache. Now it is one gigabyte, I can put, you know, six gigabytes. So increase the cache and then it is a bit faster; you can see I can move really fast, and because it built the cache it goes very fast, as fast as QGIS. But if you figure out how to share the data just by sending a URL, so that people can see exactly that, please let me know; that would be very nice, but it is not easy. Maybe it works by combining it with Google Drive, you know, you put the file on Google Drive or something, but then how does it know the address of that file, that I do not know. So maybe that is something to look into, but as I said, I have not figured out an easy way to do it. OK, and with this I can stop; I think I showed you more or less everything, let me just make the conclusions. Oh yes, there is also mapview. It is a nice R package, actively maintained, but it is usually 2D: you basically create an HTML file, and it builds on top of leaflet, I think. So you can create these interactive views, you can click on the points, you can change transparency and background layers. It is also GIS out of the box: you send an HTML file to someone, they open the HTML file and they get a GIS in the browser. It also has aesthetics and you can do lots of customization, and you can also do comparisons. And then, if you want to visualize something in Google Earth but it is a large raster, remember: do not load a large raster into plotKML, you are just going to crash your machine, that is a road to nowhere. What you do instead is use gdal2tiles. When you install GDAL, and if you have Python, you just run this Python script: you specify which projection you want as output and you specify the output directory, and that is it, it creates a KML, and when you open it you have the pyramids and the zoom levels, and you can pick how many levels you want; I put here one to six. The more levels you want, the more it computes, exponentially more. So if you say I want only five levels, OK; if you want six levels, then it computes, I don't know, four times longer.
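Going back to the mapview option for a moment, a small sketch (the input file and column name are placeholders; see the mapview documentation for the full set of options):

```r
library(sf)
library(mapview)

pts <- st_read("observations.geojson")       # any vector layer readable by sf
m   <- mapview(pts, zcol = "prob")           # interactive leaflet map, coloured by an attribute
mapshot(m, url = "observations_map.html")    # standalone HTML file you can send to anyone
```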
Back to the zoom levels: if you run seven levels, it computes 16 times longer, it is exponentially more. But before you can create the tiles, you first have to take the layer and customize it. Maybe I can do that now: I take the layer in QGIS and customize it until I am happy with it. Let me turn this off; so here I customize the layer, I set transparency, I can turn transparency off, and so on. Then I save that layer: I say Export, and if you do not do that you will not get your changes, then "Save as rendered image", and basically we export to GeoTIFF, but we export it as RGB. Let me make a test. What it creates is a TIFF which is basically RGB; maybe I give it a name with RGB in it, and I can set compression, something like this. So now I created this RGB file, and this RGB is not the values, it is a coloured map. If I look at it in my file folder, I can actually open it in Windows as well, and you see exactly the colours; it is not the raw data, it is a rendered view. OK. Now that I have the rendered view, I go to this folder, actually it is the same folder, and I copy the command. Let me make some folder for testing, like this; so now it is empty. And in my session, I am on Windows now and I have not actually tested it there, so let's see if this is going to work: I give some title, it can be "test", zoom levels one to six, geodetic, and then the output folder "test" and the input test_RGB. Let's take a look: there is the file called test_RGB, yes, and I put the output in the local folder, which is "test". I cannot remember what this -k is for; that happens when you do not remember what you did. So let's see if this works. No, I get a 127 error message. Let me remove a bit of this; maybe this gdal2tiles, let me see, maybe I do not have it. Terminal; let me see if I have gdal2tiles. "No module named osgeo", yeah, I will have to update GDAL, but if I run it under Ubuntu it should be much easier. I just have to copy the TIFF I created to this exchange folder and try it there. And if everything is good, it makes a KML with all the pyramids automatically; it is so nice that people share that code, it is universal. And then you can create large GeoTIFF ground overlays that you can open in Google Earth, and if you put all the PNGs on a server, then you just send the KML file. So that is the last thing I wanted to show; I will still show that example, just to wrap up. With KML there are many things you do not even know you can do, but there is really a lot: you can make a 4D GIS, you can also make voxels, that is possible, you can put in photographs, embed photographs, and you can also put photographs in a display. That looks something like this, and it is really actually mind-blowing. Let me open this; OK, I just have to remember which one it is. So here is a photograph, yeah, here is a photograph somewhere; maybe I should put some date; maybe I moved the photograph, but you can see the photograph rendered like this, so it is rendered like an object.
So you can put a polygon and then you put photographs on it to represent something; that is also possible. Let me see; I am looking at the time, and I also have to worry about my RAM, I see that my computer is slowing down a bit, so it is a good moment to check the RAM. So, this was the folder with the data, and I had this exchange folder, and I put this RGB file somewhere; where are you; yeah, this one. I will copy it to some local folder here. Now I can go to RStudio, I have to create here some test folder, "test", and now let's try to run the command. I have to turn some things off. So the command was this; let's see, now I run it under Ubuntu. Let me put in the new command; I am in the right folder now; let's remove all this; then I go to the workspace, and let's try to run gdal2tiles now. Now it is running. So it created the overlays, and you see, we have them in the test folder; we can look at it through OpenLayers or Google Maps, so you can look at it in different ways. Let's try the Google Maps one; this one does not show anything. Let's check the KML; I did not generate one, because I do not see any KML created, let me check gdal2tiles. I think it is the -k, that is the KML option, -k for KML. OK, so I just had to put the -k there, that is the one I deleted. Now I have the KML, and let's see if this works. Yes, now we see it, we get this "test" entry, but it zooms somewhere strange and does not show it; maybe it covers the whole world. It does not show it, but it generates the pyramids. Where is the area, somewhere in Italy? I do not see it anywhere; do you remember that tile, where is it, on the border with Switzerland or something? I do not see anything, it is difficult to see, but I also have to turn off all these roads and 3D buildings and so on. So it looks like something went wrong. Maybe I forgot something; I have not done this for some time. I think I have to reproject to lat-long first, because the KML super-overlay uses EPSG:4326, and I did not do that, I took the European coordinate system. So I will have to reproject first, then make that GeoTIFF, and then build the tiles; but it works, you get a KML, you can open it, and it is also out of the box. I am not going to redo it now, because for this GeoTIFF I would have to reproject first. And with this I will stop sharing live. Let me see if there are any questions in the chat. You are complaining about the sound? No, the sound is OK. OK, "uploading to Google Earth web seems described here", so somebody shared a link; let me also put that in Mattermost, thank you. No, we do not want to watch a commercial, but it looks like it is explained there; so somebody shared how to upload KML files, including ground overlays, to the web Google Earth. So let's take a look at it, please, and see if it works; if you can get the ground overlays, that is what interests us, and whether you can also do the screen overlays. And I will stop the live stream now, and I will also stop the video recording. So thank you all for following; that was a short lesson about how to get data from R to QGIS and to Google Earth using code.
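For reference, the gdal2tiles steps demonstrated above boil down to something like the following, wrapped here in R system() calls so it fits the rest of the tutorial; file names and the title are placeholders, and the options should be double-checked against gdal2tiles.py --help for your GDAL version.

```r
# 1. Reproject the rendered RGB GeoTIFF exported from QGIS to lon/lat,
#    since the geodetic KML super-overlay expects EPSG:4326.
system("gdalwarp -t_srs EPSG:4326 test_RGB.tif test_RGB_ll.tif")

# 2. Build the tile pyramid (zoom levels 1 to 6) plus the KML super-overlay (-k).
system("gdal2tiles.py -p geodetic -z 1-6 -k -t 'test' test_RGB_ll.tif test/")
```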
|
Software requirements: opengeohub/r-geo docker image (R, rgdal, terra, mlr3), QGIS, Google Earth Pro. This tutorial shows how to access COG files using QGIS, how to use gdal2tiles to produce plots in Google Earth (local copy), and how to use the plotKML package to visualize data in Google Earth.
|
10.5446/55244 (DOI)
|
Let's get started here. What's the motivation? Well, for example, to understand Earth system processes: a classical scenario where lots of data is involved in order to fully understand how the system functions. All these different data sets are located in different archives, very often cloud-based these days. And one of the key aspects is really that we not only have so many different data sets, but we have lots of processes that actually work on these data sets. These processes then need to be applied to very heterogeneous data. And often enough, we do not download the data anymore; we need to somehow bring the application to the data to enable in-cloud processing. The key challenge then is: which of the processes can actually be applied to what data? This mapping between processes and data is extremely important. So let's look at a typical in-cloud scientific workflow scenario. Here's a scenario I received from the German Aerospace Center, DLR. It's a multi-data processing environment that uses data from different satellites and conducts a number of pre-processing steps, as we see on the left-hand side, like cloud masking, index calculation, temporal aggregation statistics, and so on. Then there is a second major processing step where the actual classification of settlements, based on a random forest classifier, is executed, and we get the World Settlement Footprint as the final product. So, a classical workflow. If we go back to our data cloud environment, we have these three different data sets, we have the major processing steps, and we want to produce our results. If we decompose this a little bit for our actual scenario, we have Sentinel-2, the multispectral data, we have analysis-ready data coming from Landsat Collection 2, and we have some SAR data from Sentinel-1. We run all these different processes that we can see here, and at the end we get the settlement mask. We can now do this all in a cloud environment; we have all the components in one environment, the data and the processes. So we basically need to design our workflow, our scientific workflow or processing graph, and then we have the results. As long as this is all in one toolbox, like if you, for example, try to execute something like that in Google Earth Engine, it's rather simple, but then we are fully limited to the functionality and the data that is provided by this toolbox. What we want to have instead is multiple environments, and for sure we then need some inter-cloud communication. We process the different data sets at their native location, and then we may need to exchange some results. And we have a number of, let's say, widely accepted algorithms, for example the ESA SNAP package for satellite data processing. Now, it would be much easier for these different cloud environments if we had a single API supporting, well, both of them in this case, or maybe all three of them if we consider three processing clouds like in our example. This API could support some sort of widely accepted processing, and it needs to allow fine-grained access to the results that come out of these different data processing steps. Then what do we need in addition? Well, we need a mechanism to develop additional applications, because as we have seen in our scenario, some are very standard processes, like cloud masking.
But others, for example the random forest classifier, are tools that need to be developed by experts and are very custom-made. What we need to do is develop this tool somewhere, then deploy it in the clouds, and then execute it there, next to the physical location of the data, to avoid costly data transfers. In the Open Geospatial Consortium, what we did over the last couple of years, with support mostly from ESA and some support from NASA and Natural Resources Canada, was to develop an architecture and API. The API is called DAPA, the Data Access and Processing API. It allows access to the data sets and provides support for a number of classical processing steps or functionalities, like statistical values, min, max, average, temporal aggregation, and so on. In addition to that API, we developed a second one, which is called ADES, the Application Deployment and Execution Service. That one allows us to develop an application, submit it to the cloud via a standardized interface, request its deployment, and then its execution with a specific parameterization. So we have two APIs that complement each other. One provides access to the data and to, let's say, built-in functions, and the other one is an API that allows us to develop an application, describe it with all necessary detail, and then submit it to the cloud environment, where it can be used to process the data that is already part of that environment. Here's what it looks like. We have these two roles. On the left-hand side we have the application developer; that developer builds the application, maybe locally, packs it into a container, ships the container together with some metadata to the cloud environment, and there it gets deployed and then executed on demand. On the right-hand side we have the application consumer, who on the one side uses the DAPA API to access the products that are already there, or uses the functions that are already supported, these classical, widely accepted functions. But on the other side, this application consumer can even say: hey, I want to use the applications developed by these application developers. The beauty is that we have two completely decoupled cycles here. This works pretty much like any app store we use on our cell phones: someone develops an application, the consumer discovers it and thinks, oh, this random forest classifier could be used for my world settlement application, so I want to give that a try, requests its deployment within the cloud, and then executes it to see what products come out of it. So this is actually a first, very important step towards a marketplace for Earth observation applications. And it's a great chance for application developers, because whenever they have produced something they think could be of value to others, they can make it available. They could do this before as open source, but here they have a chance to, in addition, make it available as an application that can be loaded on request, so they can sell their knowledge, and the work they invested into the development of that application actually gets reimbursed. So it's the generation of a new market that we see here. If you want to try things out, well, there are options to do so. One is a product coming out of an OGC initiative called Testbed 16.
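As a purely hypothetical aside on what the consumer side of such a deploy-and-execute request could look like in practice: the base URL, process id and input names below are invented, and only the /processes and /processes/{id}/execution paths follow the OGC API Processes pattern.

```r
library(httr)

base <- "https://example-exploitation-platform.eu/ogcapi"   # placeholder platform URL

# Discover which applications/processes the platform exposes.
procs <- content(GET(paste0(base, "/processes")))

# Request execution of a (hypothetical) deployed settlement classifier
# for an area and time of interest.
body <- list(inputs = list(
  aoi = "POLYGON((5 45, 15 45, 15 55, 5 55, 5 45))",
  toi = "2020-01-01/2020-12-31"
))
resp <- POST(paste0(base, "/processes/settlement-classifier/execution"),
             body = body, encode = "json")
status_code(resp)   # a job reference or the result, depending on the server setup
```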
Regarding Testbed 16: you see the link here. That environment, developed by EOX in Austria, allows you to play around with the Data Access and Processing API. It makes data from a number of collections available, Landsat, Sentinel, MODIS, and it provides a Jupyter environment to run all the experiments. It's all cloud-based and all for free; you can apply for a free account and then execute these different processing workflows. There are a number of tutorials available, the data sets are described in detail, and everything runs in a Jupyter notebook, or a couple of them. So it's straightforward to experiment with, and it's quite interesting to see how easy it can be these days to use something developed by someone else for your own scientific workflows. Then there are the experiments that we did. One is Testbed 16, which we have already mentioned; we have three major engineering reports that came out of this initiative. One is about data access and processing: how can I apply functions in the cloud? The second is about the API we have seen: what does the API actually look like, how can I interact with it? And the third is specific to the usage of what we call Earth observation application packages, which bundle the application: the software container, all the metadata you need, together with Jupyter notebooks, which are probably among the best mechanisms we have these days to interact with those different applications. Then everything we have seen in the last couple of slides, the data access and processing as well as the application deployment and execution, was stress-tested in an initiative called the Earth Observation Applications Pilot. If you go to ogc.org/eoapps you find quite a number of engineering reports that describe this in detail. You find a number of YouTube videos, if you follow the link, that demonstrate how you can package your application into a container. You describe your application in terms of what it produces and what it requires, and you describe how the cloud environment mounts the data for your application. Keep in mind that your container is a very volatile thing: it is loaded when it executes, but it disappears after its termination. So somehow you need to make the data available to this environment, to this containerized application. That is all described in the application package. And if you deliver that, together with a description of what additional parameters a user needs to set in order to execute it, then you're all set to allow the user to make use of your application. There is future work currently ongoing in Testbed 17. One part is about geodata cubes. Even though at first sight a data cube appears to be just a multi-dimensional structure that allows us to access data in a very efficient way (that's what we thought at the beginning of this initiative), what we now see is a move away from a cube that is just a multi-dimensional structure, or just an indexing system on top of some data located in the cloud. We see that data cubes are often snapshots within scientific workflows, or within any processing workflows, whereas the modern conception of a data cube is not only storage for data, but a combination of data storage with a set of functions to interact with that cube. And the interaction is not limited to retrieving subsets of data from a cube; it again allows in-cube processing.
When we say in-cube processing, well, it may happen directly in the cube or in the cloud environment around the cube. But we see that the concept of geodata cubes is more and more aligned with the idea that you generate an application locally, submit it to a cloud environment, embed it there into a workflow, and then make individual snapshots of your processing chain available as data cubes and allow the consumers to interact with those snapshots, so that you may generate different visualizations or continue the workflow differently within specific user communities. So this is a very exciting field, where we first thought, well, we just generate an API to a cube, and what we now see is that it actually continues this application-to-the-data paradigm and mechanism. And then, for sure, there are some other interesting elements, like cloud-optimized GeoTIFF and the Zarr format to store the data, and both play a role in this context. First of all, lots of data these days is available as cloud-optimized GeoTIFF, so you can access it very, very efficiently, which is interesting when bringing the application to the cloud environment. And second, you need some mechanism to deliver your results, and interoperability to a large extent depends on your consumer actually understanding what you are delivering; we see Zarr possibly playing an important role in this context. And I think that concludes my presentation. I hope I could convey a bit of this application-to-the-data status quo, and thank you very much for your attention; I hope I stayed more or less in time. Yes, and I actually have to apologize: I told you 15 minutes, but your specific talk section has 25 minutes. But it's excellent, so we have more time for questions. So we're opening the floor. Questions? Okay, I will start. You mentioned FAIR, right? I'm just trying to think how you connect FAIR and business, because FAIR is a recipe for how you do business, so how does the OGC survive by supporting both business and FAIR? If you understand what I mean: if somebody makes a FAIR app, like in the workflow you mentioned, then it means I can come, copy that app and make another app. And that's correct, yes, and that, of course, is something businesses wouldn't like. So how do you basically have both? Right, so imagine you develop an application and you register it with a cloud environment. The consumer side is not actually retrieving this application, right? The consumer sees the application in a registry with a clear description: my application, for example, produces the settlement map on a global scale, it requires the following input data and it produces the following output data. So the consumer can request the deployment and execution of that app without ever seeing the app, because the app itself is deployed and executed within the cloud environment. And then someone needs to pay for it, right? At the moment we have different business models in the cloud space, from credit systems to flat rates, to paying per area, per processing cycle, per storage; at the moment we see a very, very interesting mix of business models.
The way we see it, the cloud then offers the execution of that application to the consumer at a specific price, saying: okay, you can execute our application for your area of interest and your time window for the following costs; if you agree, provide your credit card details and hit the execute button. From the OGC side, what's important for us is that on the one side we generate FAIR interfaces that make the application and the data products findable; we make them accessible, because you can interact with them via a standardized interface; we make them interoperable, because they are well described in terms of what they need, what they produce and what formats they deliver; and they are reusable, because once developed, the application can be reused by as many consumers as possible. But the consumers cannot necessarily directly copy the application; this is prevented by protecting the application within the cloud execution environment. Okay, yeah, that sounds like you solved it in theory; I just worry a bit about practice, because, as I said, businesses do not want to share the recipes, so eventually, if you make some app or map, you cannot see behind it. But I have another question for you, from a colleague connected online, Hannes Reuter: how do you see the OGC API in the future, especially for Earth observation? So I think Hannes wants to know if you could maybe predict what the next frontier is. We already see that on the one side we started with rather generic building blocks like OGC API Features, Coverages, Records, Tiles and Maps. What started right away was the development of specific profiles of these, like the Environmental Data Retrieval API, EDR. And I think what we will see in future is more of these types of profiled APIs that take elements from specific APIs and combine them with specific resource models to serve the specific needs of a community. The APIs as we see them right now have the advantage that you can bring them live very efficiently, very quickly, and then you can do a couple of things with them. But in order to really serve the specific needs of a community, you will need to develop specific profiles; you will need to say, okay, this API always delivers these types of results. That's the situation we currently see. And then we have a spectrum, right? You can argue that basically everything is a process; even the production of a map could be considered a process. So on the one side you can think of the full spectrum where everything is a process and needs to be handled like a process, and on the other side, all you want is a map. What we see is that we will have specific APIs within this spectrum. Some are very convenient, we call them the convenience APIs; these APIs do a very specific thing and nothing else. With OGC API Maps, for example, you can get a map, but you cannot generate a multi-dimensional data cube or something like that. Then we have APIs that are fully generic; they allow you to execute any type of process. This is OGC API Processes, which then requires that you describe the process in detail, a rather tedious, detailed description. You have lots of degrees of freedom, but almost every process is supported. And application programming interfaces like EDR, the Environmental Data Retrieval, or DAPA, the Data Access and Processing API, are somewhere in between these extremes.
And I think what we will definitely see is that the full spectrum will be covered and specialized APIs will emerge like the EDR API already did. And we will see more and more of them. And one of the key challenges for OTC and that's why we are investing heavily into semantic reference and link data mechanism at the moment is to make sure that you understand what your profile is actually doing. What is your profile being based off to make sure that any user of a specific API knows what they're actually dealing with? Just the last question. So Cloud Doc and GTIF, we use it now. It's a main data format. I really wish this thing existed 10 years ago. But what about the vector data? What's the cloud format? The OTC recommends for vector data as a cloud, so fully scalable. Yeah. Please ask me about two months time. Yeah. Right now working on that topic. We know that Cog works well for the Rasta data and we have now started an internal initiative where we look at data warehouses that are storing vector data. And we are currently comparing the big vendors data warehouse solutions, like from Google, from Microsoft, from Oracle, from the Apache side. So we are comparing all of these. We are investigating what mechanisms they offer to support geospatial properties. And then we compare the results sets. And we are currently busy doing this comparison. And once it is completed, which I think will be happening within the next two months, then I think we are much closer at defining or providing recommendations for the ideal cloud optimus vector format.
|
Ingo Simonis is chief technology innovation officer at the Open Geospatial Consortium (OGC). At the ODSE conference, he explained how platforms for the Exploitation of Earth Observation (EO) data have been developed by public and private companies in order to foster the usage of EO data and expand the market of Earth Observation-derived information. His talk described the general architecture, demonstrated the Best Practices, and included recommendations for the application design patterns, package encoding, container and data interfaces for data stage-in and stage-out strategies. The session further outlined how to interact with a respective system using Jupyter Notebooks and OGC Web APIs.
|
10.5446/55245 (DOI)
|
My name is Hannes Reuter. I'm speaking here on behalf of the GISCO team, and I will present to you today about harmonizing pan-EU datasets and share a little bit of what we are doing in GISCO. You might wonder what I'm doing here and why I'm presenting. I will take a slightly different angle, because you have already heard Daniele Rizzi on the policy side this morning, and the colleague, what was his name, sorry, I forgot, Matt from CINEA. So I'm going to go straight to where we are doing really implementation work. And you might wonder why I'm speaking here on behalf of Eurostat: because I'm working in the GISCO team, and for legacy reasons we have been, for somewhere around 20 or 25 years, in the statistical office of the European Union, located in the beautiful town of Luxembourg in the state of Luxembourg. So if you ever come by, you can drop by our nice building, which you see here in the bottom right, visit us and look at what we're doing for statistics. But GISCO as such has less to do with statistics; we publish just one statistical dataset, which is the total surface area and total land area for the whole European Union. The rest is really geospatial data. And you might wonder what GISCO stands for; a little joke apart, the Commission is known for its acronyms. GISCO stands for the Geographic Information System of the Commission, or, as I sometimes joke, of Coordination. What we do there is the whole stream of work you do in GIS: localizing, analyzing and visualizing datasets. I hope that by the end of my talk today you will have a bit of an understanding of what we're doing and why we're also doing these data harmonization efforts. As you see here, we run quite a number of things: data procurement from the member states, from EuroGeographics, from commercial sources, from OpenStreetMap, and not just data procurement but also data harmonization; we do things with the data, we analyze it, and in the end we put it out as visualization products, and you will see that during the talk. So we really have a triple role. We are a service provider for Eurostat: if you look at the Eurostat visualizations like Regions in Europe or the regional yearbook, we do the visualizations for them. We are a service provider for the European Commission: for example we compile, and this is what I want to share here today, how we compile and harmonize pan-EU datasets, for example the locations of healthcare services. And then we also coordinate, in partnership with member states, and we give out grants where we try to bring together statistical and geo information to produce added value, similar to what you are trying to do with the Geoharmonizer. So, I hope you can see the slides here: GISCO services. If I try to tell my mom what I'm doing, and explain what I've just explained to you, I sometimes get: Hannes, what are you doing? And my standard sentence now is that I tell them: we are the Google Maps of the European Commission. And then I see: ah, you're doing the maps, and I can type and I can route. Yes, yes, we do that, but we are not Google Maps, no, no, we do that for the European Commission. So we have around 5 million users a day and we are spread across all the European institutions. I just mention a couple here and want to show you a few examples.
So, for example, Daniele mentioned earlier data.europa.eu, which receives funding from DG Connect and the Publications Office, and you see that the overview map on the left side, for example, is coming from GISCO; it is used there and is quite popular. Then, if any one of you has made an Erasmus application to study in a country abroad and needed to calculate the distance: there is a distance calculator in the Erasmus application. It is not developed by us, it is developed by the respective business units in the Erasmus programme, but we provide the backend for it. If you apply for a visa, you can also see in the EU immigration portal that, again, the backend is provided by us, a map driven by a register we provide. Last but not least, you can even see the building outlines. Here is my entry in our Who is Who, the telephone book of the European Commission, and there you can see where we are located. And we have not only a European focus but also a global focus; that is why we also rely heavily on OpenStreetMap, which needs to be modified for the political realities of our political leaders. So if you want to see globally where our colleagues from the External Action Service, NEAR or ECHO are working in the world, we also provide maps. Just to give you a general overview of what we are doing: with that, we provide a corporate-level service. We have our central database, and from it we make our background maps; we provide geocoding and reverse geocoding services, routing; we have an ID service, and I really liked the talk by Ingo just now where he mentioned the simplification of the OGC APIs. This ID service, for example, is quite popular for us, because suddenly people are using it where you never expected it to be used: you just put in some geometry, some coordinates, and it gives you back codes or geometries for the NUTS regions, for the countries. And it sometimes gets hit with 9 million requests over 12 hours, something like that. We also do simple things, like making a quick map for our policy officers with IMAGE, or we disseminate the files and even provide an internal metadata portal. Why did I spend a bit of time on this introduction? I want to put it now in the bigger framework of the United Nations. If we are talking about the Global Statistical Geospatial Framework, you come into the same play here: if you look at the pyramid of the key elements, you see the use of what we call the fundamental geospatial infrastructure and geocoding, which is the base for having geocoded unit record data in any data management environment. For someone who works on Earth observation that is not really so important; it only becomes important when you want to have your ground-truth data: how do you geocode an address, how do you locate a building? Then this becomes important, and that is the reason why we are working on creating pan-European datasets, to really bring in this fundamental data. You will see that this is a key document, and also a key vision, of where we are working and what drives us. Because if we are talking about interoperability, we have heard about OGC APIs, we know all this, we know common geometries, whether it is a country or a statistical unit; but when we really go down here to the base, then it gets really interesting and also technically quite challenging.
From our side, we have set out a couple of themes of geospatial data requirements which we want to have. This is in line with the United Nations UN-GGIM core reference data and in line with INSPIRE, so here we have a link back to all the legal talks you saw earlier from Daniele Rizzi. On the right side you see, for example, an artistic representation of postal codes, which we have already been disseminating from the GISCO site for a couple of years. So now I am coming to the part where I want to show you what we are compiling when we talk about pan-European datasets; the compilation issues and all these kinds of issues I will comment on in my lessons-learned section. Here, for example, you see healthcare and education; especially with COVID, healthcare was of quite high interest in the user community: how to use it, what is the current status, what is available and what needs to be done. And utility services. So here is an example from healthcare. As one type of analysis, we averaged the travel time to the nearest three hospitals at the country level, and this is what you sometimes get at a political level, this kind of statistical discussion. I put the URL down here: if you go to ec.europa.eu, then Eurostat, web GISCO, geodata, reference data, healthcare, you can get the datasets for download. Not this aggregated one, because it is still under validation, but the underlying point information, which I will show on the last slide. So here it is at the country level. But then, as we actually have record-level data thanks to our geocoded infrastructure, we can do it at the NUTS level. These are the statistical NUTS regions, sorry if not everybody is aware: statistical units which allow comparison across all the member states of the European Union; there is a legal act behind them, just to mention that. You can do it at the NUTS level, or at the local administrative unit level, which you see here. And then you look at it and you say: oh, we have an issue. For example, if you simply look at the LAU level in Spain, or here in the Carpathian mountains, or even in certain areas of Sweden, you say: wow, we see quite a lot of brown, so lots of travel time to the nearest three hospitals. But is that the reality? Because, luckily, we also have a one-kilometre population grid from the census, and the next census is currently in execution, so in the next two or three years we will have another one-kilometre grid. And then suddenly you see: we do not have an issue here in Spain, because in most of the countryside no one is living there, also in the Alps or here in the Carpathian mountains; but in other areas, like in Sweden or in the eastern part of Poland, there might be an issue. And this is where it becomes really analytically interesting from our side. Here again are the healthcare locations which we used for this, within this geospatial infrastructure, the GSGF approach I mentioned. A second example which I would like to present today is addresses, where we are again compiling data from the member states and bringing it together. I just want to show you a small front end to that one, because this is something we actually learned the hard way: an API is not enough, you always need to put a human-friendly interface on top of it.
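Coming back to the healthcare point layer for a moment, here is a small sketch of a first look at it in R, assuming you have downloaded the healthcare services data from the Eurostat/GISCO geodata pages mentioned above; the file name and the country-code column are placeholders.

```r
library(sf)
library(dplyr)

hosp <- st_read("healthcare_services.geojson")   # placeholder file name

# Quick sanity check: how many facilities per country code
# ('cc' is an assumed column name; inspect names(hosp) first).
hosp |>
  st_drop_geometry() |>
  count(cc, sort = TRUE)
```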
The front end shown here is actually not from me; it is from colleagues in the UK who figured that one out, and we are always trying to do the same: we try to provide the download package, we try to provide the API, and we try to provide a human-friendly interface on top of it. Here, for example, you see a house number, a street, and even the open location code and the URL, how to do that, and we are trying to complement that. And you might wonder why we are actually doing this. Because we receive geocoding requests to our infrastructure, and we see that people are entering what we call dirty information: the data being entered is not sanitized, it is not cross-checked at data entry. And there is this famous 1-10-100 rule of the total cost of data quality: it costs you 1 to fix it at data entry, it costs you 10 to fix it while it is in the database, but it costs you 100 if you cannot make the business decision, the political decision, or the sale to a client that you wanted to make. And that is the reason why we are trying to get a handle on it: to allow our colleagues in the European Commission, be it the Erasmus colleagues, be it RTD, to use this address API in the future to validate addresses at the data entry point. And you see over here what one of the issues is: for some of the countries we have data, like Ireland and the UK; for Sweden and Greece we are in progress; and for some of the countries, apparently, addresses do not exist, which in 2021 is sometimes really surprising. Last but not least, Daniele mentioned this morning the European Green Deal, and for that we are currently also looking into what is available on building units, because that is interesting in terms of energy efficiency. Just to give you a little flavour of how we actually do the work, because what we are showing you today is really the tip of the iceberg. First of all, we go data hunting. We go to the INSPIRE Geoportal, run by our colleagues at the JRC; Panos will present later on soil data. In the INSPIRE Geoportal we should see all the datasets from the member states. Listen: we should. It is not always the case. If that does not help, we go to data.europa.eu, which harvests different catalogues. If that does not help, we go to the national geoportals. If that does not help, we have to contact our local contacts in the national mapping agencies or statistical agencies to identify the datasets. Once we have that, we make a map like this one, which allows us to clarify for which countries we have identified data and what we have ingested. And then, for example, here are the building units from our colleagues in Slovakia, which is really nice, really cool; here we even have the building heights, just to give you a flavour, an impression. But then we look into it, especially when we are looking at energy efficiency: the data model of INSPIRE would allow sharing all this information, but suddenly we see the only thing we have in there is building height, and everything else is missing, or, as we call it in INSPIRE, voidable; they have left it out for the moment. But what we are actually interested in is this, and here, Tom, I would love to be in Wageningen with you, just to see what the national energy atlas of the Dutch does.
Because there, for example, you see for every single building in the Netherlands what its energy efficiency is. And then this becomes interesting, because with this you can make policy decisions, and it can be aggregated and passed on. So, coming to the lessons learned. What did we learn? You need to have creative people, people with geo and also IT knowledge, and they need to be motivated; then you can achieve a lot. If you do the kind of work we are doing, expect really long times to change things at the corporate level, because these people might be motivated, but they need to allocate the resources, they need to get the resources, and you should not get frustrated. I know most of the people here in the room are working in the research community, and there you just do it, yeah, cool. Having worked in the European Commission for eight years now, and being a former researcher: it takes a long time, and we need to not get frustrated. What I also learned the hard way: various tools for different things. It is always good to work in a mixed environment, be it ESRI, QGIS, whatever, and also a mixed human environment. And if I look at what we do in the daily business: what happens if you do something with five people, or with five students, or with 5,000 people, and in a concurrent environment? That might change completely how you do certain things. This lesson I learned the hard way, coming from a research perspective, but I had good trainers. Then the lessons learned on harmonizing datasets; I just want to mention a couple here. What we see is that we receive temporally outdated data. Sometimes datasets are hidden, or they are not documented, so they are really difficult to find. We find only generalized data. Datasets are not aligned, so the building dataset does not match the addresses, or vice versa. We see different approaches in the member states of the European Union: certain countries have a centralized endpoint, and certain member states say, yeah, you need to go to every commune, there is no central point. Some others, for example the Spanish, are really good: for addresses, every commune is responsible for its own addresses, but they have a central point, and you have over 5,000 files which you can download, everything according to the standard and coded. Another member state has a central dataset, but there is no quality control on it, which goes into what I mentioned here with the requirement for quality control, because we see varying quality across themes and countries. And believe me, we have seen everything by now, from really good datasets, with people who are really engaged and want to fix things, to something like: oh, we cannot do that, we do not want to do that. And sometimes, which really worries me, no official data exists, but a commercial dataset exists for the same area. For me, as a European citizen, this is something really challenging, I must say. What we also learned about our member states: if we tell them what we discover in our feedback loops, they are always really interested in what we are doing, they want to fix it, and they are eager to allocate the resources to move forward with it. Last but not least, in our team, or in our unit, we also run the LUCAS survey, and I know you are a heavy user of it; I just want to mention that one.
This is a dataset, which we are not doing and obtaining from a member state organization. This is a dataset with Eurostat does themselves. And we have a sampling strategy where we do a first level stratification with all the photos, then we make a survey where people are going out. And our colleagues at the GSC, here's a paper I've linked down there from Raphael de Andrémond, have actually brought together all the different years, from 2006 to 2018 in one big database. There's an R script there. You have the survey geometries. You have all the 5.4 million images, which we have at the GSC server in the 1.3 million points. And which you can access with the R script and get really other pictures. And I'm really looking forward that people like you now here in the room, which are much more involved in the Earth observation datasets, use it for their daily work. I'm really looking forward that these images you see here, for example, a couple of them from the survey are used for anything to work on that one. Just to give you a little bit of flavor, what you can expect if you go out in the surveying. So for example, we require that everyone makes a picture on the ground, and then the cows are coming or a bear is coming or a snake. We have seen all that one. This is a couple of survey experience in the forest where people are going up into the mountains or here are some roads tracks where people are going to this survey plots. And so for example, in the last one, there were 500 teams out doing the gowns, just to keep that one. And with that one, I'm finishing up today for you. Just a little animation from our colleagues from Copernicus from the EEA, which just have released the high resolution vegetation phenology and productivity product, which goes down to 10 my 10 meter resolution and might be of interest for you to work on in the future. Thank you very much. And I'm open for any questions to be used. Thank you, Hanna, so much for this talk. We have some questions. We have some time for questions. So let's start. I see the question from Matthew Martins. Maybe I go that one. This is a little bit of a challenging one or not a challenging one. We're debating to open source it or to open it up to everyone in the outside. First of all, where you need to understand this geocoding API is for European institutions use. First of all, because some of the data sets which we have requested in there are only for EC policy purpose first and not for everyone. So yes, this is a point. If you are a public administration, you can access that one. We have made that case already a couple of times for a private, I don't know from which organization you are. If you are a public administration, you can access that one. You email is.justgo and we will discuss that. If it's something else, if you're a private organization, apologies. No. Okay, we're looking for more questions. Yes, I want to say something. I really like that example with distance to the hospital. And basically what you show that if you don't have the right data, if you don't have enough data that like with the population density and everything, you don't really see the picture. I mean, you don't see what is the how does this you? Yes. So you don't see the, yeah, you don't see the real picture. I mean, only when you go and standardize by the population density. And then you also showed like in Netherlands, they have this, let's say, the how energy efficient buildings. 
And so that will be super interesting to see for whole Europe and see where the biggest gaps and where the biggest problems. So this is really amazing. I mean, if you your department could make these views of Europe that we can see, you know, whether a chronicle critical problems and the only way to see it is is from the, you know, getting the best data and then preparing the data so people can directly make decisions. So my question to you is, you know, you, you are like official, you know, office of the European Commission. How do you deal with uncertainty? How like in this case of the distance to the hospital? I mean, for sure, there is also uncertainty. And as you said, the data has a variable quality from different countries. How do you deal with uncertainty in in this very important information? Do you visualize it also? Do you, is it, do you have a standard for that? I'm sorry if I ask you to. No, no, no, it's fine. I like interesting questions. Tom, we know each other. I wish I wish we would have enough resources to work on uncertainties. We are aware of this. But we are rather resource limited. And the question is, where do we start interacting with the member states? So for example, if we're talking about hospitals and distance time to traveling to hospitals, it starts already. Where is the data entry? Where's the door? Because we know there are six different ways to geocode the location of an address. You know, so the point is, in each one, each country does it slightly different. The best thing what we can do for the moment is we document in our metadata files what we observe. And with that one, maybe in the future, we can try to work on also uncertainties. But frankly speaking, we don't have the resources. We are aware of that one. So for example, we are aware that I haven't shown you apologies. I seem to have missed the slide where I showed you about what we're doing for transport networks. Because we also see the different varying quality of transport networks. We obtain a commercial data set from one of the biggest providers. Then we have OpenStreetMap and then we have the National Mapping Agencies and running all these three different, my colleague, Julien Gafferis has made, for example, an analysis where he used all these three different data sources with varying quality to make his calculations of distances. And the outputs are quite different. But we are not there yet to make it visually pleasant. Okay. That's something you show here. I assume in this slide, no? The slides are showing because I see there's uncertainty around the line. No. Unfortunately, Tom, that's not uncertainty. This is just a topi-nanen. I hope topi apologies if I mispronounce your name now. Pencilish, hugest style, which I really like. And that's the reason why I said it's an artistic representation.
|
Hannes Reuter, Statistical Officer - EUROSTAT, outlined the ‘Geographical Information System of the COmmission’ (GISCO), a permanent service of Eurostat that fulfils the requirements of both Eurostat and the European Commission for geographic information and related services at European Union (EU), Member State and regional levels. These services are also provided to European citizens at large. GISCO’s goal is to promote and stimulate the use of geographic information within the European Statistical System and the European Commission.
|
10.5446/55256 (DOI)
|
Yes, as Tom mentioned, today we will talk about vegetation mapping. For those who follow the training session on Monday, it will be talking about vegetation mapping again. So I hope I won't bore you a lot with more of the theoretical stuff that I already mentioned. So if for everyone that is not familiar with species distribution modeling, which is the broad term to which we identify vegetation mapping, I will just talk a bit about the theory. So yes, about the species distribution modeling, as I said, which is one of the techniques that we use to model the distribution either potential or actual of different species could be vegetation, as in this case, for example, forests, so it could be also mammals, all the kind of species. Then we will focus on which data and which covariates we use to actually try to understand how these species occur on a European scale. I will give some details about the modeling that we use, which is very similar to what already Martijn showed us, and also some results in the development server as the demo that already Martijn showed us. So we will talk about species distribution modeling on a correlative scale. So what we will do is just having a training dataset with the occurrence of the species that we want to predict, and then we associate some values from different predictor variables to the location where we actually observe the species or not. We fit a model to estimate similarity between these sites, and then we predict on either space or space time in the region that we are interested in, which in our case is Europe in terms of space, so the whole Europe, plus in terms of time on a time coverage that goes from 2000 to 2020. So let me also move this so you can see. So basically what I was talking about is the difference between the potential distribution of a species and the actual distribution of a species. Here you can see on the left the geographical space, so how we actually find the species in the environment. So for example here this is the area that considering all the conditions we see in the environment the species could live, and we can see that in gray is the area where the species is actually observed, while in white we can see that these are the areas where the species could potentially occur but is not observed. Therefore for example problems in terms of migration of moving of the species or because this niche is already occupied by the more competitive species. So what we do is actually observe the different predictor variables that theoretically speaking would describe the distribution of the species and analyze them in the feature space and see how the species basically occur in this feature space. So this is for example one of the two outputs that we can get from species distribution modeling. We can get either a map that shows on a 01 scale where the species could occur. For example here you can see this coming from the European Atlas of Forestry species for the European beach. The green area is where the species could occur, while instead the area that is in yellow is the North Aida, so there is zero, so all the areas where the species does not occur. Or we can get probability values. So areas where the species could potentially occur for isofabability is values that come from 90 to 100 percent where the species could actually live and prosper, while instead all the other areas are low probability values where the species does not occur. 
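To make the correlative approach just described more concrete, here is a minimal sketch in Python, assuming a generic scikit-learn classifier and purely synthetic covariates and occurrence labels; it is not the actual workflow or data used in this work, only an illustration of sampling covariates at presence/absence locations, fitting a model, and predicting an occurrence-probability surface.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)

    # Synthetic training table: two covariates (say, temperature and elevation)
    # and a 0/1 label for absence/presence of the species at each sampled location.
    X_train = rng.normal(size=(1000, 2))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
               + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # "Prediction space": a raster grid flattened to (n_pixels, n_covariates).
    grid = rng.normal(size=(50 * 50, 2))
    prob_map = model.predict_proba(grid)[:, 1].reshape(50, 50)  # occurrence probability map
    print(prob_map.min(), prob_map.max())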
So what we did was, during the last summer school, Ioannis Aisi created this harmonized dataset with all the tree distribution points that we have available for Europe, coming from three different kinds of datasets: the National Forest Inventory datasets, the GBIF dataset and also some LUCAS points. The code is available online and the dataset is available on Zenodo, so if you want to use the same dataset to replicate our procedure you can always do that. This is how the data looks. After we harmonized and put all these points together, we have more than three million points scattered across all of Europe, divided over 70 different forest species. We can see that we have a higher number of points here in Spain compared to all the other countries, but if we analyze how these points are distributed we see that they are highly clustered in some areas, while other areas, for example, have no surveys or samples at all. We faced this problem by using the land cover products that Martijn was talking about. We created a forest mask for all 20 years that we are studying and then we overlaid the points from this dataset with the forest mask. All the points that show, for all 20 years, a high probability value, more than 90%, in one of the classes that were put in the forest mask, we then use as points for the modeling. So for example, you can see here one of the points that we actually included in the dataset, because it shows a high probability of coniferous forest. This is, for example, a Norway spruce that comes from a European national forest inventory dataset and is located in Germany, while this other point comes from the GBIF dataset and we didn't include it, because we can see that the probability values for the classes that we put in the forest mask are very low across all 20 years. This instead is an overview of the covariates that we use. We use different climatic covariates, both on a long-term scale and on a time-series scale. Terrain covariates come from the DEMs or DEM derivatives, then reflectance data that comes from Landsat, which is the same dataset that we also use for the land cover data, and also maps that come from the European Atlas of Forest Tree Species, to map the possibility of the same place being occupied by another species. So it's a way for us to include the interspecific competition between the species. This is how we deal with the clustering of the points. As we can see, for example, this is a cork oak, which is mostly located in Spain, with different clusters also in Sardinia and in Corsica, but we can see that among all the 300,000 points that we have as training data for this species, one third of them were actually in one of the 30-kilometer tiles that we use for tiling the whole of Europe. Of course this clustering can influence and possibly impair the models, and we can get very big biases in the distribution of the species. So what we did was check the distribution of the points across all the tiles for all the species in Europe, and then we took a median that is used as a threshold value for how many points can be used for modeling in each tile. This is how the modeling looks.
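As a rough, hedged illustration of the two screening steps described above (keeping only occurrences that fall on high-probability forest pixels in every year, and capping the number of points per 30 km tile at the median tile count), the pandas sketch below uses synthetic points, made-up column names and an assumed 90% threshold; it only mirrors the logic, not the project's actual code.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 50_000
    points = pd.DataFrame({
        "x": rng.uniform(2.5e6, 6.0e6, n),   # projected coordinates in metres (synthetic)
        "y": rng.uniform(1.4e6, 5.4e6, n),
    })
    years = range(2000, 2020)
    base = rng.uniform(0.6, 1.0, n)          # fake per-point forest-class probability
    for yr in years:
        points[f"forest_prob_{yr}"] = np.clip(base + rng.normal(0, 0.02, n), 0, 1)

    prob_cols = [f"forest_prob_{yr}" for yr in years]
    keep = (points[prob_cols] >= 0.9).all(axis=1)   # high probability in every year
    points = points[keep].copy()

    tile_size = 30_000                               # 30 km tiles
    points["tile_id"] = (points["x"] // tile_size).astype(int).astype(str) + "_" + \
                        (points["y"] // tile_size).astype(int).astype(str)

    counts = points.groupby("tile_id").size()
    threshold = max(1, int(counts.median()))         # median tile count used as the cap

    thinned = (points.groupby("tile_id", group_keys=False)
                     .apply(lambda df: df.sample(min(len(df), threshold), random_state=0)))
    print(len(points), "->", len(thinned), "points after thinning")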
So we first started testing different machine learning algorithms without parameter optimization or feature selection and we trained these different models on a subset of species, testing also how the species and the present data and the absent data would look like in terms of either the most occurring species which were more like than 500,000 points or very less occurring species which can be less than 1000 points across all Europe. And the three best learners that we got from this first part of the modeling were then used with a bigger sample, 5% of the training data set for the whole, for all the species that we are trying to model which at this point are 50 and we trained them on and we fine-tuned them and we trained them and I can show you the feature selection processing a bit. The results, so the parameters that we got in the feature space for all these three learners were then used to train an ensemble model where we use a logistic regression as a meta learner and we use a 70-30 split in the data set. So we have 70% for each species use a training and 30% as test and then we use 5-fold spatial cross-validation during the whole process and the parameters that we use for the base learners of the ensemble model come from the phase two. We then use everything, I didn't mention that everything here that you've seen is modeled using R and especially the MLR framework and for the feature selection we use Scikit-learn library in Python and we use log loss because we have different data set for different species so the proportion between presence and absence data is constantly changing and to avoid this uncertainty in how the probabilities can be predicted we use log loss instead of overall accuracy or other metrics to assess how the features can be selected. So the process goes by using a random forest classifier with a 5-fold spatial cross-validation and recursive feature elimination we checked the first deep in the values of log loss across all the covariates and that's the value of the covariates that is used in each model for each species. To have consistency in this we replicate this procedure five times for each species and we then took the average of the number that we get in the first deep for each species. And now you can see how this map looks like in the development server. They are not completely online, they will be online I think by tomorrow most of the species will be online so you can also check by yourselves and that's how this species looks like. Yeah this development server let me ask this. So we can for example check the probability distribution for the Norway spruce which is the same that was available in the slides. You can see the legend the colors go from probabilities that below 10% to 90% we can recover the whole Europe and our time frame as I mentioned it goes from 2000 to 2019. So you can check somewhere like in the length cover classification points and check for example the probability distribution for the forest species to be there. 
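The modeling itself was implemented with the R mlr framework plus scikit-learn for feature selection; the snippet below is only a scikit-learn approximation of the same design, with three placeholder base learners stacked under a logistic-regression meta-learner, a 70/30 group-wise split and 5-fold spatial cross-validation where tile membership defines the folds. The learners, data and tile IDs are synthetic stand-ins, not those actually used per species.

    import numpy as np
    from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                                  StackingClassifier)
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupShuffleSplit, GroupKFold
    from sklearn.metrics import log_loss

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))                        # covariates (synthetic)
    y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
    tiles = rng.integers(0, 40, size=2000)                 # 30 km tile membership

    # 70/30 split that keeps whole tiles together, as a stand-in for a spatial split.
    train_idx, test_idx = next(GroupShuffleSplit(n_splits=1, test_size=0.3,
                                                 random_state=0).split(X, y, tiles))
    X_tr, y_tr, g_tr = X[train_idx], y[train_idx], tiles[train_idx]

    # 5-fold spatial CV (grouped by tile) produces the out-of-fold predictions
    # that feed the logistic-regression meta-learner.
    inner_cv = list(GroupKFold(n_splits=5).split(X_tr, y_tr, groups=g_tr))
    ensemble = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("gbm", GradientBoostingClassifier(random_state=0)),
                    ("ann", MLPClassifier(max_iter=500, random_state=0))],
        final_estimator=LogisticRegression(),
        cv=inner_cv,
        stack_method="predict_proba")
    ensemble.fit(X_tr, y_tr)

    test_prob = ensemble.predict_proba(X[test_idx])[:, 1]
    print("test log loss:", log_loss(y[test_idx], test_prob))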
We also have points available so you can check and we also have a compare tool where we can actually compare the actual distribution which is modeled using reflectance so you can actually exclude CDs or also farming sites or other urban kind of length cover classification with instead probability distribution as I mentioned and they show in the one of the first slides in the geographical space usually the potential distribution is very much bigger than the actual distribution and that's something that you can check here with the slider. As you can see the condition for the species to be present in a specific area are already available for a lot of areas while instead the actual distribution of the species is very much smaller. Yeah because we do not include anything that does that's to do with urbanization or entropization in the environment so everything is basically excluded for the time frame that we are actually analysing. Yeah let's see for example Scandinavia. Here for example yes we have the conditions for the norwespers to be present but this is actually one of the species that is less occurring in the southernmost part of Sweden because it's mostly covered with broadleafs or for example here in Germany we can see that it's the same or any kind of patients or the Alps. This area for example. You can see the difference and it's kind of makes sense of course because it's from expert knowledge you can actually say that the top of the mountains or the glaciers is not an area where you would actually see the species or where the cities are. Yeah so this is to turn off the comparison map so as we said we focus on usability of the maps so what we actually provide is a probability of values for where one pixel for example can show the potential distribution of one species but we also provide for each layer an uncertainty map that shows okay should I use this value is it reliable or not. So we have a scale that here shows the variance based on the procedure that Tom explained in the first days of the training session so we use correction factor for the ensemble models to avoid for example the extrapolation problem that we talked about in the training sessions so you can actually check for each pixel or each area that you're interested in if the values that we provide have a high tolerance in terms of reliability or not. And I think that's about it so if you have any question please. Let's see the chart. Anything on YouTube or mother most? Yup. I don't know if you hear that one. Obviously, thezać actively. So in general terms when thePlastic,lers out front. Yeah. On a long term scale we know that for example this is not possible we also provide uncertainty maps that we can say that that value for the specific here is probably due to noise, or also we can another thing that we can use on a long term scale is actually remove the noise by using trend analysis. So that's also some post processing that we could do. So we could provide potentially raw values for the maps and also post-processed maps. Oh, let's see. I see new messages in the chat. Yeah. Okay. Okay, that's a there's a question from Pablo, which says it is only possible to model part of the actual distribution if you consider a biotic biotic or movement limits. But we do we use this kind of predictor after the feature selection of the session and which variables are in there. 
Okay, so the maybe something that wasn't clear during my presentation is that we don't have one model that models all the species, each species as its own model, its feature selection, optimization and part. So we have to check basically in every in every model which ones were used or not. But yes, for example, the a biotic and a biotic is always there like climatic variables are always in there. And we also can see the differences between higher resolution layer that we use as predictors, for example, the DTM sort of the slopes that are highly available, or for example, long term a biotic covariates like the bio clean that about very course resolution. And other things that we that pops up in terms of covariates for almost all the species is the reflectance data, especially for the, for the, for the actual distribution because those are can actually help us exclude all those areas that instead are covered by cities. And in the potential distribution we also have some layers that come from the University of Maryland the show for example the barren areas of the Berlin, and that for some species actually comes up as one of the first covariates so we can actually exclude very quickly all the areas that do not show vegetation. But yes, we will publish the paper which is now in writing will provide like data for each model and you can see which ones are the covariates that actually pops up at the most important for every species. Oh, yeah, okay, yeah, this is another question is just an idea. Yeah, yeah. Yeah, someone here in the chat was suggesting for example to connect with JRC, we try to do that but they also have the same problem as us the amount of points that are available to model these kinds of things, the ones that we are using right now this is the highest amount it's publicly available. I mean, just for the audience. Yeah, it's possible. It's not a problem we have the computing infrastructure to do that. It would be also wiser to not use all of the million of points that you were mentioning like we could use most of them to validate our mix for example, which is looking for an independent data set in terms of trees and especially if you are modeling or forecasting is always a problem. So using all these points for example for as an independent data set in terms of sampling bias or everything could be also useful to make the maps more accurate. Yeah, that's fine. Yeah, basically that's a good question. You can first check the areas that you're interested in. And then in this area you can check which ones of the species that we model have a high value of probability in terms of actual distribution. So we, from that you can say okay these are the species that will actually do well in this area. And then based on your interest then on your timeframe interest so for example if you want to have like short rotation for a serial if you want to plan for like 100 years. And also what you're interested in which kind of forest do you want is that just production. Do you want do you have interest for example in protection of the area for example like forest that are planted in high in the mountains to protect against no fall and all these kind of problems. So that depends. But yeah you can, you can for sure use the maps as having a kind of an overview of the trees that you can plant there.
|
Carmelo Bonnanella, PhD candidate & research assistant at OpenGeoHub, presented the results of modeling species distribution maps for both potential and actual natural vegetation through spatiotemporal machine learning using a data-driven, robust, objective and fully reproducible workflow. The presentation focussed on the benefits of using ensemble machine learning for species distribution modeling to capture patterns of niche changes in both space and time: yearly (from 2000 to 2020) probability distribution maps for both potential and actual natural vegetation were shown for forest tree species that live in different climatic conditions across Europe. The high spatial (30 m) and temporal (1 year) resolution of the outputs should allow us to enhance and better understand the patterns of niche change.
|
10.5446/55257 (DOI)
|
Hi, good morning. I hope everybody can hear me. I will present a product that we are preparing in the context of the Open Data Science Europe work at OpenGeoHub. One of the main applications of this product is detecting land degradation. I will explain the workflow that we are implementing, the ways to access the data, and the preliminary results that we have so far. If you think about degraded land, I would like to start with a definition. Degraded land is land that has lost some degree of natural productivity due to human-caused processes. There is no general or agreed definition of this concept. But if you think, for example, about pastoral land, degradation of pastoral land mainly means that you have problems with the management, the productivity of the land decreases and it affects, for example, the livestock activity. If you think about forest degradation, that is a different way to see the problem, and it is mainly related to the carbon stock. So it's a big problem today, and I'm not here to solve it at all, but I'm here to present one product that could help and could enable users to deal with it. Gibbs and Salmon did a comprehensive review of the topic and they identified at least three ways to detect land degradation: expert opinion, satellite-derived net primary productivity, and biophysical models. These three methods have different benefits and different limitations. Specifically for the satellite approach, we can have a globally consistent methodology that is quantitative and readily repeatable, and we can associate these measures with potential changes, but there are also limitations, for example for mapping degradation in the soil. Still, it's a good starting point for trying to deal with this problem. So mainly, with satellite images you can use, for example, the vegetation index products, several of which are now available globally, and you can analyze the time series to understand how the NDVI is changing across the years, and you can correlate the NDVI with primary productivity. With this time series you can perform different analyses to understand it, but mainly what you need is a baseline, because for degradation you need to follow the process across time and see whether you are gaining or losing some of the signal. And of course we have multiple sources of Earth observation data, as we were discussing. Here we have an example for MODIS, Landsat, Sentinel and PlanetScope, and there are different challenges in working with these data. Mainly, we know that most of this data is not really analysis-ready: it is not artifact- or cloud-free, it needs to be gap-filled, and some of it is not fully accessible via the new protocols like Cloud-Optimized GeoTIFF or STAC. If you think about these three aspects together, we don't have a product that is ready to go, where you can access and retrieve, for example, a time series that is free of clouds and gap-filled, on which you can really work. And considering that, there is one more part here on the slide that is really important: there is a nice podcast by MapScaping that discusses how satellite data is actually not a commodity.
We have this view today that there are multiple sources and a lot of information out there, and that this data is already collected and ready to go, but that is not the case for almost all of these data, because you need to download, organize and pre-process the time series to really do any type of analysis. To deal with this problem, what we are developing now is a processing workflow. We are testing it with MODIS first, but we have plans to implement it with Proba-V and with Landsat; for Landsat we already implemented it in the scope of Open Data Science Europe and my colleague Chris will present that. Mainly, what we are doing specifically for MODIS is accessing Google Earth Engine and removing all the clouds using the pixel reliability layer, specifically for MOD13Q1. This is a 16-day composite, a level-3 product, so a lot of pre-processing has already been done to make it available and usable, and we are aggregating it to two months, so we are adding another level of temporal compositing, removing the outliers and gap-filling in time first and in space later. As the output of this approach, using this ready-to-go, analysis-ready data, we are calculating a trend analysis that looks only at the trend component, removing the seasonality, which is really important for, for example, pastures or savannas, where the NDVI signal is highly correlated with the precipitation. But we are also providing a snapshot of our workflow: each of the parts of the workflow we are making available. So we are delivering one way to do this trend analysis, but the goal here is really to enable other users to work with this data in the easiest way, to access it and, for example, implement the approach in different ways to detect land degradation. This is an example of what we retrieved from Google Earth Engine, and you can see we still have a lot of gaps. MODIS is a great product, but even aggregating it over two months we still have some gaps, and after a lot of computation, we produced this. So mainly, this is a gap-filled product aggregated by two months, at full resolution, and we took the EVI data here; it is better for analyzing, for example, cropland, and it has less saturation for high values, like well-conserved forests and other types of land cover where you have a high NDVI value; with the EVI you have less saturation. Here we are seeing this time series, and we can basically see the seasonality of the planet; it is mainly an intra-annual signal. Using this data, we applied a classical method to remove the seasonality and separate the signal into trend, season and noise, the residuals here in this plot. If we take just the trend, we remove this seasonality aspect and this correlation with the precipitation, which is mainly the response of the vegetation to the precipitation. You can see the same result, and now this is an animation, so you can see that it is slowly changing over the years. And what we are actually doing is analyzing this data to derive the slope and the intercept, basically a linear regression for each pixel of the planet, based on the MODIS data.
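Stepping back to the pre-processing part of this workflow, the sketch below (not the project's actual code) shows how the MOD13Q1 masking and two-month compositing could look with the Earth Engine Python API; the collection ID and band names follow the public MOD13Q1 catalogue entry, but treat them, and the reliability threshold, as assumptions to verify.

    import ee
    ee.Initialize()   # requires prior authentication

    col = ee.ImageCollection("MODIS/061/MOD13Q1").select(["EVI", "SummaryQA"])

    def mask_unreliable(img):
        # SummaryQA is the pixel-reliability band: 0 = good, 1 = marginal;
        # higher values (snow/ice, clouds) are dropped here.
        good = img.select("SummaryQA").lte(1)
        return ee.Image(img.select("EVI").updateMask(good)
                        .copyProperties(img, ["system:time_start"]))

    masked = col.map(mask_unreliable)

    def bimonthly_median(start):
        start = ee.Date(start)
        period = masked.filterDate(start, start.advance(2, "month"))
        return period.median().set("system:time_start", start.millis())

    starts = [f"{year}-{month:02d}-01"
              for year in range(2000, 2021) for month in (1, 3, 5, 7, 9, 11)]
    composites = ee.ImageCollection.fromImages([bimonthly_median(s) for s in starts])
    print(composites.size().getInfo())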
Here you have the beginning and the end of this trend signal, and you can see some difference, and this is the result. Mainly, we trained one linear model for each pixel, and from this linear model we get different statistics, so we can derive the slope and the intercept, and specifically here we are seeing the slope. Where we have green values we mainly have an increasing trend over the whole period. This is important: we executed it for 2000 to 2020. If you change this interval, for example to 2010 to 2020, the result will probably be different, but you can use the gap-filled data for that. So the goal here is really to enable other applications to work with this data, and not only to use the final result of our analysis. In pink we can see where the EVI is decreasing. Here is an example for Europe; we can see in France, and I'm not that familiar with this area, but if you look at this area you can see that there is a decrease in the NDVI, and we can see it in this image, which is more like a summary of the whole time series, you can think of it like that. And if you take two pixels in this area you can see where you have a positive trend, and if you analyze the time series you can actually see that there is a positive trend; you can see the original time series and the gap-filled one, which we are also making available, and you can see the negative trend for these areas in pink. We can go to another place, Brazil, which I know better, and in Brazil we can see some of the forested areas in the Amazon forest and do the same analysis. The first time series shows an increasing NDVI, and mainly that's because, knowing this area, it was pasture and was converted to cropland, for example, while in the opposite case the decreasing time series here was forest, converted to pasture and later converted to cropland. So you need to consider that you could have different land cover transitions here and that will impact your analysis, but if you plan, for example, to take this product and just analyze the trend signal for the stable classes, we have land cover products that allow us to do that, so you can work with just the stable areas where we did not identify any land cover transitions. And here it's China, and in China we can see the same examples, mainly where we had urbanization, and maybe I'm guessing here, but maybe it's a gain in productivity considering a crop area. But what is really important is that we want to provide this product for other types of analysis. Here is an example, actually an animation, I don't know why it's not working, but we provided this product to one of our colleagues in Brazil and now they are using it to map land degradation in pasture land. Basically they have their own approach, and they are using this filtered time series, analyzing it across different areas, normalizing it and integrating it, for example, with a pasture map produced with Landsat. That's the main goal, actually, in producing this type of data. So, the next steps: we are still working on this product, and we will run it one more time because we found some artifacts and some problems in our workflow, so we will produce a new version and we will make it available as cloud-optimized GeoTIFF for everyone to access through S3. In these training sessions this week we presented how you can, for example, access one of these files.
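Accessing one of these files can be as simple as reading a small window of a remote cloud-optimized GeoTIFF over HTTP, as in the hedged rasterio sketch below; the URL and the bounding box are placeholders, not the product's real address.

    import rasterio
    from rasterio.windows import from_bounds

    # Placeholder URL; the real product links follow the pattern shown in the talks.
    url = "https://example-bucket.s3.amazonaws.com/evi_trend_slope_30m.tif"

    with rasterio.open(url) as src:
        # Bounding box in the raster's CRS (here assumed to be geographic coordinates).
        window = from_bounds(left=-60.5, bottom=-3.5, right=-60.0, top=-3.0,
                             transform=src.transform)
        data = src.read(1, window=window)   # only the requested window is fetched
    print(data.shape, data.dtype)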
And I'm talking here about a global file: it's actually a big GeoTIFF where you can just do a query and retrieve any part of the world, so you can retrieve, say, the part covering Brazil and do your work with that. So we really want to provide it as cloud-optimized GeoTIFF to enable users to access it and do their own analysis. But the results of the time-series analysis we will also host in OpenLandMap, which is a nice viewer where you can explore the result and get a better idea of what we are producing. We still need to discuss possible validation strategies for the results that we are seeing here, and that is really difficult, because it's not easy to find reference points for land degradation, and we are open to suggestions. We are also working on a publication, and the plan is, when we have this workflow really ready to go and validated, to scale it up to Proba-V and Landsat. And just to explain, this is the kind of problem we have when working with satellite data: Proba-V is great data, but if you look, for example, at what is available in Google Earth Engine, and I checked it also in the VITO web portal, as of August 2020 we just have this part of the world. So this data is nice, it's there, but you need to be aware that even to work with it and start running the analysis there is a lot of work, a lot of pre-processing, and OpenGeoHub is really helping in this direction, providing data that is ready for analysis for different kinds of applications. Okay, so the question is why we do not use bfast and why we are using this ordinary least squares approach. We can use bfast, and it's nice; now we have a nice implementation of bfast called bfast lite, and it will be presented here. The advantage of bfast is really that you can detect the breakpoints, so it works when you have a land cover change. But if you think about generating an image from bfast, there are some difficulties: for example, you have a breakpoint, and you will have a trend before the breakpoint and a trend after the breakpoint. So definitely we can use bfast, but we would have to provide more outputs, because maybe you have three breakpoints and you will have different trend signals between these breakpoints, and we need to save, for example, the slope, the intercept and where the breakpoints occurred. So there is that challenge. And on the question about multiple sources: for now we are dealing with this problem separately, and of course the sources could be different, but the best way to deal with that is to do a harmonization process beforehand. For example, here we are using MODIS data, but it could come from different sources, and if we have a harmonization process to put all these pixels in the same range, on the same scale, and really harmonize them, then we can fuse, for example, MODIS with Landsat, or the opposite. But this is a whole process, it's a whole project, and I think the possibility is there, but okay.
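Coming back to the per-pixel trend approach discussed in this last answer, here is a small, hedged illustration of the two analysis steps: removing the seasonality of a single pixel's EVI series with a classical seasonal-trend decomposition, and fitting an ordinary-least-squares slope and intercept per pixel over a whole cube. The arrays are synthetic stand-ins for the gap-filled bimonthly data.

    import numpy as np
    from statsmodels.tsa.seasonal import STL

    n_steps = 126                      # 21 years x 6 bimonthly composites
    t = np.arange(n_steps)

    # One synthetic pixel: seasonal cycle plus a slow decline plus noise.
    evi = (0.4 + 0.15 * np.sin(2 * np.pi * t / 6)
           - 0.0008 * t + np.random.normal(0, 0.02, n_steps))

    stl = STL(evi, period=6, robust=True).fit()   # trend / seasonal / residual components
    deseasonalised_trend = stl.trend

    # Per-pixel OLS trend for a whole (time, y, x) cube, vectorised with polyfit.
    cube = np.random.rand(n_steps, 100, 100)      # placeholder gap-filled EVI cube
    flat = cube.reshape(n_steps, -1)
    slope, intercept = np.polyfit(t, flat, 1)     # one coefficient pair per pixel
    slope_map = slope.reshape(100, 100)
    print(float(np.polyfit(t, deseasonalised_trend, 1)[0]), slope_map.shape)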
|
Leandro Parente is a postdoctoral researcher at OpenGeoHub, supporting the foundation's current and future European Commission-funded and other international projects where there is a need to develop new solutions for geocomputing, optimize and automate modeling frameworks and deliver scientific outputs. In this talk, Leandro illustrated how to detect land degradation using time-series analysis of EO Vegetation products (MODIS EVI, Proba-V, Landsat).
|
10.5446/55261 (DOI)
|
I'm Josip Križan, and I come from the company MultiOne. We made this Pan-European seasonal cloudless mosaic from Sentinel-2 images. Can you see my screen? So, there is a lot of publicly available remotely sensed data, and it has been rapidly increasing in recent decades. But the data are generally not freely available as harmonized and analysis-ready products, so the process of mosaicking these images to produce something that is homogeneous and cloudless can become a little bit cumbersome at larger scales. So this is the description of the dataset. The source of data was Sentinel-2 Level-2A open data on AWS. We took six bands from there: blue, green, red, near-infrared, SWIR1 and SWIR2. For every band we computed three statistics, the 25th, 50th and 75th percentile, plus the count of valid values, per pixel. We produced four seasons per year for 2018 and 2019, and winter and spring for 2020. The season dates are aligned with the GLAD Landsat ARD products, the idea being that we could fuse this dataset with that Landsat product. All data is resampled to 30 meters resolution in the EPSG:3035 projection. Okay. So the data can be accessed on the Zenodo platform, you can download it from there, or there are also cloud-optimized GeoTIFFs on Wasabi, so it can be accessed directly via the web, for example directly through QGIS. This is the pattern of the links: for the band there is red, green, blue, near-infrared, SWIR1, SWIR2, and for the seasons you have to replace the start date and end date in the links. If you want to access this with GDAL tools, then you have to add the /vsicurl/ prefix. So this is an example of how you can access this data. And this data is scaled to bytes; originally the data is 16-bit, but the size is very big, so we scaled it to bytes just to keep the files smaller. So this is how it looks; I will show you in QGIS. For example, this is summer 2019, and this is autumn 2019. It's not perfect, especially the autumn part. So how is it done? First, we calculate the mosaic of one season and one band in one Sentinel-2 tile and relative orbit. All the pixel values that have clouds or cloud shadows are masked out, and then the 25th, 50th and 75th percentile and the count of all valid values are calculated for every pixel location. This intermediate result is saved back to a temporary S3 location. This is done for all 137 Sentinel-2 tiles; this is the grid of Sentinel-2 tiles. So first every tile is calculated independently, and then all tiles that are in the same season, band and orbit are stitched together, just taking the mean value of the overlapping pixels. It looks something like this: we have these tiles, but we also have these orbits. Everything that is in one orbit is consistent, because all these images are taken by the satellite one after another, so there is no visible difference between them. But when we put neighboring orbits next to each other, we see that they are not aligned perfectly. So these two orbits are then stitched together using an inverse-distance-weighted mean, with respect to the distance of every pixel to the orbit edges. That way, when we put all the orbits together, we get the final product where there are no stripes and no visible transitions from one orbit to another.
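The orbit-stitching step just described can be sketched roughly as follows: where two orbit mosaics overlap, each is weighted by the distance of every pixel from that orbit's own data edge, so the transition fades smoothly instead of leaving a seam. This is an illustrative reconstruction with synthetic arrays, not the actual implementation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    rows, cols = 200, 300
    orbit_a = np.full((rows, cols), np.nan)
    orbit_b = np.full((rows, cols), np.nan)
    orbit_a[:, :200] = 0.30 + np.random.rand(rows, 200) * 0.01   # covers the left part
    orbit_b[:, 100:] = 0.32 + np.random.rand(rows, 200) * 0.01   # covers the right part

    def edge_distance(arr):
        # Distance (in pixels) of each valid pixel from the nearest no-data pixel.
        return distance_transform_edt(~np.isnan(arr))

    w_a, w_b = edge_distance(orbit_a), edge_distance(orbit_b)
    num = np.nansum(np.stack([orbit_a * w_a, orbit_b * w_b]), axis=0)
    den = w_a + w_b
    blend = np.where(den > 0, num / np.maximum(den, 1e-9), np.nan)
    print(float(np.nanmean(blend[:, 140:160])))   # smooth values inside the overlap zone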
So, for example, the winter 2019 RGB composite looks something like this. There are a lot of these white values because of snow; snow wasn't masked out. And spring and summer. Maybe you could zoom into some area? Yes, I can zoom in. This is read directly from Wasabi, and it's really fast. It's 30 meters resolution. Okay, this is the RGB composite; I also made this near-infrared/SWIR composite, it looks interesting. This one is from the near-infrared, SWIR1 and SWIR2 bands. So, yes, that was the setup that we used. This was computed on multiple spot instances on AWS. Every instance had 64 CPUs and on the order of 100 to 500 gigabytes of RAM. Every season, band, tile and orbit mosaic was one independent job, and they could be calculated in parallel. The system is made of two components: a script that runs on these multiple instances and fetches job descriptions from a central server, and that server, an HTTP app, with a Postgres database behind it that keeps track of all the jobs done and all the jobs that still need to be done, and takes care that there are no duplicate jobs and so on. This approach is scalable to any number of instances and CPUs, of course up to the total number of jobs. Well, it's not entirely scalable: if you have too many instances, then the load on this central server becomes quite big, but okay, that can be handled. So there are 137 Sentinel-2 tiles, and it was around 42,000 to 43,000 Sentinel-2 images for each of the bands and for each season, so it was really a lot of data. Well, that's it, I was really quick. Okay, any questions? Okay, thank you. And so this data, can you go back to Zenodo and show it one more time? At the moment it's available on Zenodo, and it will also be put in the open data viewer. And it's on Wasabi as well, so you don't have to download everything; basically the data is ready. What about the gap filling? Well, we didn't do any gap filling. We have these count layers, where for every pixel there is a count of valid observations, so if it's a low number then this pixel is not really reliable. That can be used for gap filling, but we didn't do any gap filling for this at the moment. It's a fantastic piece of work, and we have both Landsat and Sentinel-2: inside the project we have prepared analysis-ready Landsat data where we fill in all the gaps. What is your impression of the difference between Landsat and Sentinel-2? Because it's the same resolution, the same kind of multispectral data, the same type of bands. What is your impression so far about the differences between Landsat and Sentinel-2? Landsat has a 16-day revisit time and Sentinel-2 has five days, so it's almost three times more data on the timeline. That's the big deal, so there are fewer gaps. But then Landsat has Landsat 7, Landsat 5 and Landsat 8, so you have more than 20 years of Landsat, while Sentinel-2 only starts from 2016-2017 onwards.
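Tying this back to the product description and the question about the count layers, the per-pixel statistics can be computed along these lines (a hedged numpy sketch with synthetic data): percentiles over the cloud-masked stack plus a count of valid observations, where a low count flags a less reliable pixel.

    import numpy as np

    rng = np.random.default_rng(3)
    # stack: (n_scenes, rows, cols) of one band, one season, one tile and orbit,
    # with cloud and cloud-shadow pixels already set to NaN by the mask.
    stack = rng.random((40, 256, 256)).astype("float32")
    stack[rng.random(stack.shape) < 0.3] = np.nan

    p25, p50, p75 = np.nanpercentile(stack, [25, 50, 75], axis=0)
    count = np.sum(~np.isnan(stack), axis=0).astype("uint8")   # "count of valid values" layer

    print(p50.shape, int(count.min()), int(count.max()))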
|
Josip Križan is the owner of MultiOne. In this talk, Josip introduced how the rapidly increasing amount of publically available remotely sensed data in recent decades has revolutionized large-scale research and context-informed decision making. However, these data are generally not freely available as homogenized products ready for analysis at continental (or larger) scales. This is widely observed with datasets generated by EO satellites, particularly those with optical sensors and those capable of high-resolution imaging, where the process of mosaicking imagery to produce a homogenous, cloudless dataset across a particular area of interest often grows increasingly cumbersome at larger scales.
|
10.5446/55263 (DOI)
|
I work at ECMWF, the European Centre for Medium-Range Weather Forecasts, but I work in the Copernicus department, for the Copernicus Climate Change Service. And indeed, I'll be talking about something called the European State of the Climate: from data to information and back. I'll mainly use this European State of the Climate, which is a report, as a way to illustrate where we're trying to head in terms of the services provided within C3S. So just a brief outline of my talk. I will talk very briefly about the Copernicus Climate Change Service in general and the Copernicus program as a little introduction. And then I'll move on to what, in Copernicus 2, which started a month and a half ago, we're naming climate intelligence activities, which are mainly concerned with climate monitoring; this is the work I'm most concerned with within C3S. But then I want to spend quite some time in the talk talking about how to solidify the connection between these activities and the rest of the C3S service chain, all kind of under the big umbrella of open data, of traceability, of open access to code. I think this will all become clearer during the presentation. So I assume that some of you at least are familiar with the Copernicus program. For those who are not, the Copernicus program is a really big European Commission-funded program. One part of the program is a satellite program, so taking satellites, say the Sentinels, into operational production and sustained operations forward in time. That's a huge data production part of the program, let's say. But the program also contains these different service elements, which are there to facilitate the uptake of that data, but also to facilitate the uptake of data within their own realms. And I think one of the keywords that you might know if you know about Copernicus is open and free: the aim is to have as much of this data as open and as free as possible. For the satellites, this is the case. But for instance in the Climate Change Service, we also, of course, have to rely on other data sources, and sometimes these are brokered and might come with their own licenses, et cetera. So as I said, I work for the Climate Change Service, and really our role is to provide data and data processing to inform policymaking and decision making in the realm of climate change: using that climate data to understand climate variability now, climate change as we go forward, and climate variability in a climate that has changed already. If we look at C3S as a construct, I would say it's quite constructed around an infrastructure, and I will come back to that later. On the one end, we have data and data provision. Some of this comes from satellites, but some of it also comes from other sources. One big part is reanalysis, for instance, and a lot of the work I'm working on took its start in the reanalysis, whereas other parts of the program work more heavily with model-based estimates, like climate projections or climate predictions. My work is generally at the other end of this long chain, so after a lot of processing and understanding of the data and what you can do with it. The people in my group and in my team work mainly on climate monitoring, and we have two key products, I would say, that we regularly produce. One is the monthly summary.
So this information on the monthly time scale and then the European state of the climate, which we publish annually. So I will just talk a little bit about these two products. And then I will come back to how I think or how we're trying to kind of strengthen the connection between these products and the whole chain that leads leads to them. So we have this monthly climate bulletin, which is mainly based on data from reanalysis and from in situ. We publish this around the fifth to the eighth of the month, depending on when the weekends happen. And it's kind of along the lines of what many weather services or natural natural meteorological services will do on a national basis. They will look at the past month and put that month into the perspective of the longer term climate say was it warmer than the average was cooler than average, whether any particular events that kind of stood out from the normal or how was that month in the kind of light view point of climatology. And so we have three, actually four parts of this bulletin. One is about surface temperature. One is about sea ice. One is about hydrological variables. And then one is based on an in situ monitoring product, which comes out a little bit later in the months. And now during Cophanicus 2, we're trying to kind of bring these a bit more closely together than we had managed to do during the first phase. And these can all be found on the website if you're interested. And what I kind of wanted to point out here is not so much maybe the content. So we talk a lot about global temperatures and we focus on the pan-European domain and also possibly sub regions within that domain, but always on a larger than national level because that's kind of where our remit lies. And within this bulletin, which is purely web based, there's also kind of access to various types of information. So for instance, the reference period we refer to, and also kind of from a kind of communications point of view, the ability to download the images, but kind of more to the point of my talk today is the kind of access to data. And at the moment access to data for time series is done purely via a CSV on the website. Whereas the grid of data behind the maps you can find in our climate data store, which I will come to in a second. If we then look at the annual product, this is much, much larger in scope. And we also make use of kind of the whole range of data sources which are provided as part of the C3S program. So this is based on satellites, also on in C2, again on reanalysis, and also to some extent on model based estimates, depending on what variable we're looking at, because as we all know, some things are well observed, some are less well observed. And in order to kind of make certain statements about certain events, you need to kind of re-resource to all of these different types of information. And this report is kind of with contributions from institutes across Europe, part of the Copanicus kind of larger kind of program. And we report on the past year in the climate context. And we for instance look at very regular reporting on temperature and precipitation and say one time said it's kind of a continuity from year to year, but then we also have more topically based parts of this report which will look at different events, for instance, or kind of looking at how one particular event might have affected or be represented by different parts of the other climate system. And so the last one was published in April of this year, covering 2020. 
And I just want to give one example of a type of topic that we cover in there. Say one example for 2020 was the very warm winter at the start of the year, so going out from 2019 to 2020. And it was quite by far the warmest winter on record. And especially this warm was concentrated to the east of Europe. And then in the kind of monthly bulletin, we look at very, I would say quite fairly simple straightforward variables. Whereas in the state of the climate, we try to go more towards indices which might have like a further meaning than just kind of mean temperature, let's say. So here's the example of ice days, which was extremely low during that winter in that particular region. So an ice day is a day where the maximum temperature, hope that day is below zero. And then we also look at how that event had other effects, say for instance on sea ice in the Baltic Sea region and also other things. And as it pays to maybe the climate bulletin, we do try to kind of give a traceability back to the source. So in this case, the data behind all of this. And so the approach we have within this report is to have one quite general summary of the content of the whole report. And then these individual sections here, again, the example of the warm winter and it's all web based. And then you can kind of get step by step deeper and deeper into the report, see which data sets were used for the different sections and then get to these data sets in this infamous climate data store that I will come to in a second. But by doing that, we're kind of covering the two ends of the spectrum. So we're looking at the report on one end and we kind of give indications of this is the data we used, but not necessarily of all the processing in between. And what we like to strengthen is for that chain to become more transparent to the user. So if you see something in this report, you may want to try to reproduce it or you might want to repeat the same thing for another year or whatever you may want to do, you may want to know what processing was done rather than just knowing what data was behind it. So that's what I'm going to talk about in the remainder of the talk is how we're trying and some of the building blocks that we're working on to make that happen. And this is quite complex if you want to try to do it for everything in a report like this. But where I kind of in some places in the starting blocks and in some places more advanced and I will just kind of give some examples. So I've mentioned it a couple of times, the climate data store. This is really the backbone, I would say, of C3S and its services. It's a data store, but it's also a processing facility. I'm mainly for please processing at a reduced level, so it's not for running climate models, for instance. And at the moment, we have almost 100,000 registered users and there's about 70 terabytes being downloaded every day from the climate data store. And if you were to visit the welcoming page of the climate data store, you will be met by this and I will kind of deep, quickly dive into some of these areas in the CDS and how we can strengthen this service chain and connect it better to the report, really. Say one first step, and this is quite an obvious one, which is implemented already, is of course the data download and I didn't quite dare to take you into a live session of the CDS because you never know how these things work online. So I decided to do some short movies and I can just walk you through where you would end up if you were to click on this link. 
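Before moving on to the data-access walkthrough, note that the ice-days index mentioned above is straightforward to compute once daily maximum temperatures are available; the snippet below is only an illustration on a synthetic cube, and the variable and dimension names are assumptions rather than those of any specific C3S dataset.

    import numpy as np
    import pandas as pd
    import xarray as xr

    time = pd.date_range("2019-12-01", "2020-02-29", freq="D")
    tmax = xr.DataArray(
        np.random.normal(loc=1.0, scale=5.0, size=(len(time), 20, 30)) + 273.15,
        dims=("time", "latitude", "longitude"),
        coords={"time": time},
        name="tmax",   # daily maximum 2 m temperature, in kelvin (synthetic)
    )

    ice_days = (tmax < 273.15).sum(dim="time")   # days with Tmax below 0 degC, per grid cell
    print(int(ice_days.max()))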
So everything in the CDS is kind of structured in the same way. You'll have an overview for the data set, you'll find documentation and then for the data set, obviously you'll find download data widgets, which you can either do in this GUI, so you can scroll down and make a selection of different things. And in this particular case, I've chosen a data set which is behind the monthly climate bulletin that we have. And as I said, earlier at the beginning of Copanicus, you always have to kind of accept the license agreement for this data and in the majority of cases, we try to keep it the Copanicus license, which is a very even and free license, but for some data sets because they come from other providers. And this is not always the case. So you can download the data by clicking on this submit form and it may take some time. In this particular case, I've downloaded the data earlier, but you can also go back to your old form. And what you can do, if you're not interested in this GUI interface, is to show the API request. And then you can download the data via API and instead of doing one month at a time, you can do a couple of more months at a time. And there will be a limit, kind of an advice limit on how much to download in each kind of request. So you have kind of the data download on one hand of the kind of raw original data set. And then another way to connect with the report is kind of a visual one. Say within the CDS, there are also a number of applications. Some of them are very complex and some of them are relatively simple. And this example here is called the Climate Bulletin Explorer, which basically just allows you to explore the data set, which is using the Climate Bulletin and kind of look at the same variables as in the Bulletin. But you can go back in time or you can look at kind of various, various regions. So if you are to get to this application, you can kind of choose the different things that also appear in the Bulletin. And there, as I said, we look at the kind of pan-European level and you can daily kind of see what we might look at is kind of sub regions of Europe. And if you zoom in on this application, and you can kind of choose one of these sub regions, for instance, Southwestern Europe and kind of get the same, the same, the same graphics as we show in the Bulletin, but for other regions because they want to evaluate everyone with 1000 graphics every month. So there's a pre-selection, which is kind of made. And then finally, what this application also allows you to do is to explore the data set kind of further. And for instance, looking at nuts levels across Europe. But here, I would also like to say that here you can look at the national scale, but we may not have compared this data set against like reference temperature data sets, for instance, for the different nations. So it's not something we report on in our kind of monitoring. It's just one way for people to explore the data further. So we've gone from downloading data to viewing the data. Another step, of course, is processing the data and getting closer to what is actually shown in the report, for instance, this case with the time series, because in the viewer, you can't actually download anything at the moment. And I just want to give one example, which I think is quite nice because it's quite simple. And so it kind of illustrates the concept quite well. And that's an application which calculates daily statistics based on a data set that has hourly statistics. And it's a huge data set. 
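For the API route just mentioned, the request shown by the CDS can be scripted with the cdsapi client, which makes it easy to loop over months or years instead of using the form. The dataset name and request keys below follow the ERA5 monthly-means entry purely as an illustration; in practice you would copy the exact request generated by the "Show API request" button for the dataset you need.

    import cdsapi

    c = cdsapi.Client()   # reads the API key from ~/.cdsapirc

    for year in ["2023", "2024"]:
        c.retrieve(
            "reanalysis-era5-single-levels-monthly-means",   # illustrative dataset name
            {
                "product_type": "monthly_averaged_reanalysis",
                "variable": "2m_temperature",
                "year": year,
                "month": [f"{m:02d}" for m in range(1, 13)],
                "time": "00:00",
                "format": "netcdf",
            },
            f"era5_t2m_monthly_{year}.nc",
        )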
So from a kind of download point of view, if you're not interested in the hourly data, you would want to process it before actually trying to start a download. And if we visit this page, you get a very similar kind of thing as you would have done in the two different examples, with the overview and the documentation. And then instead of a data download, for the application of course you have the application button, and in this particular case it's looking at a couple of different data sets from which you can then extract daily statistics for months going back in time. And I have to say, I ran this before, so it's not as quick as that if you were to just run it normally on your own, but then you can download the data here file by file, and you can also, if you click on the source code, see the source code behind this application. And if you wanted to, you could replicate the application within your own environment and change it. But what you can also do, which I will show you in a second, is that instead of downloading this manually hand by hand, you can also use an API script, which will basically replicate this, so you can then loop over all the years and all the months, et cetera. And the reason I'm showing this is not necessarily to tell people you need to look at this particular application, but just to kind of illustrate the kind of different building blocks we'd like to see in place in order for the whole chain to be traceable and reproducible by a larger audience. So this application is online — still the one I showed before, an outdated version I would say, is online at the moment, and a new one will be released relatively soon. And then finally, in terms of these kind of building blocks, within the state of the climate in particular we look at climate indices quite a lot, so for instance this example of the ice days. And then of course you can go one step further and not look at indices purely from a climatological point of view, but maybe at indices that kind of become more relevant for specific sectors. And there are a lot of different examples of this in the climate data store already at the moment. And one example, which is currently in progress, is looking at heating and cooling degree days. So it's basically an index for understanding when heating of houses is needed or when cooling might be needed. And this is, I think, one of the first applications that we have where we really try to make the whole workflow, from the data until the processed index, traceable within the toolbox and the source code itself. Whereas in Copernicus 1 there was quite a lot of focus on creating these indices and putting them in the CDS for people to download, now we're trying to move towards a more workflow-based approach where you could potentially redo the same calculation but using different data. So the example here, for instance, for the heating and cooling degree days is that they also use climate models to go forward in time, and of course, with relative regularity, there are new climate projections coming out, and the idea would be that you could then kind of pull those in and rerun the same workflow again, based on the data that's in the CDS. And then the idea here, when we come back to the state of the climate as well, is to be able to harness some of these statistics for the reporting that we do in the state of the climate. And so this is really looking forward to kind of the work we want to do in the coming months and years.
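To illustrate the API route mentioned above, here is a minimal sketch using the CDS API Python client (cdsapi), looping over years and months rather than submitting one request at a time by hand. The data set name and request keys are illustrative of a typical monthly-means retrieval, not necessarily the exact data set behind the bulletin, and a CDS account with an API key configured in ~/.cdsapirc is assumed.

```python
import cdsapi

client = cdsapi.Client()  # reads the API key from ~/.cdsapirc

for year in range(2019, 2021):
    for month in range(1, 13):
        client.retrieve(
            "reanalysis-era5-single-levels-monthly-means",  # illustrative data set
            {
                "product_type": "monthly_averaged_reanalysis",
                "variable": "2m_temperature",
                "year": str(year),
                "month": f"{month:02d}",
                "time": "00:00",
                "format": "netcdf",
            },
            f"era5_t2m_{year}_{month:02d}.nc",  # one output file per request
        )
```

Keeping each request small, as advised in the talk, avoids hitting the per-request limits of the queueing system.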
So I've kind of gone through these blocks of the service chain trying to show what parts are maybe there in the CDS already and what parts we're kind of working on, or some of the ideas of what we're trying to make kind of transparent and accessible. And as a final thing, I just wanted to show the Climate-ADAPT portal by the EEA, where we've been working together with the EEA to kind of transport some of these indices not into a report but into a live environment — a live environment which is not as complicated as the CDS and, as such, maybe more useful or digestible by the EEA's, the European Environment Agency's, customers and clients and the people they work with the most. And of course, a portal like this one will then benefit a lot from having these different workflows more adaptable and updatable going forward. So with that, in a way, I'd like to conclude and say, as I said, there's quite a lot of ongoing work in different parts of the programme, but really working towards — well, from my point of view, and these kind of climate intelligence activities — strengthening the link between the reporting and the kind of data behind it, to make it more traceable and transparent, both in terms of data sets and applications, but then also vice versa to make use of more of the kind of mature offering, let's say, of other CDS activities, so that we can harness more things which are maybe more sector-relevant and more targeted than we've been doing in the state of the climate in the past. So that's kind of an outlook for that side of the CDS, and for those who might be familiar with the CDS and have worked with it already, I also want to say there's also a lot of work going on in the CDS itself for improved user experience and performance, et cetera. But it's not really my area of expertise, so I can't really, or at least not, make a presentation about it and go into the details of that. But I just want to say there's a lot of work ongoing and it's very exciting with Copernicus 2 just starting, and knowing that there's another seven years of programme ahead of us where all of these things can be implemented and updated as we go along. And with that, I want to thank you for your attention and just leave you with some links, which link to the climate monitoring and climate intelligence activities. And I'm very happy to take your questions or go back to any of the slides if you're interested. Thank you very much. I hope you heard the audience. No, I didn't. But that's fine. I had my microphone muted so I don't need to. Yeah, there was a clap. So we now have quite some time for questions. Let me see, they will pop up. Let me start with a question. When you do some modelling of some climatic variable — like we model here different environmental variables, like modelling soils and vegetation and land cover — we tend to provide also a measure of uncertainty per pixel. But that's something I rarely see actually with the climate data. Can you say something about that? Like if somebody would like to know — I have a pixel now and it says some temperature, some number of days under ice — what's the uncertainty of that? Yeah, so I think it depends a little bit on what you — on the level of where you're at with the data.
So if we look at the data which is in the climate data store, for instance, there is quite a lot of documentation around the data sets and the associated uncertainty, and some of them might also come with uncertainty variables, whether it's an ensemble of realisations. So for instance one of the in situ data sets, E-OBS, has kind of been run several times in the way they put that gridded data set together, and because of that you have a kind of uncertainty estimate around that. And then, something I didn't go into too much, but there's also quite a large EQC function in the CDS, which is kind of rolling out, I would say, at the moment. So if you go to some of the data sets on the CDS — and it won't be all of them at the moment, and I can probably show you, going back to the example I had here of the data set — you see that there's a thing that says quality assessment. And so there won't necessarily be like an additional data set that you can download which gives you the grid-point-based uncertainty estimate, but there will be documentation that allows you to understand that uncertainty to some extent. And when it comes to the state of the climate itself, what we try to do as much as possible is to use more than one data set, so to use a multi-source assessment, and to gauge uncertainty that way as well. Okay, next question. So I really like this combination — you know, it's a really proper user-oriented data service, so you make it easy for people to grab the code and extend it and really speed up download. But what about cloud solutions for the data, what about cloud-native formats and STAC? So some of these would mean nothing to me, but I can answer partly. So, first of all, the CDS is a cloud itself. And there's also work to connect it with, well, with one of the DIASes within the Copernicus programme. And then of course there's also work on being able to work with the European Weather Cloud. So there is quite a lot of work going on kind of on the cloud side of things, but I can't really go more into the technical details of it because it's simply not my expertise. But let's say GDAL — I mean, you're familiar with GDAL, so you know how it's easy to load this data using GDAL. So the maps — like, you know, I saw the NC extension, so it's a NetCDF — so I will assume, because it's a map, it should be GDAL compatible. It should be relatively easy to load the data in QGIS, I don't know. I mean, I think yes, if you load it, but the question is where do you dock in in the cloud. So for instance in the CDS itself, you can't access that cloud, so to say, without downloading the data. So processing would have to be done, like, within the kind of infrastructure of the CDS. You can access it in theory via the DIASes, but I've never worked with them so I can't say how well it's working. Then you would maybe be able to work more directly with the data. Okay, the next question. So you can do modelling locally, like in a country — the country-based meteorological and climate services — then you have the European level, that's basically your institute, and then you have global. How do you harmonise the three, and who does that? That's a good question. There's kind of a lot of harmonisation we try to do via the auspices of the WMO, the World Meteorological Organization. And so, when it comes for instance to the state of the climate.
There is a global statement that comes out every year, with kind of different people involved in that — I've been involved, for instance, in the past few years — so we try to align that and make sure that the key things that are communicated at all levels kind of align. And then when it comes to the national level it's kind of similar; we try to work with the WMO regional centres. So it's a little bit of a question, but I have to say it's also quite complex to make all of these things align. So I think the important thing is to kind of try to use the same methodologies. So Chris might be typing the question and I will ask. So the question is, for example, you have global data, and now Chris from the audience needs data for the US, let's say, but the US also has some data sets there — I think NOAA, for example, has made a data set — so how do you make a decision which data set to use? I think that really depends on the application and what you try to do. So let's say it's the same variable, you know, like, I don't know, temperature, rainfall — so what would you do, maybe we have to do some ensemble estimate, or what do you recommend usually in these situations? Well, so I think it does actually depend on what you try to do. But generally I think if you use more than one source of information, then you have a stronger case than just using one data set. So I think if we take the example of the US: if you're interested in understanding whether — NOAA will say that there's been record warmth in the contiguous USA — they will make that statement based on a reference network, and then you will get one number out. Whereas if you would use, let's say, the reanalysis which covers the US, you will not get the same number, because you're not measuring exactly the same thing. But it can then depend on what you're interested in: if you're interested in the trend, probably the best thing is to compare both of the data sets to see if there are differences or if they agree. And if they agree then you can be more certain of your conclusion. So, a question about the resolution, spatial resolution. For many climatic variables, like rainfall and so on, you don't need, like, I don't know, 100-metre resolution, because it's like an atmospheric feature. But for some, for example temperature, now the European Space Agency has a new project — I think it's next year, I think it's called LST or something — the temperature satellite. And it will be, I think, 50-metre temperature measurements, 50-metre resolution. How much do you think that's needed, the spatial resolution for some climatic variables? How much is it needed? Do you think for most variables actually you're fine with the coarse resolution? I mean, this is a really boring answer, because I think again it really depends on the application you're trying to use it for. So if you look at the LST for instance — that is the land surface temperature — actually, personally, I think quite often there's some confusion in the communication, because those things will be very different. And I think the resolution depends on, you know — if you want to understand the trend of change in a city or a country, then maybe it doesn't really matter.
You don't need the exact location, but if you need to do something like precision agriculture based on that, then the resolution becomes important. And then if you want to track, like, fire danger or, you know, something that can happen locally, then possibly spatial resolution will make a huge difference. Just the last question, about IBM — I think they have a system called GRAF or something, so a high-resolution weather forecast system. Do they really get much better accuracy in forecasting? Are you people jealous of the equipment they have and the system they have? What is your impression, if I may ask? You would have to ask my colleagues in the forecasting department. Okay, but they kind of have, I think, a global system at one kilometre — the last thing I read about it — and they claim that by doing a higher spatial resolution they also increase the accuracy. Yes — well, every month or so there's a score comparison between the different centres and the different models out there, but I've not seen any of them for a long time. So, okay, that's a question for my forecasting colleagues. With this, we need to stop because we have another speaker coming. I just want to thank you for your time and for connecting, and hopefully maybe join the next workshop, which will be in June next year in Prague, again on the European data cube and environmental data cubes. So thank you one more time for your talk and good luck with your work.
|
Freja Vamborg, Senior Researcher at the Copernicus Climate Change Service, illustrated the European State of the Climate report: this annual report, produced on behalf of the European Commission, provides an analysis of climate monitoring for Europe for the past calendar year, with descriptions of climate conditions and events.
|
10.5446/55265 (DOI)
|
I'm coming from the same institution as the previous presenters, Dr. Dragutin Protić and Milan Kilibarda, that means from the University of Belgrade, Faculty of Civil Engineering, and the Department of Geodesy and Geoinformatics. The CERES project is a project on the national level funded by the Science Fund of the Republic of Serbia. Okay, it's a project on the national level; however, we hope that some of the results will be available on a broader level, so that's a good opportunity to also present these results in this kind of workshop. First, I would like to briefly present some goals of this project. But first, it's very important to say that agriculture is one of the most crucial sectors in the Serbian economy: that sector employs more than 20% of the total labour force in Serbia, and it also produces a significant part of the total exports of goods from Serbia, again more than 20%. So the main problem here is that the obvious decrease of the rural population causes a lack of labour force in agriculture, and also inefficient practices make it barely profitable for most of the farmers in Serbia. One solution to improve this situation is to use novel technologies that provide timely information relevant for decision making in agriculture, which will significantly decrease production costs and increase yields and quality of agricultural products. Another opportunity to increase the profitability of the Serbian agri-food value chain is a wider adoption of carbon farming practices, which result in carbon sequestration in soil and help to combat climate change. Taking this in mind, the overall objective of the CERES project is to develop a set of geospatial information products, affordable to any type of end user, that will be designed to support information-driven decisions in agriculture, to increase the profitability of the agricultural sector in Serbia, and also to support carbon farming. The set of products: first, identification of crop growth disturbance; the second one is yield estimation; then estimation of soil organic carbon; identification of tillage activities; extracting information from web textual resources; and finally high-resolution daily temperature and precipitation prediction, which will be exactly the input for the rest of the project — but anyway, we already have very good results in this field. All information and all those products will be generated by applying artificial intelligence methodologies to geospatial data from various sources, primarily from Earth observation data — that means Copernicus missions — meteorological data, global soil data like LandGIS and SoilGrids, and also from web textual sources and from other available data sets. Okay, I will pass briefly through all those tasks. The first one is prediction of crop growth disturbance events. The pattern recognition and early detection of crop disturbance will be based on artificial neural networks. The algorithm will be automated and it will deliver the results without the need of any visual inspection of images. That's our idea: the Earth observation core data that will be used is derived from the freely available Copernicus Sentinel missions, and training in situ data will be collected in the context of the CERES project, but also potentially Copernicus in situ data, as well as data from other European projects, that means from previous ones. Okay, the second task is yield estimation.
Currently crop yield estimation is mainly done through time-consuming and very expensive field sampling, which includes the measurement of biomass weights and grain size — we know that — and our idea is that the commonly utilized biophysical characteristics of vegetation in estimating crop yield will be calculated from vegetation indices derived from Sentinel-2 data. Since a universal model that would be applicable for all crop types is hardly achievable, we know that, our aim is to develop machine learning crop-specific models based on biophysical parameters — those vegetation indices — rather than raw optical data, and also meteorological data. The next task is identification of tillage activities. We know that carbon farming includes the cultivation techniques of regenerative agriculture that take carbon dioxide out of the atmosphere, where it causes global warming, and convert it into carbon-based compounds in the soil that aid plant growth. Within this objective we will present an approach for mapping tillage activity changes by using Sentinel-1 and Sentinel-2 data at high spatial resolution, that means 100 metres, and by applying a temporal change detection algorithm based on artificial neural networks as the base machine learning classifier in this case. The next task is estimation of soil organic carbon. There were really a lot of presentations related to soil organic carbon, but we know that building up soil carbon is one key to achieving high yields without chemical inputs. The surface reflectance and vegetation indices derived from optical satellite imaging — that means from Sentinel-2 — soil texture from radar imaging, then climate variables and terrain factors will be used as covariates in this case to build a predictive model of soil organic carbon. For that purpose the performance of several different machine learning algorithms, based on in situ measured data values, will be tested with the objective to design the best predictive model for a particular site, a local site. Three main sub-tasks are related to the mapping of soil organic carbon: the first is to obtain all available training soil data; the second is to pre-process all predictors or covariates which are important for this model — all predictors should be downscaled or upscaled to 30-metre resolution; and the last sub-task is to automate the generation of soil organic carbon maps from point data with associated prediction uncertainty, again by using state-of-the-art machine learning methods. The next task is interesting in this case and it is related to natural language processing. We know that information about undesirable agricultural events like calamity, drought, disease, pest attacks and also yield estimates is often published on numerous local or statewide news sites, in agricultural reports and also on social media, and such information, if it is properly recognized, could complement the available Earth observation data during the process of building and validating different estimation models. Therefore it is necessary to apply different artificial intelligence methods, which include natural language processing of the Serbian language in this case, because it is a product at the national level, as well as expert-defined rules and machine learning techniques, to extract and classify the information about relevant time- and space-determined events.
The fulfillment of the objective is conducted through several interlinked tasks: the first one is the identification and classification of the available text sources, the second one is the development of related data crawlers, then information extraction, geocoding, and finally the application of the extracted information in building and validating different estimation models. The last task is to produce high-resolution climatological grids for Serbia. Our aim is to generate high-resolution daily temperature maps from freely available temperature observations by using geostatistical and machine learning techniques. Those data will also be used for precipitation maps, and for this purpose a new method for spatio-temporal interpolation based on machine learning, which we call random forest spatial interpolation, was already developed. In the next step I will introduce — I will inform you about — some ongoing activities related to some of those tasks, not each one. The first one was prediction of crop growth disturbance events. At the moment we are working on the spatio-temporal data cubes with the Sentinel-1 and Sentinel-2 data, also with meteorological data for each parcel; libraries for data download and preprocessing are already developed, and at the moment the final preparation of predictors and the harmonization of different in situ data is being performed in this case. We prepared four years of data for analysis, Sentinel-1 and Sentinel-2 data together with weather data, for northern Serbia, because our case study will be Vojvodina, that means the northern part of Serbia, which is the dominant agricultural area in our country. We have already collected 3,000 fields with yield information, and some initial modeling was performed on part of the data, but we are still working on the baseline models. The next interesting task is estimation of soil organic carbon. We know that the quantity of publicly available data has been growing at a massive rate and will continue to do so in the near future. However, we have a problem in Serbia with the limited amount of available data, especially reliable data, and the challenge will be how to apply the knowledge from global models built on wider areas to the case study of Serbia. As a possible solution, domain adaptation — the transfer learning technique in which the system aims to adapt the knowledge learned from a source domain and apply it to another related domain — will be utilized in this case. Considering several sources of relevant databases on the global scale, so-called multi-source domain adaptation, an additional form of the domain adaptation technique, will also be used: we will practically use more than one source domain to compensate for the lack of data in our case in Serbia. We are still waiting — just look at this map — we are still waiting for LUCAS 2018 data for the West Balkan countries, and just to test some models we started with LUCAS 2015. We are interested only in the in-situ data related to cropland classes, and here you have a map with all those samples — those LUCAS data, but only the cropland samples — over Europe.
Just a very short statistic: in Malta we have only two samples related to croplands, but in Spain we have almost 2,000 samples. And we use some additional covariates — some of those are soil chemical properties, which are also present in the LUCAS data — again meteorological data, some digital elevation products and also satellite observations. At this moment we use Landsat mosaics and also MODIS data, and the coordinates will also be involved in this modeling. We just tried some basic machine learning techniques like gradient boosting regression, but first, just to say that those maps are really very interesting, because we use the whole set of data for training and testing the model, and the model is always validated with the leave-one-country-out approach. That means we use the whole LUCAS data set, but we leave the data from one country out just for validation, and the results are really very similar in this case. You see some results like gradient boosting regression, random forest results, ridge regression, support vector regression and also extreme gradient boosting regression. All those models were processed in a Python environment, with NumPy as a fundamental package for scientific computing, but also scikit-learn as an open-source machine learning library, and XGBoost as an optimized distributed gradient boosting library. But as I told you, we are still waiting for the inputs from LUCAS 2018, and we are also waiting for the collected in situ data from Serbia — hopefully reliable data, that's always a problem. The next one — we are running over time. Okay, I will finish in the next two minutes. The next one is extracting information using NLP. Here you have the tasks, and at the moment we already have some results related to an embedding library for data in Serbian. You can try this link, but anyway it is specifically related to the Serbian language, so you can probably just look at how it looks. And the next one, the final one, is a result that is totally accomplished: we have already prepared maps, or grids, with one-kilometre spatial resolution and one-day temporal resolution for climate elements like minimum and maximum temperature, mean atmospheric pressure and precipitation, and we also aggregate those data at the monthly and annual level. For this purpose we already developed the random forest spatial interpolation methodology, and it is already embedded in our package meteo — you can find the information about this package at this link, and all the details are given in this paper. We produced those data based on SYNOP (OGIMET) data — 28 stations are in Serbia — and we also used independent data for Vojvodina from some automated meteorological stations in Vojvodina, and we produced those grids, everyday grids, for 20 years; they will be inputs to our models. Validation of these grids has already been done, and finally you can also find the data in the following repository, so it is open and free for all people. Thank you Branislav. Let's see, we went a bit over time. You are doing a lot of work — the project is mainly focused on Serbia, if I understand correctly, so it is also interesting to see how the predictions at the national level will match the European level and how we harmonize that — that is why we call the project, by the way, Geo-harmonizer. So thank you so much for your talk.
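As an illustration of the leave-one-country-out validation strategy described in this talk, here is a minimal sketch using scikit-learn's LeaveOneGroupOut splitter. The feature matrix, target values and country labels are randomly generated placeholders, not the actual LUCAS processing pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical samples: rows are LUCAS-like cropland points, columns are covariates
# (spectral bands, climate variables, terrain attributes); y is soil organic carbon.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))
y = rng.normal(loc=20.0, scale=5.0, size=500)
countries = rng.choice(["ES", "FR", "DE", "IT", "RS"], size=500)

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=countries):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    held_out = countries[test_idx][0]
    score = r2_score(y[test_idx], model.predict(X[test_idx]))
    print(f"held-out country: {held_out}  R2: {score:.2f}")
```

Swapping RandomForestRegressor for XGBoost, ridge or support vector regression inside the same loop reproduces the kind of model comparison shown on the slides.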
|
Branislav Bajat, Faculty Member of the University of Belgrade, illustrated the CERES project. In this talk, he explained how the use of AI in agriculture should be especially important in Serbia, as agriculture is one of the crucial sectors of Serbian economy. This project will be an important step forward in the application of a wide range of relevant data generated on a daily basis and offering a huge potential for improving agricultural production and developing the concept of smart and regenerative agriculture.
|
10.5446/55267 (DOI)
|
[The opening part of this talk is unintelligible in the source transcription.]
I assume that you can see my pointer. So, this box in the bottom right — I've sort of put together a slightly messy diagram of how this looks with kind of the technologies that we're talking about. So this is what I would call our data cube processing chain, where we're going from sort of having this wealth of satellite imagery. Originally the sort of programme was to have Sentinel-1 and Sentinel-2 from ESA — well, sort of the imagery, the data from ESA downloaded from various portals for various reasons — as well as Landsat. But we've actually sort of expanded beyond this, bringing in kind of MODIS data; there's also some SPOT imagery in there as well, and more recently NovaSAR, which is a new S-band SAR satellite for those of you who are sort of more satellitey experts — a kind of UK-operated satellite — but we've had some passes over Fiji. So what we've sort of developed is a system where it's quite easy to sort of bring in any sort of satellite data, as well as any other kind of geospatial data, into this data cube. So the first kind of step of this data cube is to create this ARD, which, as an Earth observation specialist, I can say is the majority of the legwork of using satellite data. So our ARD processing chains are in line with kind of industry standards — there's a lot of work being done, particularly out of Australia, on kind of these standardisations — and we've sort of integrated these with technologies such as STAC, which enables kind of the indexing of the data in a commonly known way. It's all kind of dockerised, containerised within our GitHub, all of these processing chains, and the outputs are cloud-optimised GeoTIFFs. So all that's been really interesting work and, as I said, kind of takes out a lot of the legwork along the way. And these kind of open source technologies have been really integral to that. I'm sure that most of the technologies within these slides have been mentioned sort of yesterday in talks as well as today, and I imagine that they will all be highlighted fairly heavily as well. These sort of three across the top here are ones which have been used throughout the whole process. So once we've got our ARD, which is analysis ready data, we've put that into an S3-like storage. Currently this is hosted on CEMS, our kind of internal cloud computing system, but it is to be moved to the University of the South Pacific. Many of the other data cubes operate on kind of AWS currently, including kind of the African data cube — sorry, Digital Earth Africa — and Digital Earth Australia. We began with this system being hosted on AWS, but because of this kind of sustainability aspect and sort of conversations which took us in the direction of hosting it elsewhere, we moved on to our own systems and then sort of made this connection with the University of the South Pacific. So it's an S3-like storage that we're using, but it's not AWS S3. And then we've got this product generation, which I'll go into a bit more in a minute.
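Since the ARD is indexed with STAC, a catalogue search is the usual entry point for finding scenes; here is a minimal sketch using the pystac-client library. The catalogue URL, collection name and bounding box are hypothetical placeholders, not the project's actual endpoint.

```python
from pystac_client import Client

catalog = Client.open("https://example-stac-endpoint.org/stac")  # placeholder URL
search = catalog.search(
    collections=["sentinel-2-l2a-ard"],            # placeholder collection id
    bbox=[177.0, -19.0, 179.0, -17.0],             # rough box over Fiji, illustrative
    datetime="2020-01-01/2020-12-31",
)
for item in search.items():
    # Each item points at cloud-optimised GeoTIFF assets that can be read remotely.
    print(item.id, list(item.assets.keys()))
```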
So this is sort of bringing in this really cool technology of the Open Data Cube, alongside sort of Dask for scaling the processing, enabling us to do processing across scales up to kind of country-wide analysis, which as a kind of scientist was sort of a bit of a learning curve for me. This sort of product generation is happening within Jupyter Notebooks, which I will again go into in a bit more detail quite soon. And then we've got this kind of user access. So there's two ways that the users can kind of access this data. One is through Jupyter Notebooks directly, sort of talking back to our S3 storage of ARD, and that's kind of taking advantage of this STAC indexing in the way that you access the data — I'll show a little bit of code kind of later on. But yeah, if you've sort of looked into any of the other data cubes, we operate in a very similar way. And the other way, apart from accessing it directly through Notebooks, is, in Fiji, kind of using this entry system where we've developed kind of a portal along with the apps provided by the other partners, such as the Climate App and the Disaster Resilience App; and then for Solomon Islands and Vanuatu in general, it's a Terria-based system, or portal rather. So, as with kind of several of the talks I've heard recently which have built systems off the Open Data Cube, from a technical standpoint our data cube is an implementation of all of the amazing work which has gone on with the Open Data Cube, particularly by the likes of Geoscience Australia. It's enabled us to sort of hit the ground running with implementation. And for those of you who kind of don't know the principles of an open data cube, I'd certainly go and check out the Open Data Cube website — the documentation is pretty readable, and yeah, I'd recommend going and having a look around. But in principle, it's kind of a way of taking satellite imagery and stacking it up, and then you can access it in a way of being able to take small chunks of data through space and time and provide analysis on it sort of straight away, rather than having to download whole images, which is really beneficial. So, here's just kind of an overview of what kind of data we have within our data cube. As I've said, this is sort of what we initially said we'd do on this project, which was to bring in Sentinel-1 and Sentinel-2 and the Landsat archive. We have added some more data sets as well, which is really exciting, and they will be accessible by kind of the users, especially when it's handed over. When this project was set up, we were sort of saying, oh look, we can go back to 30 years' worth of Landsat data, as you kind of do. But the realities are that Landsat 4 and Landsat 5 don't have great availability over the South Pacific. So this is because the Landsat series wasn't always imaging — it was being turned off particularly over cloudy regions, which the South Pacific is for many months of the year. So we have got some quite large data gaps in this kind of time series going back, which is a shame for some of the use cases that we have, and also the sort of faults on board Landsat 7, which have led to the striping. But when you sort of mosaic a load of data sets together, that does sort of go away. Landsat 8 onwards, there's a really fantastic kind of data set in there. I should have actually put in the statistics of how many images we have loaded in and that kind of thing across the South Pacific.
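To make the "small chunks of data through space and time" idea concrete, here is a minimal sketch of the kind of call the Open Data Cube Python API provides; the product name, band names and site coordinates are hypothetical placeholders rather than the project's actual configuration.

```python
import datacube

dc = datacube.Datacube(app="ndvi-example")

# Load a small spatio-temporal chunk instead of downloading whole scenes.
ds = dc.load(
    product="s2_l2a_ard",                      # placeholder product name
    x=(178.40, 178.50), y=(-18.20, -18.10),    # small illustrative box near Suva
    time=("2020-01-01", "2020-12-31"),
    output_crs="EPSG:3460",                    # Fiji Map Grid, an illustrative choice
    resolution=(-10, 10),
    measurements=["red", "nir"],               # placeholder band names
)

# A simple NDVI time series averaged over the chunk.
ndvi = (ds.nir - ds.red) / (ds.nir + ds.red)
print(ndvi.mean(dim=["x", "y"]).to_series())
```

Scaling the same pattern up to country-wide runs is where Dask, mentioned above, comes in.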
It is a very impressive amount of data. One thing I realise I haven't particularly covered yet is what sort of stage we are at in this project. We're coming very much to the end of it. All of the data is loaded in and all of the products are there and available, and we're currently kind of in a testing phase with some of those government bodies who are going to be using this data cube, as well as sort of at the point of doing trainings within the University of the South Pacific as well, to sort of ensure that there's a smooth handover of the system. So all of the data is there and done. There's a couple of things with sort of integrating the data cube into Terria which our developers are still working on — it's sort of very near the end of that process — and the Fiji solution is very much there; that was sort of our starting point. So, as well as these kind of ARD products, which we've sort of pre-processed, we've also gone further and put some derived routine products into our data store. So this is where we've developed things at a certain cadence and then pushed them back into the S3-like store. Currently, kind of the more on-demand products, which are the ones that the users are making, don't get pushed back into our S3 storage, but our routine products do. So the routine ones are two main products. We've got water masks, which have been developed for every sort of single scene across all of the sensors. Many of the other data cubes have used kind of something called WOfS, which is Water Observations from Space, but we sort of found quite a few flaws with that, especially looking across other sensors, as we wanted kind of Sentinel-1 to be a large part of this solution. So we went for producing water masks by machine learning techniques to give a two-band raster, one of which is a binary mask and the other a kind of predicted confidence, which has meant that we can kind of use water masks with more certainty throughout our analysis. So, why did we make water masks routine and sort of add to that ARD by adding a water mask to every single product? It's because they are commonly used across many of our products, and having these pre-processed reduces the processing time and kind of prevents, or makes it less common, that we run into kind of processing issues. So that's something which really helps us along and keeps everything running quickly. As well as doing these for every individual image, we're also doing it for annual aggregations. So it's quite common that we found the users want to run one of our kind of products for a year, so if they can take that annual water mask, then that kind of again reduces that processing time. So, it's a similar concept for the geomedians. We've produced these — sorry, I realised that the text on the bottom left is actually not correct, that's the water mask text again; I'll make sure that's changed for kind of the recording. So, the geomedian products we've done on an annual basis, and what this gives us is kind of a best cloud-free composite of sort of the optical data set. So we've done this for Sentinel-2 and for Landsat 8; they're kind of our fullest data sets. And what it is, is kind of like a median function, but it's done on all bands kind of at the same time, to remove the likelihood of cloud in the best way possible and get the best kind of spectral representation across each of those bands.
So, that's something which took quite a lot of processing power, to do those for each of the three countries for every year since 2013, and it sort of certainly kept our DevOps team entertained for a while. So, just here kind of touching on access again — I've explained most of this already actually. So, on the left here we have kind of this JupyterLab instance, kind of a sandbox environment, similarly to how many of the other data cubes are kind of accessed, where we've got a set number of scripts which can be run, and they can also be changed, etc. Currently this sort of data cube system isn't completely open source. It's currently designed to be for users within the government ministries of Fiji, Vanuatu and Solomon Islands, as well as users within the University of the South Pacific, but this could be subject to change in the future. But those users can go in and access the JupyterLab instance and run the products, or — this shows our kind of entry solution for Fiji — where you can see how you can kind of order these products via a form. The parameters on both sides are the same, in that you put in the parameters that you want, such as the area of interest, the time range, the resolution and kind of the projection, which are common across all of our products, as well as sort of ones which are more bespoke to each of the products. I'm going to show you the products in a minute; I realise I haven't really mentioned those so much yet. One thing which has been quite the bugbear for us in this project is the anti-meridian, which crosses Fiji, as you can kind of see represented within this geomedian here. I think this was an attempt at making a geomedian for 2019, where you see we get data gaps along the anti-meridian. We had quite a few issues with missing data when displaying in certain projections, especially global ones such as EPSG:4326 — which any kind of geospatial people on the call will not be surprised about — but trying to get the Open Data Cube technology to kind of work with this in a way which doesn't have too much missing data has been a challenge, particularly as the way the system works enables a user to sort of select any area of interest. If the area of interest sort of only partially crosses the anti-meridian, or sits only on one side of the anti-meridian, or is too large over this area, then it quite often results in kind of missing data — such as we had a large square up here missing for a while, and quite a lot of these islands in the southeast were missing for a while as well. If you're interested in this, then one of our DevOps resources, Luigi — he's no longer at the Catapult anymore, but he did a lot of work on this about a year and a half ago — has written most of it down in quite interesting blogs, so if that sort of thing interests you then go and take a look. So these are some of the data cube products which we have kind of supplied; they're almost demo apps in a way, in that we're hoping that the users will go and kind of create more afterwards, and these were based on kind of this user-centred design approach of talking to those within various ministries, government ministries, to see which products would be of most use to them. Many of them are similar to those produced by Digital Earth Africa and Digital Earth Australia. We've got ones around vegetation, ones around water quality, ones around permanency of water.
All of these things are important when we're looking at environmental factors such as sea level rise, which are pertinent in the small island nations. So I think for the time I'll not dwell on these too much, but if any of you have any kind of questions about specific products then we can cover those in the question session. This kind of shows one nice example of a change between the years 2000 and 2019, going from kind of the greens being 2000 to the oranges being 2019. And we sort of looked at this one area and saw there's a lot of change going on in this sort of estuary, and then we sort of spoke to our kind of in-country representatives about this. So you can see this box on the right shows where there's a lot of change going on; the box on the bottom left, the sort of blue box, shows there's not much change going on there at all. And what had happened was, there had been dredging within this river here, and a lot of the sand was being sort of deposited into this area here, which has sort of changed the local morphology of this estuary. And there's a lot of corals and things like that along this coastline. And so it's just one extremely quick and brief example of how we anticipate this sort of data being used towards these kind of more environmental types of applications, of sort of evidencing how a decision has impacted or will impact kind of an ecosystem. So, kind of some very brief reflections from me as an Earth observation scientist: I came straight onto this project after I left university, and I was just happy that I could kind of start working on these products straight away. And just the kind of power which is in there, and the ease which is in there of accessing these data sets, compared to when you need to go and do work on other projects elsewhere and then have a whole room of data. So, yeah, for me, the kind of Open Data Cube, and all of the other sort of software which I've sort of briefly mentioned as well, really comes together in a powerful way, and you can see that by how many data cubes are being used all over the world, based off these open source kind of solutions. So I think that was kind of a brief reflection from me on sort of the ease of access which data cubes provide. So I suppose now we've got a couple of minutes for any questions. I don't know how you want to do this, Tommy, whether we're taking questions over chat or verbally, or — I'm unable to hear you. I'm not sure if you're talking. I can hear you now. Yes, so we're ready to start with questions. Okay, I'm looking at the audience and I'm looking at the chat, if there are questions for Sarah. Let me kick off. I want to apologise for promoting you to PhD — I do wish you to engage in science. And related to that, I would like to ask you: in your talk you spoke a lot about technology and implementation. Are you facing some specific scientific challenges also, in terms of statistical methodology, spatial analysis methodology? How do you deal with the uncertainty? So what are the top, let's say two, scientific challenges? So I think that — yep, very good point, I've sort of focused on technological implementation, but there are, yeah, all of these questions around — if you were doing any one of the products for which we've kind of provided the kind of baseline that you can run.
You would, from a scientific standpoint, yeah, you would certainly want to go and validate those and kind of implement them and see to what level of accuracy they are. Currently there's been very limited kind of ground data going into these and, yeah, sort of ways of validating. Most of the products are based upon quite standardised Earth observation techniques and things such as NDVI, and quite often the outputs aren't definitive — they're kind of a range, with kind of a likelihood or a confidence attached to them of how those outputs look. What the outputs are is kind of a sort of country-wide or region-wide initial assessment, with the idea that it would sort of highlight possible problem areas or areas of concern to these ministries, who could then go in and sort of take a closer look. So it's kind of that first step in that analysis kind of chain, as well as providing sort of evidence looking backwards. But sort of, yeah, as I sort of said towards the end there, what's been provided is sort of these Earth observation products. So they're sort of there, in the same way as with the other data cubes, almost as demonstrators of how you can get these systems working, so that then more complex kind of machine learning with training data, etc. — so there's the capacity for those to be built into the final product, more through that kind of access to the notebooks. And we're especially in conversations with kind of the University of the South Pacific; they're sort of looking to take on that kind of scientific ownership of the products going forward within their kind of programme. We have one question from the online participants: whether Satellite Applications Catapult is also assessing data for the offshore marine part of the islands. So, yes — a lot of the kind of marine areas are covered within the analysis ready data, so that's a bit of a look into what we could do with that, and one of the products is water quality, which is looking at the turbidity of the water kind of coming out of estuaries in particular. And there's a platform, most of which has been provided by the government ministries, so it's kind of a one-stop shop for data, so all of the kind of marine protected areas and the known areas of coral and that kind of thing are kind of marked within it. As I've said, the sort of ability to produce new products is there within this JupyterHub system. Sorry, a fire alarm's going off — I'm not sure if you can hear that. So the ability to produce new products is there. The analysis ready data which has been produced is for land surface reflectance, whereas you can do slightly different processing on data sets to adjust it more to marine applications, so that's kind of why we haven't gone into maybe some of the other marine applications, but certainly there's scope to add kind of chlorophyll-a type indices and that sort of thing. So, the question is QGIS — we didn't see QGIS in your workflow, like, especially from the user perspective. And then the second question is about the people in Fiji: you know, they have a different education system than the UK obviously, and, you know, how do you get these people to use the tools and things, you know, independently? So two questions, QGIS and capacity. So, with QGIS —
it is something which we have used in the background as we have been developing, to check and validate products as we have been going along, that kind of thing. The two GIS systems being used in country are Terria and Esri's ArcGIS as the platforms, and of course the users can download and use the outputs in whichever GIS system they would like. So, as I said, the list of software was not exhaustive. Secondly, about the education system: many of the government ministries that we have been talking with and that we have been training have data scientists — sorry, technical users, as we call them — as well as policy users, and we have been doing separate trainings for both of those groups. So there are people within the ministries who have that level of technical GIS knowledge. We are also hoping that the University of the South Pacific will bring the data cube into their teaching within their masters programmes in GIS and satellite remote sensing; many of those people already go on into these government positions, so we are expecting that location of the data cube within the community to filter through into the ministries.
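To make the point about ease of access more concrete, here is a minimal sketch of the kind of workflow described above, using the open-source Open Data Cube Python API (the datacube package). It is only an illustration: the product name, measurement names, bounding box and resolution below are assumptions, not the actual CommonSensing cube configuration.

import datacube

# Connect to a (hypothetical) Open Data Cube instance.
dc = datacube.Datacube(app="commonsensing-ndvi-sketch")

# Load surface-reflectance bands for a small area; all values are placeholders.
ds = dc.load(
    product="ls8_sr",              # assumed product name
    measurements=["red", "nir"],
    x=(178.0, 178.5),              # assumed longitude range near Fiji
    y=(-18.3, -17.9),              # assumed latitude range
    time=("2019-01-01", "2019-12-31"),
    output_crs="EPSG:4326",
    resolution=(-0.00027, 0.00027),
)

# NDVI = (NIR - red) / (NIR + red); an annual median reduces cloud noise.
ndvi_2019 = ((ds.nir - ds.red) / (ds.nir + ds.red)).median(dim="time")

Repeating the same load and composite for an earlier epoch and differencing the two medians is one simple way to produce the kind of change layer discussed for the estuary example.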
|
Sarah Cheesbrough, Earth Observation Specialist at Satellite Applications Catapult Ltd, UK, spoke about the EO Data Cube based in the South Pacific region (Fiji, Solomon Islands and Vanuatu) as part of the IPP CommonSensing project, funded by the UK Space Agency's International Partnership Programme (https://www.commonsensing.org.uk/).
|
10.5446/55272 (DOI)
|
Hello everyone, my name is Claudia Vitolo, as Tom said. You find me in a moment of transition, actually: just a week ago I started a new job, and I am now a senior scientist at the European Space Agency's Centre for Earth Observation in Italy, but today I'm going to talk about work that I contributed to until last month, when I was working as a researcher for the fire forecasting team at the European Centre for Medium-Range Weather Forecasts. The work is teamwork — it's the work of a wonderful team that you can see listed on this slide — and my talk will touch on a few resources in terms of data and tools, and I will also give you a glimpse of what research projects the team is currently busy with. However, if you want to hear more about wildfire danger forecasting, I invite you to contact Francesca Di Giuseppe, who is the leader of these activities at ECMWF. A little bit about me, just to give you some background: I have a degree in civil engineering, a master in environmental management, a PhD in hydrology and data mining, and a lot of experience as a researcher for various universities, research centres and consultancies. In 2016 I started working for ECMWF on the development of multi-hazard early warning systems for weather-driven hazards, then moved on and focused my research on wildfire forecasting only, and since a week ago I'm a senior scientist at ESA. Now let's start with a brief introduction: what ECMWF is and what the Copernicus programme is. ECMWF is an independent intergovernmental organisation that supports a number of member states and cooperating states. It's a research centre and an operational service that works 24/7, and its main mission is to produce and disseminate numerical weather predictions to its member states. Copernicus, on the other hand, is a European Union Earth observation programme which consists of a number of services, and within ECMWF we have two of these services implemented: the Copernicus Climate Change Service, also called C3S, and the Copernicus Atmosphere Monitoring Service, also called CAMS. But ECMWF also contributes to a third service, the Copernicus Emergency Management Service, also called CEMS, for what relates to floods and fires. The activities of both the CAMS and CEMS services are related to wildfires, but there is a distinction. While CAMS's goal is to monitor wildfire activity in real time and estimate the fire emissions released into the atmosphere, CEMS fire aims at providing fire danger forecasts to feed the European Forest Fire Information System, also called EFFIS, and its global counterpart, the Global Wildfire Information System, also called GWIS. Both of these platforms are managed by the European Commission Joint Research Centre, while ECMWF is only the computational centre. Just to give you a little bit more background on Copernicus EMS: the emergency management service aims at supporting civil protection and humanitarian aid operations by providing information for emergency response and disaster risk management in terms of two things — on-demand mapping of emergency situations, and early warning and monitoring of three weather-driven hazards, mainly floods, droughts and forest fires. In this presentation I will only focus on forest fires. But what exactly do we mean when we talk about wildfires?
So there are a number of comprehensive definitions, but in summary I would say that a wildfire is an unplanned and uncontrolled fire that burns in wildland vegetation, often in rural areas, and there are different types. Wildfire can burn vegetation both inside the soil and above the soil. So we have ground fires, where typically what ignites is the organic matter within the soil — this can be plant roots, for example — and this is very problematic because ground fires can smoulder for a very long period of time. Then we have surface fires, which burn the dead or dry vegetation that is either lying on the surface of the ground or growing just above the ground, so fallen leaves, patches of grass and so on are the things that fuel this kind of surface fire. And then we have crown fires: a crown fire burns the leaves and the canopy on top of the trees and shrubs. While in general I would say that the area burned by wildfire has declined globally over the past decades, it's very important to clarify that fire is now occurring more frequently and, unfortunately, more severely in ecosystems that have historically rarely experienced fire — think for example of the fires that occurred well above the Arctic Circle in 2019 and 2020. So, as a consequence, there is an element of preparedness that affects how a wildfire can impact an area and nearby populations. Now moving on to causes: what causes a wildfire? There are a couple of categories of causes. Wildfires can be caused by human activities or by natural phenomena like lightning, and can happen anytime and anywhere, but for about 50% of the wildfires recorded the cause is actually unknown; when the cause is known, it is estimated that about 90% of wildfires are caused by humans and the other 10% occur due to natural reasons. What we really know about wildfire is that it is driven mostly by weather: the risk of wildfire increases in extremely dry conditions, and therefore it is important to track drought situations and, for example, high winds. In terms of impacts, wildfires can disrupt transportation, communication and water supply, and can also release large quantities of carbon dioxide, carbon monoxide and fine particles into the atmosphere; this not only impacts the local weather and the large-scale climate but also leads to a deterioration of air quality, and this consequently leads to loss of property, crops, resources, animals and even loss of human lives. It is estimated that wildfires have affected millions of people in the past decades, and unfortunately, due to climate change, things will very likely get worse, because hotter and drier conditions are drying out the ecosystems and increasing the risk of fires. So what do we call wildfire danger? Let's start by saying that it is notoriously very difficult to predict when and where an ignition will occur, but once an ignition does occur it is much easier to determine how much a wildfire will grow, because the conditions needed to sustain a fire depend on three things: the weather, as we said before — in particular strong wind, which can favour fast-moving fires, hot temperatures, lack of precipitation and low humidity; the availability of fuel, which it is very important to map, because without fuel a fire cannot ignite and cannot sustain itself; and topography, which is an important element because flames burn very fast uphill, faster than they do downhill.
So what we need to remember is that wildfire danger is a measure of how dangerous a fire could be if an ignition occurred; technically this is a potential danger, and this is a concept that we need to keep in mind. There are several well-established systems that measure wildfire danger, for example the Fire Weather Index system, which was developed in Canada and is the most widely used worldwide; the National Fire Danger Rating System, developed in the US; and the McArthur Forest Fire Danger Meter, developed in Australia. Each of these systems generates a number of danger indices, and each index pretty much works in this way: the higher the index, the more dangerous the conditions. So they are pretty simple to understand, though not that easy to interpret, but we will get into that later in the presentation. Under the CEMS fire activities, the Global ECMWF Fire Forecasting system, also called GEFF, was developed and made operational in 2018. GEFF implements all three systems and therefore calculates all the related indices, and calculates additional indices on top of that for statistical reasons that I will explain later. GEFF works in two modes, reanalysis mode and forecast mode. For those of you who don't know what a reanalysis is: a reanalysis is a data set that blends together a modern forecast model with past observations, and the aim is to reproduce the fire danger conditions in the past with homogeneous coverage in space but also over time. So GEFF produces a reanalysis data set of fire danger but also forecasts, and the forecasts are available in different versions and up to 15 days ahead, consistently with the ECMWF meteorological forecasts. The reanalysis of fire danger indices is available from the Copernicus Climate Data Store, or CDS, under a Copernicus licence, while the forecasts are available in real time from the EFFIS platform under a JRC licence — so we need to keep in mind that the data licences are different here. In terms of production, GEFF is open-source software, now on its fourth version, and it's available under an Apache 2 licence. GEFF ingests GRIB meteorological inputs and outputs GRIB consistently with the meteorological forecasts, so the grid is consistent; however, we have systems in place to convert the GRIB outputs into NetCDF for those users that are not familiar with the GRIB format. GEFF is also an experiment-ready system, in the sense that experiments are really easy to run: we have strategies for running idealised conditions, and experiments can be run on demand quite easily. You can read more in the paper that is cited at the bottom of this slide. Now, the most widely used component of GEFF is the one that generates the Fire Weather Index, because it's also the index most widely used. The system itself is made of three non-interacting fuel layers, and the schematisation that describes the drying of the fuel depends on both long- and short-term temperature, humidity and precipitation. Wind plays an important role because it controls flammability, and the combination of dryness and flammability produces a general index of fire danger, the Fire Weather Index. Just to give you an idea of what an FWI map looks like: this is the EFFIS website, and you can see that the values are shown as categories, even though the FWI is a rating system whose number goes from zero to infinity, although values above one hundred are very rare.
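As a concrete illustration of the GRIB-to-NetCDF conversion mentioned above, the following is a minimal sketch using the open-source xarray and cfgrib Python packages. The file name and the assumption that the fire danger field can simply be re-saved are mine for illustration; the actual GEFF output naming and structure may differ.

import xarray as xr

# cfgrib provides the GRIB reader backend for xarray.
ds = xr.open_dataset("geff_fwi_forecast.grib", engine="cfgrib")  # placeholder file name

# Inspect the variables and coordinates decoded from the GRIB messages.
print(ds)

# Write the same fields out as NetCDF for users unfamiliar with GRIB.
ds.to_netcdf("geff_fwi_forecast.nc")

The same pattern applies regardless of which of the danger indices the file contains; only the variable names change.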
EFFIS made the decision to visualise this information in terms of categories — for simplicity, I would assume — and those categories go from low to moderate and then all the way up to extreme danger. Those categories are fixed, so there are fixed thresholds that define them. You can also see the photos here on this slide: the photos show the value of the FWI as an absolute value and the corresponding area on the map, and basically you can see that an FWI below 20, for example, corresponds to fires that affect only the area near the ground and are fairly easy to extinguish, while higher values may correspond to those very destructive fires that are very difficult to contain. This is a bit simplistic — I just added some photos to give you a sense of what this number means — but there is a problem, because categorising danger using fixed thresholds can lead to several problems. Say that an FWI equal to 'high danger' does not have the same impact everywhere. For example, think of an FWI equal to 'very high danger' which, in some particular circumstances, corresponds to a temperature of 30 degrees Celsius: 30 degrees Celsius may be very common in an area like the Mediterranean countries during summer, but may not be that common in an area like Sweden. So there are categories that are never reached in, for example, Nordic countries. We also need to remember that the forecasts do not provide any information about the probability of ignition: if the FWI is forecast to be extreme, it does not necessarily mean that there will be a fire. For this reason, and for other reasons too, these indices need to be interpreted by a domain expert. Now I would like to explain why the CEMS fire activities are so important, and what the added value of such a system is compared to the traditional way of monitoring fire danger. Monitoring fire weather, or fire danger — here I use them as synonyms — can be done using weather station data; see for example the left panel of this plot. This plot was generated for the fire event in Pedrógão Grande in Portugal in June 2017, and you can see the blue shaded area is where the fire actually occurred — the pixels are actually fire radiative power from satellites. The dots are instead the locations of the weather stations, and they are colour-coded based on the severity of the fire danger calculated locally; you can see the area was, on that day, categorised as being at extreme fire danger. The problem with this kind of solution for monitoring fire danger is that the information is only available at discrete locations, while if you use the CEMS fire approach you have a gridded layer, so basically the information is available everywhere and there are no gaps. Also, the information extracted from weather stations is only available for the current day, the day of the observation, while CEMS fire can build on weather forecasts and forecast fire danger up to 10-15 days in advance, at least for the medium range. At this point you might ask: but how reliable is it? How reliable is the CEMS estimate compared to the local estimate? In order to compare the CEMS estimate with the local one, I invite you to look at the second plot, which is basically a point inspection of the CEMS FWI layer at the location of the weather station, and you can see the FWI was estimated to be at very high danger.
Admittedly there is a slight underestimation, but this is normal when you compare gridded information with point locations. The real advantage here is that the FWI estimate using GEFF was made 10 days in advance, and this is important because it would give fire authorities enough time to prepare for a major event: they could use these forecasts as a trigger to take action and prevent a fire from happening, or perhaps control a fire that has already started. In general I would say there are lots of ways this kind of system could be used and is useful. For example, you can think of early warning systems, because it can warn local authorities days in advance; you can use it as a decision-making tool, because you have the time to allocate resources on the ground; and you can use it for scientific purposes, because, for example, the reanalysis products go 40 years back and are particularly useful if you want to look at trends over space and over time. So yes, it's a very useful tool. Now let's dive a little bit more into the data itself. This slide is a bit wordy, I apologise for that, but mainly I wanted to give you all the information in one place, so maybe you can use this slide as a reference in the future. ECMWF generates three types of data products. First, climatological data sets, based on the ERA-Interim and ERA5 reanalyses: these have different spatial resolutions, but the latest one, based on ERA5, is characterised by a 28-kilometre resolution, is available five days behind real time, and uses forecast model cycle 41R2, which was developed in 2016, so fairly recent. Then we have the data products related to the medium-range forecasts: these are issued daily at local noon — we will get back to this point later — and at the moment use IFS cycle 47R2, which became operational in May 2021. There are different data products within this category: a deterministic high-resolution forecast, so one realisation, with a spatial resolution of nine kilometres and a 10-day lead time; a probabilistic product made of 51 ensemble members, which has a lower but still very impressive 18-kilometre horizontal resolution and a 15-day lead time; and a number of statistical indicators that are calculated on top of our forecasts — anomalies, ranking, the extreme forecast index and the shift of tails — and I will tell you a little bit more about those later on. Then we have a number of experimental products: the hourly FWI out to three days, and the probability of ignition from lightning — this is very exciting, and I will tell you about that later too. Finally, we have the latest category, which relates to the seasonal forecasts. This has been developed and is currently being validated in collaboration with NASA; it is based on the ECMWF seasonal forecast system SEAS5 and is going to be issued monthly with a lead time of six months.
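To make the earlier point about fixed danger categories more tangible, here is a small, hedged sketch of how one could compare a fixed FWI threshold with a locally defined, climatology-based threshold using xarray on an FWI reanalysis file. The file name, the variable name and the example threshold of 38 are assumptions for illustration only, not an official recipe.

import xarray as xr

ds = xr.open_dataset("fwi_reanalysis.nc")   # placeholder: daily FWI reanalysis
fwi = ds["fwi"]                              # assumed variable name

# 1) Fixed threshold: days per year above an absolute value (assumed 38).
FIXED_THRESHOLD = 38.0
days_above_fixed = (fwi > FIXED_THRESHOLD).groupby("time.year").sum("time")

# 2) Local threshold: the 90th percentile of each grid cell's own climatology.
local_p90 = fwi.quantile(0.9, dim="time")
days_above_local = (fwi > local_p90).groupby("time.year").sum("time")

print(float(days_above_fixed.mean()), float(days_above_local.mean()))

In a Mediterranean grid cell the two maps may look similar, while in a Nordic cell the fixed threshold may never be exceeded even though the local percentile still flags anomalously dangerous days — which is exactly the motivation for the anomaly-based indicators (ranking, extreme forecast index, shift of tails) listed above.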
One thing to remember is that the climatological data sets and the seasonal forecasts will be available under a Copernicus licence, most likely both on the Climate Data Store, while the medium-range fire forecasts are available under a JRC licence; so if you need forecasts in real time you will have to contact the EFFIS team — there is a data request form on the website and you need to request the data that way. Now moving on to what's available in terms of training. ECMWF runs a number of training activities, and for those we use sample data sets for past events, which we also make available in a public data repository, in this case the Zenodo wildfire community. In there you will find a number of data sets for past events, but you will also find past versions of the reanalysis — those versions that did not make it into the Climate Data Store because they were still being validated and tested — so you can use them for educational purposes, basically. And then I invite you to have a really good look at the reanalysis. The reanalysis data set is very important for a number of purposes: you would use the reanalysis to identify fire-prone areas, climate teleconnections, fire regime changes, fire season modifications and lots of other things. We have written a paper, and there are some examples of that in the paper. And now I will tell you a little bit more about the probabilistic products. I'm not going to go into the statistics or the equations here; I will just tell you that those products are basically indices that tell you not only how intense the fire danger is at a particular time and place, but also how anomalous those conditions are compared to what happened in the past. So they are very important for forecasters, but you also need to know that interpreting those indices can be quite tricky, and I'm going to show you a video now in which the information provided by different indices can be confusing. So I really encourage you to interpret those indices with a domain expert. I'm going to play the video here; I'm going to use a web app that is internal to ECMWF and the member states, which is called ecCharts. I'm going to load two indices, the FWI and the EFI, which is the extreme forecast index, and I'm going to stop it here for a second. You can see here I'm loading the FWI first, and according to the FWI layer the south of Spain looks like it's in a pretty high danger condition, while the south of England and the north of France are not that concerning, right? They are green. Now let's resume the video: I then load another layer, the extreme forecast index, which is one of the probabilistic products I mentioned one slide before, and you will see the situation changes completely. Now it seems that the conditions in the south of Spain are no longer that concerning, but the conditions in the south of England and the north of France are now more anomalous than what usually happens at the same time of year. What I really want to show with this video is that you need the domain expert: the information can be confusing, can be really misleading sometimes, so you do need to interpret it right. I think the video stopped there.
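Since the reanalysis mentioned above is distributed through the Copernicus Climate Data Store, here is a hedged sketch of a programmatic download with the official cdsapi Python client. The dataset identifier and the request keywords below are my assumptions of the kind of values the CDS download form generates; they should be checked against the fire danger indices entry on the CDS web interface, and a registered CDS API key is required.

import cdsapi

client = cdsapi.Client()  # reads the API key from ~/.cdsapirc

client.retrieve(
    "cems-fire-historical",                 # assumed dataset identifier
    {
        "product_type": "reanalysis",       # assumed request keys and values
        "variable": "fire_weather_index",
        "year": "2017",
        "month": "06",
        "day": [f"{d:02d}" for d in range(1, 31)],
        "format": "netcdf",
    },
    "fwi_june_2017.nc",
)

The downloaded file can then be opened with xarray exactly as in the earlier sketches.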
Okay, so I have a slide now on the seasonal prediction products. The seasonal fire danger predictions are currently also experimental; the plan is to issue monthly forecasts of FWI anomalies with a six-month lead time, and this type of product is very important because it allows us to predict important changes in fire danger conditions months in advance, and perhaps this can trigger long-term and short-term adaptation measures. And now I want to talk to you about the experimental products. Just to give you a bit of background: the medium-range fire forecast — I already mentioned it before, so I'm not going to repeat it now — is issued daily, and it is basically a mosaic of what is expected in terms of fire danger every day at local noon, a mosaic of the different areas on Earth, so you can see what could happen at local noon everywhere. The assumption is that the worst conditions for fire occur around noon, and this assumption is very convenient because it makes the calculation of fire danger very efficient, but it's not necessarily true: we ran some experiments and noticed that in some areas in Europe the worst conditions occur in the morning, and in other areas in the afternoon. Therefore ECMWF will soon release two new data products, the maximum FWI of the day, based on hourly runs, as well as a map of the time of day at which the maximum FWI occurs. It is important to mention that the entire development of these two data products was triggered by feedback from Météo-France, the meteorological service of one of our member states, France, and this highlights how important the collaboration with member states is for ECMWF. There is one more experimental product I want to tell you very briefly about, and this last product is the result of recent research carried out by my colleague Ruth Coughlan. The key idea is that dry lightning is more likely to ignite a fire than lightning associated with heavy precipitation — it sounds simple, but it's a very important concept — and Ruth built a machine learning model that predicts wildfire ignitions from lightning forecasts, using fuel availability and soil moisture content as predictors; the output is a map of the probability of ignition due to lightning. The first results are pretty encouraging, and this is now triggering further research in the area. Now, very briefly, on to the tools. The very first resource I suggest you look at is the Copernicus Emergency Management Service website, because there are two tools related to wildfires to explore. The first one is the rapid mapping tool, where there are reports of events that occur in real time, and users can get information on the cause of the event — whether a fire was caused by natural or other sources, for example arson — and also information about the extent of the area that has been affected. The second tool, which I've already mentioned, is the European Forest Fire Information System, which provides an overview of the current situation in terms of fire danger in Europe.
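The ignition-from-lightning model described above is an ECMWF research product; the snippet below is only a generic, hedged illustration of the same idea — a classifier that turns lightning-related predictors, fuel availability and soil moisture into an ignition probability — using scikit-learn on synthetic data. The feature set, the toy labelling rule and the choice of logistic regression are all assumptions for illustration, not the actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic predictor table: lightning density, precipitation,
# fuel availability, soil moisture (all in arbitrary 0-1 units).
X = rng.random((1000, 4))

# Toy labels: ignition is more likely with lightning, little rain,
# abundant fuel and dry soil (illustrative rule only).
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.3) & (X[:, 2] > 0.4) & (X[:, 3] < 0.4)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability of ignition for one new combination of predictors.
sample = np.array([[0.8, 0.1, 0.7, 0.2]])
print(model.predict_proba(sample)[:, 1])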
Besides EFFIS we also have a global platform called the Global Wildfire Information System, or GWIS. EFFIS and GWIS present very similar information, but in certain aspects they are complementary. The platforms are both free and openly accessible; they provide daily fire danger forecasts for Europe and the globe respectively, and alongside the forecasts there is also a monitoring component that looks at active fires and burned areas based on satellite imagery — MODIS and VIIRS are the ones used for detecting the areas, but the perimeters are then refined using Sentinel-2 observations. On EFFIS you can also find statistics of wildfires by country, while GWIS provides information on fire emissions, so in this sense they are quite complementary. Then I would like to point you to the Copernicus Climate Data Store, which is obviously a data store; it is openly accessible, although it is behind an authentication layer, so you have to register an account. It includes a toolbox for pre-processing data on the server side, which has proved to be very useful, because users no longer need to download a large amount of data if they can process it in the toolbox and download only the resulting maps, plots and summary statistics — whatever they need. Then, in terms of software, the GEFF model itself is open source and available from an open repository that I link here. We also have another piece of software, used more for post-processing of fire danger data, for the R statistical environment: it's the caliver package, also available under an Apache 2 licence and on CRAN, and there are some vignettes and examples that show you the functionality of this package. For Python users we also have Jupyter notebooks that perform pretty much all of the operations that you can do with the caliver package, and this is what we have mainly used in the past for training. And this is my last slide, a simple summary of references for you to explore. With this I would like to thank you for your attention, and if you have any questions I am happy to answer. Thank you, Claudia, thank you so much — this is really an in-depth review of your work, and it is fantastic that all this data is available, including the data stores that you described. So the floor is open for questions; I'm just looking to see if there are any questions in the chat. If you have a question, post it in the chat; I'm also looking here at the conference hall. We don't have a video of the people, but there are about 25 people on site and 25 people online, so we're looking for questions for Claudia. Maybe I will kick-start, Claudia: we have these new Sentinel missions, and they are much higher resolution, like Sentinel-2. Do you think Sentinel-2 could be used to inventory actual fires and then use that as in-situ data for the reanalysis of your models — how do you use the other EU data sources to, let's say, quantify actual fires? Okay, the Sentinel-2 data is currently already used for refining the perimeters of burned areas, so it is actually used. In terms of assimilating this information into the reanalysis model, I'm not entirely sure it's in the plan, but it probably is; I'm no longer part of the team, so I'm not entirely sure what the strategy is in
the long term, but for these kinds of questions related to future developments I would suggest that you, or any other person interested in the audience, contact Francesca Di Giuseppe, who leads the activities and is more up to date. A connected question: there is a new system coming up — we heard from Raymond Sluiter from the Dutch space office — a new satellite with thermal sensors and multiple thermal bands; I actually forgot the name of the mission, something with LST, and I think it's around 50-metre resolution. How much do you think that's going to revolutionise the monitoring of fires and fire danger? Oh, I think it's very important that satellite missions develop and produce data at finer resolution and so on. For example, when we were looking at the work on detecting ignitions, it would have been very good to have information about biomass, and this is going to be available soon with the newest satellite missions, so I'm sure there is a lot to do and a lot that could be done — and yes, again I invite you to contact Francesca Di Giuseppe. There's a question by Piero Campalani: you also have a background in multi-hazard analysis — are there any online tools for multi-hazard risk estimation? The one I worked on is not open; it was developed to be commercialised. We developed a concept — it's called the ANYWHERE platform. Okay, but an open-data one — you're not aware of any? No. Okay.
|
Claudia Vitolo, Scientist at European Centre for Medium-Range Weather Forecasts (ECMWF), focussed her presentation on forest fire mapping. Although forest fires are an integral part of the natural Earth system dynamics, they are becoming more devastating and less predictable as anthropogenic climate change exacerbates their impacts. Longer and more extreme fire seasons, fed by rain-free days globally, are inducing significant variations in wildfire danger. In this talk, Claudia presented the complete wildfire-related data offering, tools for efficient data processing and visualisation as well as results from recent research projects. The European Centre for Medium-range Weather Forecasts (ECMWF) and Copernicus have created a wealth of datasets related to the forecasting of wildfire danger as well as the detection of wildfire events and related emissions in the atmosphere. These products contribute to the operational services provided by the Copernicus Emergency Management Service (CEMS) and the Copernicus Atmosphere Monitoring Service (CAMS) and consist of real time forecasts as well as historical datasets based on the ECMWF reanalysis database ERA5. Data is open and available through the Copernicus Climate Data Store (CDS), the European Commission's Global Wildfire Information System (GWIS) and European Forest Fire Information System (EFFIS).
|
10.5446/55164 (DOI)
|
[The beginning of the recording is largely unintelligible.] What is PAS? PAS is the Pluggable Authentication Service, and it basically is a product that is documented there — you can find the links, and you can find the repository on GitHub. [Unintelligible.] ...the permissions of the user, the user and group properties; and it also has some functions to search users and groups. PAS basically can fetch or set information from your site about, of course, users and groups. [A long unintelligible passage follows.] ...what you can take away is that all those plugins interact with each other, so you can have multiple plugins for extraction, as I said, that interact with multiple plugins for authentication.
[A long section of the recording is unintelligible here.] It was trying to use all the plugins — for example, for taking the user and group properties, for checking the permissions and the group membership and the roles — and PAS was abused. Why? Because, for example, there were many groups, such as the collective.workspace groups, which make sense just inside the workspace. [Unintelligible.] So we were taking one plugin out of the game. Of course this has its downsides, because you have a cron job: you have to be sure that the cron job runs, and that you will get the information immediately available. [Unintelligible.]
[Unintelligible.] ...I used the networking library to plot this one. Of course I could not plot it like the previous beautiful organisation tree, because the structure is more complex, and you would not see anything if I started to add labels. I actually had some issues showing this picture, because all the lines have an opacity of 1 percent — it means that they are 99 percent transparent. [Unintelligible.] We have to model the nodes and the edges in a way that we can query them. So we decided to use strings as the nodes, and the strings carry the principal type plus the UID. Possible principal types are: user profile, which means it is a user; workgroup, which means it is a regular group; a membrane group; collective...
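The transcription above is fragmentary, so the following is only a hedged sketch of the general idea it appears to describe: building a graph of principals (users and groups), encoding the principal type and UID in each node string, and drawing it with nearly transparent edges so that dense membership structures remain readable. The use of networkx is itself an assumption (the recording only says "the networking library"), and all identifiers below are made up for illustration.

import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()

# Principals as strings of the form "<principal_type>:<uid>".
users = [f"user_profile:user-{i}" for i in range(30)]
groups = ["workgroup:editors", "workgroup:reviewers", "membrane_group:staff"]
G.add_nodes_from(users)
G.add_nodes_from(groups)

# Membership edges (arbitrary toy pattern).
for i, user in enumerate(users):
    if i % 3 == 0:
        G.add_edge(user, "workgroup:editors")
    if i % 5 == 0:
        G.add_edge(user, "workgroup:reviewers")
    G.add_edge(user, "membrane_group:staff")

# Draw with almost fully transparent edges (1% opacity), as described,
# so a very dense membership graph is still readable.
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx_nodes(G, pos, node_size=20)
nx.draw_networkx_edges(G, pos, alpha=0.01)
plt.axis("off")
plt.savefig("principal_graph.png", dpi=150)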
[Unintelligible.] ...because it's too much. So maybe, instead of using tons of plugins, just write your own: do what you need, efficiently. Well, that depends on the knowledge of your stack. And that's it, so thank you. I would like to thank Syslab, which sponsored my stay here, and all of you, wonderful Plone community, and of course... [unintelligible].
[The closing exchange and audience questions are unintelligible in the recording.] Thank you. Thank you.
|
One of Plone's pillars is its security machinery, which is heavily dependent on PAS and its plugins. The most common use cases (e.g. managing users from external sources like LDAP/AD or an RDBMS, SSO with external services, transforming Plone objects into users or groups, ...) can be solved by just picking and configuring already existing plugins. Everything looks awesome, until one day you find out that the bunch of heterogeneous plugins you put together performs, speed-wise, very badly. Why does this happen? How can this be fixed? This talk will analyse one project where things went bad and were successfully fixed, to provide answers to these questions.
|
10.5446/55165 (DOI)
|
Well, you know, we are the Plone community. We do have our wordings and everything, and we are into several kinds of fiction — fantasy, science and technology. So we are the Plone collective: resistance is futile, assimilation is inevitable. And it's all about the community thing. But first, as you know, if I'm talking about movies and such, these are copyrighted works, so we have to acknowledge them and use them under fair-use conditions. I talk a lot, always, about community, about things where I don't want to sound like I'm reciting woke poetry, because saying it all again and again always starts to sound weird in some ears, and it can even make you feel bad. Because I don't know the answers — well, we know the answer is 42, but what was the question? So I don't know the real questions we should talk about. But I know one thing, and that is that language is important. Language is much, much more than just syntax and semantics. Words have special meanings. We hear it sometimes when we talk about non-violent communication and all that stuff: language is really, really important. I don't have an example in English, of course, but there is a good German one. There are the words eindeutig and eineindeutig. They sound very similar; maybe 99% of Germans don't even get the difference between those two words. And the best example of it is mathematical, because that is its origin. Eindeutig covers all four of those diagrams, because from one point you go to one defined point on the other side. Eineindeutig is only the bijective function in the upper left corner, because from that one point you also get back to the same point. And that's the thing about specific meanings. And if we're talking about language and perception, community is also about self-perception, because if we don't know what we mean, or what we are saying with our language, it makes a difference. So yeah, we play a lot with science fiction themes and quotes, especially with Star Trek and the Borg. So it's the collective; we still have this email address, borg.plone.org. So yeah, we are in some way affiliated with the idea of the Borg, of the hive, that one-minded set. But is the Borg really the thing we want to identify with? Let's talk about what Borg means in the Star Trek universe and what it is. The first problem is that Borg, or drones, don't have free will. Without freedom of choice there is no creativity — that's a problem. Assimilation: assimilation is a kind of enslavement of other creatures, other species, other cultures. Do we really assimilate someone? We've seen it a lot in recent communication: there was the Zope assimilation project. Did we enslave the Zope community? No. And it's also not the thing we're doing with Pylons. There are certainly better ways of doing it. It's also about adaptation. The Borg react to influences from the outside; they adapt to the energy coming in, but not in a proactive way, just in a reactive way. So a lot of drones get killed in the first place, until they have adapted to the shield frequencies and so on. And that's not how we do it. We are creative; we go and think about it and adapt in advance. And there is no capability for evolution, research and development within the Borg collective. But there's also another thing.
Some people think that because the Borg are presented, at least in Star Trek: The Next Generation, as the most successful, most powerful force besides the Q Continuum, it's very hard to resist them, very hard to be something stronger. But if you look into Star Trek, there is always a culture, a power, a species that has the possibility to fight the Borg. For example, Species 8472. It's Starfleet, the Voyager with some ideas. It's even the resistance we see at the Battle of Sector 001, where a coalition formed from former enemy cultures, enemy powers, stood together and fought the Borg. So the many species and powers in the Star Trek universe might be a better, alternative way of thinking about our community than the Borg. As I said, the Borg collective is something we might be attached to in some way, but there are a lot of other powers. And actually, for me, it's about the values we share as a community. If we look into the history of Star Trek, the founders, the humans, the Vulcans, the Andorians, the Tellarites, did not start as friends. They started as opponents, as enemies, but they came together for a larger idea, because of values, because of their opponents, the Romulans, in that sector. And if you look across all the series of Star Trek: the Klingons were the enemy, the Romulans were the enemy, the Cardassians, the Dominion. But at some point they all found each other and shared a larger idea, a larger point of view and meaning. And that applies to us too. We work together with the Zope community. We have a lot in common with the Pylons community; we share the same values. And with other content management systems like WordPress, Drupal, Joomla and so on, we work together in the CMS Garden project. They are not our enemies. We learn from them. We share the same mindset, because it's about the users. It's about creating fantastic websites for content consumers. So yes, it's all about values. And the most important values for me are respect and the spirit of innovation, and especially the free choice of membership. If you want to become a member of the Plone Foundation, you apply for it because you share the values. You're not enslaved. You're not assimilated. It's about the fact that you decide on your own that you want to be part of it. And that's why I said I'm really glad that the Zope Foundation decided to become part of this Plone family. I still really think that the Pylons community at some point will see that Plone is the right choice to be part of, and that we will live together. And for me, the Python world as a whole is like the Federation of Planets. There are so many different aspects, so many different things. You colonize one planet: it's the data science world, it's the operations staff, it's all the different areas of the Python community. And the people who explore the outside are like Starfleet. And we are part of Starfleet. It's peaceful exploration, no violent expansion. It's collaboration. It's friendship. And Plone for me is like the flagship, because all the good people want to be on that one ship. They all want to work together on it. And the thing about the flagship is that it's a mind share. We all share the same values, the same mission, and we join and work on that. And it's all about the crew. And that's the one thing I don't like about presenting these slides: there is no picture of the whole Enterprise crew. It's not only the bridge crew. It's all the people.
It's the people in the engine room. It's the people in sickbay. It's the people in the transporter room. And it's all of them together that make that ship the best out there. And it's the same with the Plone community: it's the great knowledge, experience, spirit of innovation and the good vibrations that keep me in this community and make me want to be part of it. And there's also the fact that people are different. We are not the Borg. We are individuals, very different individuals. We have different backgrounds, different knowledge, different beliefs, different ways of life and sexual orientations. And that doesn't matter at all. Some of us are anxious, some are in some way even aggressive sometimes. Hey, that doesn't matter. Everyone is important and everyone is a welcome member. So if you know Star Trek, you probably know Reginald Barclay. He's probably not the most comfortable person. He has anxiety, he has so many weird ideas, but he has spirit and he has knowledge that helps people. He was the one at Starfleet who got the communication with Voyager up. On the other side there's Shran, a very aggressive person, but he shared the mindset that helped form the Federation, and that's what it's all about. And the Vulcans have one symbol: infinite diversity in infinite combinations. That makes them stronger, and that's part of us. On the other hand, what I said about aggression: yes, you can sometimes have aggression, but it's the discipline inside you that keeps it at a level where you are accepted in the community. But if you took all those energies out of you, your aggression, your anxiety, the things that make you special, then you'd be like a drone. You'd be nothing special anymore. You wouldn't bring anything new. The point is that leadership comes from what's inside you, with all the spirit, all the energy. And it's also about change. Change does not happen on its own. Change, or things in general, are made to happen. And that is a kind of leadership. Leadership is not about being able to do everything on your own. It's about giving a vision. That's what I liked yesterday about Eric Bréhault's lightning talk, saying: everybody can submit a PLIP. You will always find someone, in Star Trek terms someone in the red of TOS or the yellow uniform of TNG, who will make it happen, because engineering always finds a way to get things done. They always find a way to make it happen. And change is the essential process of all existence. If we do not adapt to change, we stay the same. We don't evolve. But one strong thing I always experience, inside and outside communities, is anxiety about change. People don't like change. One of my personal heroes is Grace Hopper, and she always said: humans are allergic to change; they tend to say "we have always done it this way". You have to fight that. Take the anxiety away; it's not a thing for us. We want to build the future, and the best way to predict the future is to build it. And there's a very essential thing in there. Any intelligent fool can make things bigger, can make them more complex. But it takes a genius to reduce the complexity, to hide it, to go in the opposite direction. That's what I always feel here in this community. We have a lot of people, and that's what I always saw as the vision of Plone: to empower users, to hide the complexity, to make it easy to contribute content to a website, or to adapt it to a specific use case. And it's always time to improve the community or the technologies.
And there are things we should change. One of the highest priorities, I always say, is to accept that we can't do everything. We should prioritize. The first thing is: there is a lot we don't need to do on our own. The energy we are forced to invest in unnecessary action can't be used for product development. We do have infrastructure liabilities; if we can get rid of them, better, we can focus on better things. We don't need to maintain deprecated, unnecessary packages if we don't need them anymore. And one of the keys, in Star Trek as elsewhere, is standardization. If we document things, we can have a mindset of standardization. We share the same ideas, the same knowledge. And it's about explicit knowledge. We have to document our processes, because then people who are not that creative can also join in and do some of the maybe boring stuff, the stuff that nevertheless needs to be done. And it's all about a measurable quality we get as a community. There is something like the Capability Maturity Model Integration, which says every community, every technology, every company is at a certain level, and you rise with quality and move towards optimization. That's also what we do. But sometimes it feels like we are too busy to improve. Python 3, for example: we have to move, but will it really move our front end anywhere? We heard the complaints yesterday: Barceloneta looks the same as five years ago. Yes, Volto is done, or is in progress, but it isn't shipped now. Still, there are good examples: the admin and infrastructure team has finally switched off svn.plone.org, and everything that is needed is on GitHub. That's fantastic work. There are a lot of technologies, and as I thought about this talk I wondered: is this something I want to talk about specifically? I said it in the talk description: there are a lot of technologies I think we can get rid of, and there is a set of standard tools to move to. But I don't want to go into detail about that; I want to talk about other things. It's also like the hype cycle. Technologies have a hype cycle: you rise to the peak of inflated expectations, you go down into the trough of disillusionment, and technologies can die at any point along that curve. But most of the stuff we have worked on in Plone reaches the plateau of productivity, and the more things we have there, the more value we deliver with our product. The problem is, as always, that we are innovative. There is this chasm of innovation: if you can't move over it, there is no adoption in the wider community. But we can also adopt other technologies from the outside and turn them into something workable. So I always tend to say: people, you are afraid of change? No, don't panic. Just know where your towel is. Because every change is a new opportunity. A new door always opens, and it always helps you. So, from my point of view, back to leadership. And leadership is one special thing, because we, as the Plone community, as the Plone Foundation, have people. And sometimes, at least in the last two years, I feel there are special things we should care about. And I want to say how I felt in the last few years, because it's sometimes about commitment, obligation and duty. The needs of the many outweigh the needs of the few, or the one. People sacrifice themselves for the larger cause, for the larger point. And so it is in Star Trek.
But we do not have the possibility to bring a Spock back from the dead. So we need to take care of people. Open source and community is a long-term involvement. It's like a five-year mission out in deep space. It sometimes gets boring. It sometimes gets conflicted. But we want to bring the people back home, so that they still want to go on the next mission with us. Well, sometimes you get depressed, you get mad. I felt like hell, more like Marvin. It was really like this: six years on the board, I didn't use Plone for anything other than fun projects of my own; I never earned a living from Plone. I was vice president of the foundation for three years, and in the six years on the board I never missed a single meeting. Never. I really got back from the emergency room just in time for a meeting, because my ear was bleeding. During my parental leave in Japan, the board meetings were at 3 a.m. And even during the time when my sister and my mother died, I attended the meetings, because I felt it was not only an honor to be on the board, but also an obligation. If you serve, you do the work. That's sometimes not healthy. So sometimes there are signs that you need a break, and you should also look at the people around you. People sometimes need a break. They should take care of themselves and take one. Because if they do, and they feel that we care about them, they come back afterwards, when they feel better, when they're healthy again. So please, don't feel sad, don't get depressed when you don't feel the energy anymore. Take a break, take all the time you need. I hadn't seen Massimo for years, but he's here, and it's the same with others. I saw Maurizio Delmonte today, and some others will be there for the party tonight. And that's awesome. Yeah. Open source projects are voluntary work. A lot of voluntary work. And the problem is, if all your work is voluntary, there is no safety net. There's no leadership that says: now you go on holiday and relax; get away from the computer, do nothing, just get healthy again. So I want you, as a community, to take care of each other, because it's important to do that. I don't want to see people break down. That's one thing. And the other part of leadership is to give acknowledgement. And there are lots of people. The one thing that makes Plone so special is that there are no rock stars. It's the whole community that works together and makes things happen. So I want to thank all the people who worked on the Python 3 effort to make it happen for Plone. I want to thank all the people working on Volto, because that's the new vision, the new thing to do. I want to thank all the people on the framework team for discussing all the new ideas of what Plone could become. I want to thank the admin and infrastructure team for doing their work, because that's not the product Plone, it's all the things around it, and they are necessary. Thank you, thank you, thank you. And I want to thank the security team; in six years on the board there was a lot. And a special thank you also to the people on the Plone Foundation Board of Directors, because the picture in the background says it right: I have been, and always shall be, your friend. It's a matter of where we come together and see each other. Those are the people I have served with, and I put together a statistic of how many people have been on the board and how many terms they served. And it's a fantastic thing to see.
43 people have served the foundation, served the community, with all that work. And all their names are on the slide. Thank you all for your service. And board work can sometimes be hard. You have to discuss things that are really, really weird. So open source is an obligation, and it can sometimes be a really hard obligation. Who of you has heard about the Trogius event? Who of you has thought about that? Think about that: this could be a legal issue. There is, in the German criminal code, a thing called computer sabotage. If somebody harms someone else's deployment by removing a package from a package manager so that the deployment can't be rebuilt, that is sabotage. That's something you can get five or even ten years in jail for. So if you start doing something like that, it could end badly. There is an obligation you have taken on by contributing to open source. And I want to ask you all: please don't make the mistake, if you're feeling depressed, feeling weird, aggressive, because someone is complaining at you, "there is an error in your code, please fix it for me, for free", and everything. Yes, you have the right to say: no, I don't want to. Or: you have to pay me for that. But you should never get yourself into trouble by deleting your own code, deleting the packages you have released on a public package manager. Communicate about it. Communicate that the packages are unmaintained. Document that there won't be any fixes. But don't get yourself into trouble. And for me, yes, I said I need a break. So it's time for me to head off to new shores. But there's one thing I've heard and seen in the community, and Laurence Rowe said it perfectly in Sorrento one year: you can take the man out of Plone, but you can't take Plone out of the man. So live long and prosper. It's been a really nice talk for me. Okay, a few questions. No, no, it's okay. It's up to you. Erstmal, Klingon, plong, Plona. So we basically agreed on that. There are two things. First question: of course, naming is hard. And I can barely remember when the Collective came and we had borg and so on. At a certain point, I remember a package that was, I don't remember if it was collective.muf or collective "what the fuck", WTF, something, where the community came up and said: okay, that's too much and we need to change. So the first question is: where does the foundation step in and try to correct these kinds of mistakes, which could be enforced by the code of conduct? That's one. And the second question is: you talked about burnout, and we saw that happening many, many times in the community. We had many people that basically at some point just stopped. How can we, again as the foundation, help people to not get into that state? About naming and everything: yes, it's always the awkward situation that we come up with weird names that are not the nicest thing on the block, and the easiest way is talking with people, making them understand what that means. And I can't remember a single point where we, as the community or the foundation board, had to force someone to change a name. If they understand what you're talking about, they acknowledge it and change it on their own. And that's a nice thing, because it's not a conflict. You don't have to overrule someone.
The other thing is giving people the possibility of a break, the possibility of acknowledgement, of a helping hand, talking with them, even just saying: you're welcome to join the party, you don't need to give a talk, you don't need to contribute code or anything. That helps. And for me, in Barcelona especially, it was the people, the old hands of the community, the people who came back because they still feel the energy, being there, and that's what it's all about. Plone is a family. It's much, much more than a product. It's a family, and a family can come together just by being friendly to each other, going for a drink, going for a talk, going for a walk together, being there, being open, inviting people in. That's the thing. It's taking care of each other, and that's the most important thing. Yes, Plone is a family, and a lot of people come to the Plone Conference just to meet friends they've made over the years. I'm the same myself. And now, with Zope under the Plone Foundation and Guillotina under the Plone Foundation, maybe at some point Pyramid under the Plone Foundation, should the Plone Foundation be renamed to something else, so it doesn't feel like Plone the software is the first-class citizen and everybody else is a second-class citizen in the foundation, or in the community and the conference? Actually, it was funny for me yesterday, because on the first day of the conference I really ended up sad and angry, and yesterday I had some kind of enlightenment, or a feeling of enlightenment. Plone, for me especially, was always much, much more a vision than the product, and seeing yesterday the description by Victor and Timo and so on: the vision actually now lives more in Volto and those things, and all the stuff we have been talking about when we say what Plone is, is actually more the Zope and CMF layer and so on. So I think, in some way, in one or two years the name Plone for the product will be gone, because Volto will have replaced it in that role. But the foundation: it's the vision behind Plone, and the vision is to empower users, to give them a way to enable things, and I think that's what we all want to do. It doesn't matter whether it's with Zope, with Guillotina, with Pylons, with Pyramid or something else. That's the thing, and it's the same for other groups. The name of the foundation can stay the same even when the product is no longer the most dominant thing, and I think that will happen in the next few years. Does that answer your question? Thank you. Anybody else? Then thank you all for your contribution to Plone and to this fantastic family. Thank you.
|
The Plone community is an outstanding group of people with great knowledge, experience and innovation spirit. A collective of people that wants to be cutting edge needs to be open and diverse. New ideas and new technologies need to be adopted and assimilated by the developer collective in order to improve. This talk is about cultural changes within a developer community, about technologies (Zope, Pyramid, Archetypes, Dexterity, Travis, ...) over time, and about tools (tox, pytest, AppVeyor, plonecli, bobtemplates.plone, ...) we need to integrate into our processes to stay alive. And, as always, a lot of wise quotes and jokes from science fiction and scientists.
|
10.5446/55167 (DOI)
|
Hi, I'm Andy Leeb, and I'm going to talk to you about Guillotina and Kafka. It's a story about an add-on. I'll talk a little bit about the reasons why we developed it, where I work and who I am, and we'll get into the use case for why we built this software. I'll also talk a little bit about the software we built and the company I work for, and then a little bit about Guillotina and Kafka themselves, to give you an overview. So, I work at Onna. I'm a platform engineer and back-end Python developer. I've worked with Plone for quite a while now, probably around 10 years, and I've served on the Plone Foundation board as well. Anyway, enough about me. At Onna we use some Plone technologies; in particular we use Guillotina as one of the linchpins of our software stack, and we have built out a fairly significant, complex platform that we use in legal services. I'm going to describe a little of what we do at Onna, which will help give a general idea of why we ended up having to work with Kafka. Onna in a nutshell: we gather cloud data sources that you're authorized to see and make them searchable. It sounds very simple, but we have a fairly deep microservices stack. We gather data from cloud sources like G Suite and Gmail and Dropbox and Slack and so on; we leverage the public APIs of those services with integration agents that pull the data in automatically as long-running processes, and when we get the data it's analyzed and indexed through a series of processing steps. All of this allows you to search your data in one place. Typically the software is used by in-house corporate legal teams and law firms, and they use it for discovery, for legal holds and for risk management. They might implement things like data loss prevention, or make sure that corporate data isn't leaking out, but there are other use cases as well that our clients tell us about as time goes on. So we have a platform built on microservices. As I said, we have these integration agents that go out and pull the data in, we do document processing, we do search indexing, we have a front-end application, Guillotina itself is our REST API server, and we have a series of Guillotina-based applications as well. In addition we also have things like aiohttp apps, RabbitMQ, Kafka of course, Elasticsearch, task queues, asyncio; we have a lot of stuff going on, and it's all orchestrated through Kubernetes. We put things in Docker containers and use Kubernetes to deploy them. So we have a pretty significant, deep microservices stack, and across the board our API understands JSON, so we're fairly typical in that sense. We use Guillotina as our REST API server. I think everyone is fairly familiar with it, so I'm probably not going to spend too much more time on it, but as you know it's built with asyncio, it's the Zope Component Architecture reimagined, so it's familiar to anyone with a Zope or Plone background. We use it to store our data, basically, and we validate our data with interfaces and JSON definitions of things, and we can talk more about Guillotina later if we need to.
But probably a little bit more germane to this is a short Kafka overview. We recently introduced Kafka into our software stack and have had a bit of a learning curve with it. Kafka is essentially a pub/sub message broker, and it guarantees delivery of messages in a specific order. LinkedIn wrote it and then donated it to the Apache Foundation; they used it essentially to record website activity, and I think in the process of doing that more use cases were realized. It's implemented in Java and Scala, and it is a pub/sub-based messaging system used to exchange data between processes, applications and servers. Its use cases include, but are definitely not limited to, a distributed commit log and website activity tracking, which is what it was originally built for, but it can also be used for log aggregation, and there are even more use cases; we use it as a buffer, for example. It runs in a cluster, and the cluster itself requires a manager called ZooKeeper, which keeps track of the status of the Kafka cluster and the nodes in it and makes sure that the internal things going on in Kafka, particularly topics and partitions and so on, are kept in order. So it's a fairly complex piece of software, but it's really robust and really performant, and essentially, once you configure it and tune it, it generally does not require a whole lot of hands-on attention, but getting to that point does take quite a bit of effort. A little bit more about Kafka, some terms that will probably come in handy: a producer is what sends messages to a topic in Kafka, and a consumer receives messages from that topic and then does something with them, and that's really what we're focused on. A topic is a named entity within Kafka that stores messages, and a little bit smaller than that is a topic partition: topics are split into partitions, which allows you to split data across multiple brokers, across a distributed cluster essentially. So what does all this really mean? Essentially what happens is that you have a producer that writes to a topic in the Kafka cluster, and then a consumer reads from that and does something with it, and this is all done in a guaranteed order. Kafka keeps track of the offset of the last thing a consumer read, and you can set a retention policy on a topic, so if you hold on to that data and something happens to whatever the consumer was doing with it, you can go back in time, to offset zero, and replay everything, and you never lose any data. That's essentially one of the main reasons why it's so robust and so great. Anyway, the data exchanged through Kafka, through the producer and consumer interactions, is typically called messages; we use JSON, but really any data format works.
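As a rough illustration of that producer/consumer/topic flow, here is a minimal sketch using aiokafka, the asyncio Kafka client that the add-on discussed later is built on. The broker address, topic name and consumer group below are made-up placeholders, and the JSON (de)serialization simply mirrors the message format mentioned above; this is only a sketch, not the add-on's actual API.

```python
import asyncio
import json

from aiokafka import AIOKafkaConsumer, AIOKafkaProducer

BOOTSTRAP = "localhost:9092"   # placeholder broker address
TOPIC = "user-activity"        # placeholder topic name


async def produce(event: dict) -> None:
    """Send one JSON message to the topic."""
    producer = AIOKafkaProducer(
        bootstrap_servers=BOOTSTRAP,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    await producer.start()
    try:
        # Messages written to the same partition are read back in order.
        await producer.send_and_wait(TOPIC, event)
    finally:
        await producer.stop()


async def consume() -> None:
    """Read messages from the topic and 'do something' with each one."""
    consumer = AIOKafkaConsumer(
        TOPIC,
        bootstrap_servers=BOOTSTRAP,
        group_id="activity-indexer",     # the group tracks its own offsets
        auto_offset_reset="earliest",    # start from the oldest retained message
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    await consumer.start()
    try:
        async for msg in consumer:
            print(msg.topic, msg.partition, msg.offset, msg.value)
    finally:
        await consumer.stop()


if __name__ == "__main__":
    asyncio.run(consume())
```

Because offsets are tracked per consumer group, starting a second copy of this consumer with the same group_id makes Kafka split the topic's partitions between the two, which is what makes the scaling of consumers discussed later in the talk relatively painless.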
The format can be whatever is easiest for you, whatever is easiest for you to validate; like I say, for us it was JSON, because that's what our API was already doing. So why did we need Kafka, if we had all these other services running and things seemed to be going along pretty well? Was it just because we like added complexity, we like things that are more complicated, we want to sound cool, more knobs? All those things are probably true, but really what we needed is the reliability of each message being guaranteed to be delivered. And, as I said before, you can replay these messages if you need to, so you have a guarantee, even beyond the guarantee the software already provides, that you will always have that data and won't lose anything. Another big reason is that we realized it was fairly easy to overwhelm our Elasticsearch with too many requests. When we're gathering data, we're pushing a lot of stuff through the processing pipeline, and it all ends up in an index in Elasticsearch; if you give it too much to do, it might silently drop things. So this seemed to be a way to guarantee that we would always have data that would always be correct, and that we wouldn't have any syncing problems between our object representation and our search engine. Another reason we decided to do it was user activity, which is one of the express use cases for Kafka. We wanted not just to keep track of what people are doing, but to have an audit log that would allow our clients to go back through and know what activities happened on the platform while they were using it and what the users were doing. This is actually really great for us because, since we're dealing a lot with lawyers, they have specific requirements for how long the data needs to be kept, so we can have a specific retention policy for the topic the data gets written to and read from. Another thing we found out, a little bit through trial and error but more through observation, is that sometimes not all the logs we expected were showing up in our logging platform, and we wanted to ensure that all application logs were written somewhere we could guarantee they would be. It wasn't like we were losing a lot of stuff, but we realized we could be doing a little bit better, so we also implemented Kafka as a logging buffer. So those are some of the use cases for why we decided to implement Kafka. Another thing that's kind of nice about Kafka is that it's platform agnostic, so you can pretty easily write a simple producer and consumer and not have to worry about what the other services are written in, but we thought it would probably be better for us to leverage Guillotina's ability to have add-ons. So that's what we did, and we built it using the fairly standard template for Guillotina add-ons. As we built this we really wanted to accomplish a couple of things, and one thing that was really good is that we could leverage Guillotina and the Zope Component Architecture, which gave us a common vocabulary and code reuse.
The other main reason was that we realized the majority of the time was going to be spent building producers and consumers, and we wanted to alleviate some of the burden of having to create them and then integrate them into every single package that might need them. We figured that since we already have Guillotina deployed in our software stack, we could leverage that and have this add-on that would allow us to easily create these producers and consumers. So the add-on provides two APIs to us, one for the consumer and one for the producer, and, like I say, we mostly leverage it for multiple consumers. It uses ZCA concepts like utilities for singletons and adapters for adding functionality, and interfaces. It obviously integrates with Guillotina, because it's a Guillotina add-on. One of the other advantages is that it's based on aiokafka, so it's async native, and it plays well with Kubernetes, which is very important for us. Essentially, the other thing we really needed to make sure we could do was scale up the number of consumers, because as the load on our processes increased and we had more requests, we wanted to make sure we could handle all that data, which was essentially the whole point of introducing Kafka into our stack. Before we wrote that add-on we were doing one-off producers and consumers. That's okay, but it's a little too much maintenance, and furthermore it's kind of hard to scale, because then you have to keep putting more of those same consumer pods up in your Kubernetes cluster. So one of the things we did, as the evolution of this went on, is add the ability to have multiple consumers in a pod. What that meant is that we could have a single package with multiple consumers in it, deploy it into our Kubernetes setup, and then scale that up if we needed to. It's good in the sense that it allows us to not have a lot of individual consumer packages, which means added maintenance and more pods; we can have multiple consumers in a single pod and scale that up and down as needed. The other thing we noticed about this whole approach is that most of the consumers essentially do the same thing: they read from a topic and then put the data into the next component, and a lot of the time that's putting it into a specific part of Elasticsearch. So multiple consumers in a single package is actually very powerful for us. Did I mention that it makes it much easier to scale? Because that was really something we found we needed. So anyway, what did we learn from all of this? We learned that it's helpful to have a way to easily create producers and consumers in a predictable way, and the add-on also provided just enough framework that we could easily customize what the consumer does. We did learn, of course, that scaling is important, because it always is.
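The talk doesn't show the add-on's own registration API, so here is only a hypothetical sketch of the "multiple consumers in one pod" idea using plain asyncio and aiokafka: several consumers share one event loop in a single process, and Kubernetes replicas of that process scale them out. Topic names, group ids and the handler functions are all made up for illustration.

```python
import asyncio
import json

from aiokafka import AIOKafkaConsumer


async def index_document(payload: dict) -> None:
    """Hypothetical handler: push a document into Elasticsearch."""
    ...


async def record_activity(payload: dict) -> None:
    """Hypothetical handler: append an entry to the audit log store."""
    ...


async def run_consumer(topic: str, group: str, handler) -> None:
    consumer = AIOKafkaConsumer(
        topic,
        bootstrap_servers="kafka:9092",
        group_id=group,
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    await consumer.start()
    try:
        async for msg in consumer:
            await handler(msg.value)
    finally:
        await consumer.stop()


async def main() -> None:
    # Several consumers share one event loop, so a single pod can serve many
    # topics; adding replicas of this pod scales each consumer group out.
    await asyncio.gather(
        run_consumer("documents", "es-indexer", index_document),
        run_consumer("user-activity", "audit-log", record_activity),
    )


if __name__ == "__main__":
    asyncio.run(main())
```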
But one even bigger lesson we learned from all of this is that being able to create a producer and a consumer easily, and being able to customize them, is only a small part of it. A significant amount of time was spent tuning Kafka itself, making sure it was behaving the way we wanted in a predictable fashion, that it wasn't taking too many resources, and that it was tuned in a way that could easily handle all the load. So that was a big part of it. It wasn't necessarily part of the add-on; it just so happened that it all came together around the same time. One other thing we learned is that we could probably do a better implementation that isn't a Guillotina add-on, maybe just a library that we could integrate in one place in our stack instead of having it as an add-on. So we did look at what the next steps would be. We looked at a package called Faust, which does essentially a similar thing but is a lot more than what we need; looking at the way they did their Python and Kafka integration, we could probably do it with decorators instead of having to have an add-on. And, like I say, we could build it as a library instead of an add-on, but not to worry, we're probably going to continue working with the add-on. So that's pretty much everything we did with Guillotina and Kafka, in a very quick overview. Do we have any questions? Hi, it was nice to see this integration. As far as I understood, you collect the data inside of Guillotina, right? Yeah. So did you think about using a CDC solution to capture the changes from the database, or why is it better to do it this way for your use case? Well, we found that this was a little bit better than what we had before, because the way our software works is that we store the data and the object representation in Guillotina and then sync that with Elasticsearch, and the job that did that was kind of process-intensive, so we figured this is probably a little bit better than having to run a background process. This happens a little closer to real time, and we felt the search results were improving a little bit with it as well. So, what is Kafka, and you just can... Related to that: with Elasticsearch, for example, it's really important to batch updates and reduce the amount of concurrent updates. Elasticsearch can't handle all the load that Guillotina is pushing into it. But I did have a question for you: did you want to talk about monitoring Kafka and some of the things we had to do in order to see where we were on the Kafka offset in terms of processing? Yeah, I can touch on that a little bit. Full disclosure: Nathan works with me and has done quite a bit of the engineering on a lot of this Kafka work as well. We did have to monitor what Kafka was doing in order to get some idea of the performance we expected from it.
Essentially, we needed better integration into our main application in order to have that visibility. We weren't able to get some of this, like figuring out where the offset was, from the add-on itself; that had to be done as a customization to the producer or the consumer in the application code. So that's a bit about the customization that we allowed with the framework we had. It's a little bit of a non-answer, but that's essentially the level of detail I can provide at this point. Yes, I find it very cool. Again, about Kafka: I went to a local meetup where I come from a few weeks ago, and there I got a whole presentation on Kafka for banking accounts, and one of the things that came up there is that if you want to replay all the messages that have come into a topic and go back, Kafka has to store the messages. So when I hear you say "look, we want to go back to zero", then you now have two and a half places where you store it. You store it in Guillotina, in a database, then it goes to Kafka, and Kafka has to have a copy of the thing. And then, okay, Elasticsearch is not really storage, but it's a huge index, of course. So now you effectively double the storage you need, right? Yeah, essentially you do. And it depends on how long you want to retain those messages. In some cases you may want forever, and in some cases you may only want a minute, so it all depends. And since we consider the source of truth to be our Guillotina application, that's what's more important for us for a lot of these things. There are certain clever things you can do, depending on what your data is and the messages you send to Kafka, to trigger whatever needs to be re-indexed. But backing the offset up to zero and replaying everything would be done if there was a catastrophe, like if you lost an index, or suppose you wanted to do a migration to a new version of Elasticsearch or something like that and didn't want to try to migrate between versions: you could just take the data from day zero and run it in there. Those are more the use cases we had in mind, but you are correct that it requires additional storage, particularly if you want to retain the data for a long period of time.
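The replay and offset questions above can also be illustrated with plain aiokafka calls: comparing the consumer's current position with the partition's end offset gives the processing lag, and seeking back to the beginning replays every message still retained on the topic, the "rebuild the index after a catastrophe or migration" case. This is a rough sketch with made-up names, not the monitoring code used in the project.

```python
import asyncio
import json

from aiokafka import AIOKafkaConsumer
from aiokafka.structs import TopicPartition


async def lag_and_replay(topic: str, replay: bool = False) -> None:
    tp = TopicPartition(topic, 0)   # look at partition 0 for illustration
    consumer = AIOKafkaConsumer(
        bootstrap_servers="kafka:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    await consumer.start()
    try:
        consumer.assign([tp])       # manual assignment, no consumer group
        end = (await consumer.end_offsets([tp]))[tp]
        pos = await consumer.position(tp)
        print(f"{topic}[0]: position={pos} end={end} lag={end - pos}")

        if replay:
            # Rewind to the oldest retained offset and re-process everything,
            # e.g. to rebuild an Elasticsearch index from scratch.
            await consumer.seek_to_beginning(tp)
            async for msg in consumer:
                ...                 # re-index msg.value here
    finally:
        await consumer.stop()


if __name__ == "__main__":
    asyncio.run(lag_and_replay("documents", replay=False))
```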
|
This talk will describe integrating Apache Kafka with Guillotina in an addon.
|
10.5446/55168 (DOI)
|
Exactly. So, already in the earlier talks this morning, GatsbyJS was mentioned by name a few times. With this talk I will try to describe what it really is, what it does, what it doesn't do, and how it works best with Plone, in short. Well, I was already introduced. I've been doing Python and Plone for some time, and for the last few years I've been mentoring Google Summer of Code projects helping to develop the Gatsby plugin for Plone, which is part of the topic of this talk. I'm working at the University of Jyväskylä, in Central Finland, up there, and in this picture it's always warm and sunny up there. At least in this picture. It's a nice place. So, to get inside the topic and understand what this is about, I will show you a pre-recorded demo of GatsbyJS with Plone in action. On the left side of the screen we have the traditional Plone user interface with the default theme, and on the right side we have a completely decoupled single-page ReactJS application, with themed content from Plone, which happens to be built with GatsbyJS. It's completely decoupled, but it uses the content from Plone, through the Plone REST API, using the GatsbyJS architecture and our plugin. Under the hood, on the right side, there is a complete ReactJS application, a very simple landing page, and because it's code, it can react to the content it gets from Plone and adjust how it builds that landing page. We'll see that in a moment. So we can update and manage content in Plone as we are used to, and that content will then be updated on the GatsbyJS site. A small disclaimer: in this demo we see live content updates from Plone in the Gatsby development server, and that's still not out of the box with Plone. I will have another talk on Friday about how we can make that happen. Normally, to see the changes from Plone reflected in the Gatsby development server we see on the right, you would need to restart it. But of course, if you're really just building a theme, you usually already have perfectly fine content in Plone, and you only need to refresh when you change the code, the stylesheets and the theme you are designing. That works out of the box already. Now, in this demo we just copied and pasted a lot of pages into a single folder, and the theme happens to render those contents in a nice landing-page style. If it's just a page, it shows the page with a big title and the description of the content; if it sees an image, it shows it as a big image; and for a folder of things, it reads the folder as a row and maps the pages inside the folder as columns. A big difference from what we are used to in theming Plone: with GatsbyJS it's not practical to make a what-you-see-is-what-you-get experience inside Plone. You use Plone mostly as an out-of-the-box experience with the Plone UI, and then your themer needs to figure out a nice way to map Plone content onto your theme. This is one example: there's one folder in Plone called "landing", that folder may contain different kinds of Plone content types, and the order of the content items and their content decide how the theme will look in the end. This example shows folders and pages and images, just the very basic Plone content you have out of the box.
So, just out of the box, it's possible to use Plone to manage the content, edit the content, share the content, and then map a folder to a GatsbyJS project that the designer can use. Because the content is completely decoupled, all accessed through the REST API of Plone, the designer of that theme does not need to have access to Plone at all, only HTTP access to the REST API. So the themers, content editors and designers can work completely independently; they can be different people. One of the features here: I added one new field, because in Plone you can customize content types out of the box, through the web, in the browser. I added one special field where you can add your custom Bootstrap CSS classes, and those will then be applied in the theme. And, as I said, because the theme is code, it can react to changes in the content. When I added a title to the image, the theme has a condition: if this image happens to have a title, then show the title. Of course, because we can customize types, we could also add a checkbox for that kind of feature. So that was one example of using GatsbyJS with Plone in action. At the end we cleared the folder and everything disappeared. Okay, so let's take a few steps back and think about what GatsbyJS, what we just saw, really is. GatsbyJS is, first of all, the name of the framework, or the tool, that you can use to make React-based web apps, websites or web applications. They see themselves as some kind of React-based website and application generator, a compiler. They try to be very user-friendly: you just install GatsbyJS, generate a project template, run one command and start creating your new React-based web app or site. They try to hide all the difficulties of compiling and configuring React and the development environment behind it, just like WallBestarter does for you. Besides that, it takes content from many sources. It's a mash-up environment that uses GraphQL as a common language to mash up content from various sources. You can integrate many sources, many CMSs or other sources, into your Gatsby project, and the GatsbyJS framework normalizes this data so that you can use one GraphQL language for querying that data and using it in your application or web page. And maybe the reason why we started using Gatsby in the first place is the end result: what Gatsby produces is very fast, static web pages, HTML and CSS files and the like, that you can easily deploy and that run completely separately from your back-end server. So the output of that landing page can be served completely independently from Plone. Your Plone server can be behind a firewall or completely shut down, and that page still runs. And Gatsby can also be used thanks to the complete, free ecosystem it has behind it. Okay, so who owns Gatsby? Gatsby started as a hobby open source project, but then the author managed to get funding for the project and it became a venture-capital-funded startup. Right now it's still in a growing phase, it has a lot of money, so everything is very happy, and they really are happy to contribute to open source, to help open source projects, and to help the community bring more plugins and more integrations to the GatsbyJS framework.
How this develops further we will see in the future, but the way they are now planning and starting to make money is by building services that help make the whole concept work. They're building cloud-based services where you can host those sites, host preview development versions of those sites, and so on. So, in summary, the good things in GatsbyJS: the end results are, as they promise, very fast websites, blazingly fast. They have very good documentation, it's very easy to get started, maybe one of the best out there. The development experience is very nice: you get instant feedback on the changes you make to the site, it's very easy to set up, and everything seems to just work. That's the experience. They also claim to be very committed to accessibility, so even the decisions in the framework have been made with accessibility in mind. For example, for the sites you build they use a router called Reach Router, made, I guess, by the original author of React Router; it's his latest routing framework, and it's focused on accessibility. Because the resulting page works like a single-page web app, it doesn't really load a new address when you click a link, but still the URL changes and it makes the proper announcements accessibility-wise, so if you're using a screen reader, the screen reader will know that you have changed the page properly. That also shows in the documentation: they cover accessibility very well in their developer documentation. And of course it's React-ecosystem compatible, so everything you know about React can be applied in GatsbyJS projects. Of course, as in any JavaScript project, you may have version conflicts and not all libraries work together, but basically it's compatible. Now, the not-so-good parts: I wouldn't yet suggest Gatsby for very large projects. What we saw in the beginning was a demo of the development server you run when working on a Gatsby project, which gets instant updates, but when you deploy that site it generates a static version, and that generation gets slower and slower depending on the speed of your back end and the number of pages. If you have thousands of pages, it can take anything from minutes to half an hour or even more. So it becomes a bit impractical if you have very fast-changing content on a very, very large site. Also, because there's a community around it, the plugins vary in quality, depending on who made them and when. Gatsby tries to host most of the ecosystem plugins in their own monorepository on GitHub, which works in the sense that they have automated releases for all the plugins, so they are version-wise compatible: whenever you install GatsbyJS, you get all the plugins built from the monorepo with the correct versions. But on the other hand, it's very hard to track issues and contributors for a single plugin, and that makes it harder to decide whether a plugin is a good one or not. You need to try it and see if it works the way you wish it to. And the ugly part: do you really need 2000 npm packages just to produce your HTML page at the end? That's kind of up to you, and it depends on your page. If it's really just static HTML, then maybe it does not make sense. If it helps that you can also use interactive React components on that page, then it might pay off.
What I need to mention is that you cannot really modify the end result by hand. Even though it's just static HTML and CSS, there's no sane way to update it without GatsbyJS. And the danger is that if you have a very long-running project, over years, you may need to make sure you still have the original development environment, with the original versions, available when you later make updates. So, in summary, I wouldn't recommend this for everyone. You need to be at least okay with developing in React. If you don't like React, you won't like GatsbyJS. If you don't know React but are okay with learning it, then the GatsbyJS documentation is a very good place to start learning React, and the tools are also very, very good tools for learning React. Maybe even if you're just learning React, it makes sense to build your React app in GatsbyJS, because it gives you easily understandable routing for your React app. Then, GatsbyJS is easiest if your content is completely public. Since the output is static HTML pages, CSS and things like that, it's very hard to keep them private; they have to be published somewhere. So anything that should be private in a GatsbyJS site should be fetched dynamically at runtime, and then you are writing a ReactJS app instead of just making a website. Similarly, it works best if the users only read the site. Maybe you use some commenting service for adding comments, but if the users need a back-channel to your back-end applications, then, especially with Plone, it might be easier to just use Volto for that use case. But if all this is okay, you like React or want to learn it, your content is public, and users mostly just read the site, then GatsbyJS might be a good solution for you. Now I need to introduce a theme: if Gatsby is not for everyone, who is it for, and what is the problem they try to solve? They claim that the CMS market is changing. Our dream has been that we can manage everything in our single CMS. Plone, for example, is a very good CMS for integrating stuff from elsewhere, a very good framework for that. Our dream, for example for an intranet CMS, is that we have one CMS and everything is connected to it. Everything is managed under the CMS, all the services are connected to the CMS, we use them through the CMS, and their content is fed into the CMS. The CMS is at the center. But the reality doesn't quite match that. Already, you might admit it: we have Plone, but users are also managing content in many other places than Plone, and we just display that content through Plone. So Plone is a rather dumb middleman displaying that content, in some context of course. And some of those other places are also CMSs, so we kind of have CMSs alongside Plone. GatsbyJS claims that you could simplify the stack, because all those other services are moving fast: new services come up and old ones go away, and you may not be in charge of deciding which services you can use. For example, at the university it easily happens that the communications department chooses some tool for deciding on and maintaining some kind of content, events for example, and so those would no longer be authored in Plone. And GatsbyJS claims that it would be easier and cheaper to integrate everything through GatsbyJS.
So you would not need a back-end developer to build that integration inside Plone; instead you would just need a good front-end developer to pull all that content together with GatsbyJS. That's what they are claiming, and that's what they mean by the content mesh. We are no longer in a world of single, master CMSs, but in the world of the content mesh: the content is all around us, and the CMS is just one place where we have content. We need a solution to get that content from all those sources and display it the way we like, and maybe in many different places. So that's what they mean by "content mesh" if you see it in their marketing materials. I mentioned that an important part of GatsbyJS is its ecosystem. They need the ecosystem to build new integrations for new sources, and that ecosystem is mostly made of plugins. The core GatsbyJS is a very minimal core experience, and all the real functionality comes through the plugins. I would say there are three main categories of plugins. The most important one is maybe the source plugins. For example, gatsby-source-plone is the source plugin that integrates the Plone CMS as a source for GatsbyJS, so that you can use the GatsbyJS GraphQL experience for querying stuff from Plone. Another source is the GatsbyJS filesystem source, which sources files, images or content from the local file system where your theme is developed and publishes them with GatsbyJS. So why would anyone want to use GraphQL to query stuff that comes directly from the file system? Why don't you just link to the file on the file system and leave it there? Well, that's where the second class of plugins, called transformers, comes in. When content has been sourced into GatsbyJS, into so-called GatsbyJS content nodes (Plone pages, Markdown pages, maybe other CMS sources, and images), transformer plugins can enrich that content. For example, for images there are transformers for making scaled versions of those images and maybe adding some other effects to them. And they don't do that for all images: they provide a GraphQL interface so that they only scale and transform the images you are really using, in the way you specify in your GraphQL queries. That's what transformers do. And then the newest category of plugins is called themes, and I will tell you more about themes very soon. A fun example would be a gallery: now you have images, you have scales of the images, so a theme can be a pre-configured website, or sub-site, for a gallery, displaying all the images you configure it with. These categories are not enforced, though: GatsbyJS provides APIs, and any plugin can use any of the APIs it provides. For example, in gatsby-source-plone we source image nodes from Plone image content, and then we have a transformer that actually fetches the real image files for those pages as children of those nodes. So all plugins are technically equal, but they like you to name them properly so that they are easy to find, and in the best case one plugin does only one thing, so that you compose those plugins at the end. A little more about theme plugins, as I promised. I would call theme plugins maybe an experiment in maintainable boilerplate. You install a theme, and these are new plugins, so they vary a lot in quality.
There are theme plugins that bring you a complete site from some CMS, from Plone or some other CMS, so you just install a theme and you have a complete site, a very opinionated site, opinionated by the designer of that plugin. But they can also provide only single pages or single sub-sites, single folders on your site, depending on what the features of that plugin are. And of course they provide configuration for the features they offer, and of course that varies by the designer of the plugin. There are two ways to configure them. One is the options they offer. But then GatsbyJS provides a feature they call shadowing, which means that in your local project you can override any file from the theme plugin with your own version, and that version will be used. If you were in the Volto talk or training, it's the same idea as what Volto calls the jbot approach: you just replace files. But they have gone further with that, so that you can even import the original React component and then wrap it in your own version. So you can even extend the components from the theme plugin, override them, and use your version. So I guess that is Gatsby's goal. In Plone we had a term like "just a bunch of add-ons", and I think that sounds like the Gatsby story: in the best case you just install a lot of Gatsby add-ons, your theme, and then you add some more themes or tiles to it, and then you have a site with your content from all the sources you need. They are targeting agencies with this approach, so that it will be an easy tool to build beautiful, very fast sites from content anywhere. To give you a teaser, I made one example Gatsby theme myself. The idea was just a fancy cover flow for some pages, and this is what my configuration looked like: I let users configure the folders for that cover flow, and then there is the GraphQL query used to fetch the content for that cover flow from Plone. And that example shows one nice feature of GraphQL called aliasing: allCoverPages is the structure I require in my plugin, but with GraphQL aliases it can get the data from any Gatsby source and transform it into the correct shape for my plugin. Okay, so now we are finally up to seeing how this all applies to Plone, and I would like to emphasize some core features of Plone that work very well with GatsbyJS. First, Plone is very good at hierarchical content management. In one Plone site it would be trivial to host content for any number of individual sites you could build with GatsbyJS: you just create a folder and give the required users permission to edit content in that folder, and then the Plone source plugin for Gatsby can expose only the content from that folder for the designer to build the final site. Even better, you don't need to depend on the Plone default content types, pages, images and so on, but you can extend those content types with the fields you need, or even define completely new content types with more structured data, to give more options to the content authors and the theme designers.
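To make the custom-fields point concrete, here is a minimal Dexterity schema sketch with one such extra field; the interface and field names are hypothetical, not the actual demo code, and the type would still need to be wired up through its FTI or a behavior:

    from plone.supermodel import model
    from zope import schema


    class ICustomPage(model.Schema):
        """A page-like type extended with one extra string field."""

        css_class = schema.TextLine(
            title=u'CSS class',
            description=u'Utility class names applied to this element '
                        u'when the page is rendered by the front end.',
            required=False,
        )

A theme or source plugin can then expose that field through GraphQL like any other content field.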
One example in my demo: for the images and pages I added a CSS field that was just a string, but it allowed the content author to add their favorite class names from the Bootstrap utilities to change the look of the element. And Plone has a workflow system, so you can have both drafts and public pages inside the same folders, and only when a page is published will it be shown on the final site. You can also use Plone publication dates effectively with GatsbyJS, so that today's version of the site will not show the pages that will be released only tomorrow. I think this has always been hard to explain: people don't understand that if you publish a page in Plone and make it effective only tomorrow, they are scared that they can still access the address of that page without logging in, because the page is already public; the effective date only keeps the page out of the sitemap, search results and navigation. But if the site is created with GatsbyJS, then that page will really be included in the final site only after its effective date. And of course none of this would be possible without the awesome REST API we have in Plone. So these are my favorite use cases for GatsbyJS and Plone. As I said, you might have your beautiful Volto-powered Plone CMS; you have spent hours and hours building your favorite Volto experience that matches your brand. And then you get one task, one order, for a completely different-looking site. It might be very expensive to build that theme inside your existing setup, and you would not like to bloat your Volto theme with the designs for that one site. So you could just let your users author the content in a specific folder inside your Volto Plone site, and then make a Gatsby spin-off site from that folder. Also, you might already have a React app somewhere, or you might have made that React app with GatsbyJS; then with GatsbyJS you could use Plone as the CMS back end for those areas where you need user-authored content. For example, we are publishing our current curricula at the university with GatsbyJS, and it gets its content not from Plone but from our course system. But in the future we also want to add some custom information for those courses, images and some marketing information, things like that. So in the future we can fetch those from Plone and then merge the data from our course system with the data from Plone to make the final site. And then there is something I call content configuration management: maybe you don't author the actual content in Plone at all. For example, you want to show which lectures are in which rooms currently. You have systems where you can get that information, but you need a way to configure what is shown. We have what I would call a digital signage solution, where a hall has a display which should show only the information for the rooms around that hall. So in Plone we could have a folder for that hall, and in that folder we could have slides, pages acting as slides, and on every slide we could define which room should be shown on that slide. So Plone would not hold the information about which lectures are in which rooms, but Plone would decide which rooms are shown on the display of that hall.
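All of this flows over plone.restapi: any Plone URL answers with JSON when asked for it, which is what a source plugin builds on. A minimal sketch of fetching a public folder's items (the site URL and folder are placeholders, and gatsby-source-plone's real implementation is more involved):

    import requests

    # Hypothetical public Plone folder; works on any site with plone.restapi enabled.
    FOLDER_URL = 'https://example.com/Plone/hall-a'

    response = requests.get(
        FOLDER_URL,
        headers={'Accept': 'application/json'},   # turns the URL into a REST endpoint
        params={'fullobjects': '1'},              # expand children instead of summaries
    )
    response.raise_for_status()
    folder = response.json()

    for item in folder.get('items', []):
        # Each item carries its type, URL and fields, ready to become a GraphQL node.
        print(item['@type'], item['@id'], item.get('title'))

A source plugin essentially walks the folder tree with requests like this and turns the results into nodes.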
And then when the Gatsby site is built, the Gatsby plugins combine the information: they get the configuration from Plone, fetch the real information from the original source, and combine the two. And what are the main challenges of using GatsbyJS with Plone? You lose the what-you-see-is-what-you-get experience: editors use the Plone interface for managing the content, and it will look completely different in the end. Another one is that you need some system to build those Gatsby sites at some interval, so you practically need your IT department to have CI infrastructure for building those sites. And just by accident, these are the challenges that the Gatsby company is building its cloud solutions for: they provide a solution called Gatsby Preview, where you can see a preview of the site while developing it, and they also provide hosting services and build management services. So what's the status of gatsby-source-plone right now? We have incremental content sourcing, meaning that when you run Gatsby builds incrementally, one build after another, it only fetches the new content from Plone, which speeds up the build a little bit. As I said many times, you can choose the folder used for the site. And as we saw in the demo, we can also enrich the content with images, and even with inline images and links to files. That is a little bit more difficult: in the Volto talk we saw that they just dump all the HTML inside the Volto page. In Gatsby we experimented, and it has been working quite well so far, with actually deconstructing the HTML and making it a real React component, so that images inside that HTML can be replaced with special Gatsby images that load progressively, and links with Gatsby links that have the features of Gatsby routing. And as we saw in the demo, we are ready for live updates of your development server once Plone is ready for that. What is still missing: out of the box we only support the main content types, with a convention for rich text, images and files. We only look for rich text in fields named 'text', images in fields named 'image' and files in fields named 'file'; making that better is only a matter of someone requiring it and supporting the development. Also, in Gatsby you can use the same plugin multiple times, and that kind of works for Plone: you can have multiple Plone plugin instances sourcing content from different Plone sites, but they currently all go into the same namespace and can override each other. We know how to fix that; it would just be a lot of work, so it can be done if required but needs some time. And how I would recommend getting started with GatsbyJS and Plone is simply to follow the standard GatsbyJS documentation; don't go directly to what we have done with Plone.
Just start from the official GatsbyJS documentation and follow their starters. Once you are familiar with it and you think, okay, I like it, I would like to use it more, then you install the Plone plugin and maybe configure it at first against some existing Plone site, like the demo offered by kitconcept. And once you start GatsbyJS against a Plone server, you will have the option to open the so-called GraphiQL explorer to see what kind of data you can get from that site, and if you have followed the GatsbyJS tutorials you should by now be familiar enough to look at what kind of data you can find from Plone and use it in your GatsbyJS project. I think that would be the easiest way to get started with it. And once you are more familiar with it, then you can go searching for more resources and see what we have done for Plone, and if you get stuck, don't try to solve everything by yourself: ask for help. There are some known difficult steps to work around where you can get stuck, so ask us for help; we will try to help as best we can. These slides will be online with the talk, so I added a few more resources you can look at for more information. So, I have left some time for questions, as proposed, and while you are thinking about questions I'll advertise two more talks. Tomorrow my latest Google Summer of Code student, Alok Kumar, is presenting the work he did this summer on so-called Gatsby Preview for Plone. We made Plone mostly compatible with that Gatsby Preview solution; it's not compatible out of the box, but if someone really needs it, it can now be made compatible very easily. It has those features for getting live updates from Plone while you are developing a Gatsby site, so if you were able to run a Gatsby development server somewhere public, you would have something like a live preview of your site. And then on Friday I will have another, very technical talk where I show the back end that is required to make that feature work. So now we have time for questions: just raise your hand and I'll come running with the microphone. In the long run, like a few years from now, could you imagine that at some point we merge the approach GatsbyJS takes and what we're doing with Volto? I don't see any good reason for that, because the use cases are so different. Gatsby clearly goes for decoupled static websites, and making them dependent on a back end like Volto does would be counterproductive, I think. So I don't see any clear reason for it; it would be a lot of work, it might be fun, but I'm not quite sure. In the long run I trust Volto more than I trust GatsbyJS. Does Gatsby have any sort of dynamic sources, like plugins that get stuff not statically? GatsbyJS itself doesn't, but because it's compatible with the React ecosystem, and what you're building is in the end just a React application, you can build features that fetch live updates from other live sources, and there is even documentation for that, and the possible live-update systems have documentation for it too. One of my favorites is Hasura (hasura.io); it has a very good blog post on how to integrate it with Gatsby, and there are even screencasts on how to get live updates into Gatsby from it. Hasura is a GraphQL layer for Postgres, but it
is simply a very good one. So with just your data in PostgreSQL and Hasura on top of it, you have a high-quality GraphQL interface for your data that you can then easily use with Gatsby through the plugin called gatsby-source-graphql. Any more questions? Thank you for your great work on this. My question is: if everything is static, it shouldn't be too difficult to make a search function for it, or is it? Yes, it's possible, and there are plugins for implementing search in a few different ways; they are just not popular enough that I added them as a separate category. There are two approaches. One of them ships a static index with your site, a little bit like Sphinx documentation does. The other ones use some search service: while you're building your Gatsby site, the build sends the index to that service, and the static site then uses the live service while you are browsing. Of course you might need to pay for such a service, while the static one, where you have the index with your site, is free; but if you have a big index it will slow down the loading of your page. I see a hand there. One question: you said you only generate content which is effective, a bit like the built-in search in Plone, since the effective date can be in the future; but there's probably no content-editing event which can trigger the build. How do you do that, do you run it once every hour or whatever? Yes, otherwise you will never be notified of the change, so you need to have scheduled builds, and that's a problem: if you have a huge site with thousands of pages, then you can no longer build, say, once per hour or once every five minutes or things like that. And as I said, you need an infrastructure for making those builds. Our infrastructure uses GitLab for our version management; GitLab comes with its own CI and also with GitLab Pages, where you can publish those pages. Now you get JSON data from Plone content; would it also be feasible to get complete HTML from, for instance, the Plone site? I'm thinking basically of documentation purposes, to say "this is how this page looks", and you fetch not only the content but also the CSS and all the surrounding things from Plone or another service. Yeah, of course it's possible. It probably needs something else; I don't think it makes sense for us to build it inside our plugin, but maybe there is already a plugin that fetches complete pages and then serves that HTML through GraphQL. Of course that would only work in the same way if the site is public. Anyone else? Okay, then I would like to ask for a big round of applause for
|
CMS used to be the one ring to rule them all. Everything used to be integrated with the CMS, and the CMS used to be the one system in control of the others. The rise of the Content Mesh turns all this around: all of a sudden the CMS is just one service among others. Today it is the CMS that should integrate with the others, and the only other thing that matters is how well the CMS does its core job of managing your content. In this talk we present how Plone CMS fits into the GatsbyJS Content Mesh ecosystem, how the Plone GatsbyJS source plugin works, why we implemented WebSocket support in Plone to make the plugin even better, and where Plone really shines with GatsbyJS.
|
10.5446/55170 (DOI)
|
Hello, my name is Christina Baumgartner and I'm a partner at Klein & Partner KG. I'm a member of the BlueDynamics Alliance, and since a couple of months I'm an official member of the Plone Foundation. But since my first Plone conference, which was 2004 in Vienna, I have been wandering through the community. And yes, I'm the organizer of the Alpine City Sprint, and just a short advertisement: the next Alpine City Sprint will be from the 11th to the 14th of February, and we will spend one day at the Aqua Dome spa and have a very, very nice time there. So: relax with Plone. I know there are not many people here, because Timo is talking about the future of Plone. But as Eric told us in his keynote, Plone is the community, and I want to show you today the next generation of the Plone community. I have my interns with me. This is Ilvi; she's 16 years old and she attends the Ferrari school, focused on media design. And this is Niels; he's 18 years old and he attends the HTL, which in Austria is a technical high school, and he's focused on electronics and technical computer science. Both have to complete an internship as a part of their education, and that's why they turned to us. And so I came up with the idea to connect interns and the Plone community. Niels and Ilvi have developed an online platform, and this platform is designed to help Plone companies find interns, and to give interns an opportunity to find jobs in the community. And of course they made it with Plone. It was the first time for them, and they had only four weeks. So, Ilvi. So, our goal was to keep this platform simple and easy, so that everybody could understand what we are doing and what we want to change with this platform. To say it short and simple: this website gives students and interns, and every kind of company in the community, the chance to create a profile and apply for an internship, or to look for an intern. Since I am a student myself, I know that it can be very hard to find internships, and I can imagine that it is just as hard to find a proper intern. That's why we decided to create this website: to ease the process of finding the right place and the right person. The general idea behind it could actually be compared to a dating website. This might sound stupid at first, but it's the same idea: you create a profile yourself and show your interests, your skills and your personal information to make a good impression on somebody else. Students have the chance to submit an application and choose what kind of job they want to do, and the companies can put up jobs and tasks that the students or interns would have to do, so the students know what job they would be doing. The languages that the interns need can also be added, because you have to know which languages you need to speak. On our homepage you can take a look at all of that and see if you find something interesting for yourself. Here you can see our general idea of the website; it changed a bit during our internship, but these were the first ideas of how it would look. The page is still rather simple, because we only had four weeks of time, and that also makes it very easy to understand and clear to handle. The website consists of our logo, obviously, a login and a few subsites. You can see the content in English or German; other languages may follow, but not yet. First you can see some general information at the top, then you have the chance to register yourself and submit an application, and after that some best practices follow.
And later on you can see the already submitted applications from companies and students. It was a project we created together, I must say: everybody still had his or her own job, but we made all the big decisions together, and only after that did we each go on to do our own tasks. My job was mostly content management and design. I worked with LESS, Photoshop and InDesign, and later on Illustrator. I also designed all the graphics, so I could say I was basically the front-end team. At first we started to think about the design and the mood board, of course, because that's how you start every proper website. After that I started doing the graphics for our website, and only then did we start wireframing the homepage, and after that we did the screen design and the subpages. I also interviewed some companies and some students or ex-students, so thank you for taking the time for that. Here you can see some of the graphics I did. I also wrote most of the content in German and English, though Niels helped me too; it's mostly about reaching a wider range of users, so that more companies and more students can see it and understand it. As I said before, more languages will follow. My personal goal was to make sure that this won't be a boring website and that everybody would like to look at it, especially young people, and that's why I decided to go with some colors, not only black and white or black and blue and things like that, but to add a little yellow as a contrast. I don't know if some of you have noticed, but the blue you can see in there is the exact blue of the Plone logo, and all the other colors are either a darker version of it or the contrasting color. I also tried to make it appealing for companies, so not too overwhelming and too colorful; I purposely looked at what we wanted to do and what we wanted to reach with this website, and that's why I tried to put some technical elements in there, to make it look less childish and more like real work. Some of the graphics you can see there are geometric, which I also thought would look more technical; on the other hand you can see the natural kind of look, which should speak to the interns. So that's basically what I did, and now we're going to move on to Niels. So my job was basically to handle things on the back end, and at the start, when we all discussed what we really wanted to integrate into our website, I hardly believed that I could ever program stuff like this, all the components that we needed. But after some introduction to Plone and after reading some documentation, things really started to seem logical and not as complex as I had thought. The four-week internship was such a big learning process for me: I learned how to get a project started, I learned what to pay attention to when getting into the development process, and I learned that the duties have to be separated into small parts which you then do one after the other. After we finalized the idea, we split up the work into front end and back end, and that's where my journey starts. As I said, I first had to do some preparation, like getting into Plone and Python. So I attended a quick Python crash course and worked through the many, many Plone documentation pages, and after that, before creating content for our website, I had to install a Linux subsystem on my Windows device to get things running.
I had to install Python and Plone, with Mosaic for placing tiles, to get things done more easily and to create content more easily. I also had to install the bobtemplates; that was quite difficult, but I really got it done, and meanwhile I also wrote documentation on how to install a Linux subsystem on Windows and put it on GitHub, and it really made me feel a little bit proud to post my first help for the community. That was a nice success for me. My main tasks were to program the tiles and their functionality. I had to do some environment configuration, like integrating other components into the ZCML files and things like that. I had to create content from the back end, I created the views that were later edited by Ilvi, and the most difficult part was to create the portlet at the bottom of our page with the slider and the imprint. Later, when we had finished working out the back end and the front end, we could really start to connect those separate pieces of work: we added some additional code, we made some adjustments, and after that our homepage was really ready to manage the matches between the companies and the students. So I would say the four-week project was a full success. So you have seen we have the bridge, and I know the bridge has a tradition in the Plone community; Ilvi has seen the bridge from the Plone conference in San Francisco, it was 2011 I think, and she was also really impressed by the design of this conference here in Ferrara. But why interns? I've been at a lot of conferences and sprints, and there is one point that is always discussed: we need young blood, we need newcomers, we need more women. And yes, you can make your own Plone babies, but I think that will not be enough, and it takes too much time. I think you know my son Tarven; he is now 17, he was once at the conference in Bristol, where he was part of the staff, and he also attends a technical high school focused on computer science, and he also has to do an internship. There are in Austria more than forty-five schools focused on computer science and on media design, and there are more than two hundred college programs for computer science, and you can imagine that in the whole of Europe, and in the whole world, there are many more schools and colleges like this, and they all need internships. Now I will talk about an intern you know very well. He was at Klein & Partner KG, I think ten years ago; he was a student from the college of Honeo; he will have his own talk tomorrow; and he has been a member of the framework team. I'm talking about Hannes Raghavan. Ilvi of course made an interview with him. I have known Hannes for more than ten years, and he said something that really surprised me. He said: I lost my fear of the open source community because I learned a lot about it during the internship. The internship was my ticket to the open source world. Many talented young boys and girls do not find a way into the open source community, maybe they are a little bit shy, so this is a wonderful way to come in. Another best practice comes from Yerke from Interactive, who also had an interview with Ilvi. He told us that he has had interns since 2000, and he told us that it is a win-win situation for the interns and the company as well, because both of them get the chance to get to know each other and see if they fit together. This also matters if companies want to bring in more young people. And, Yerke told us, interns do not have very much experience.
From their papers alone you will not see what skills they have, but a short internship helps you to find out whether it works or not. So outside of our ivory tower there are many young people looking for a possibility just to get a look into the community, and an internship helps: it has helped a lot of young people who have completed one, but it also helps all those who just want to try. And now Ilvi and Niels will tell us about their own experience as interns. Because they created the portal in Plone for the first time, we can expect interesting feedback about the successes and difficulties they had. So I have to be honest with you: at first I didn't really know anything about Plone companies or how a whole project really works. I didn't know about CMSs, I didn't know about internships, I just had to work it out along the way. Which was actually great, because that meant I had something new every day. I also must say that I am not very experienced in building a website or working with a CMS; I have only been learning the simple stuff in school. So it was a bit difficult the first days, but after that it wasn't a problem anymore. I like trying out new things, and that's why the Plone internship was a great way to learn something new, and I surely wasn't disappointed. I think it was a great experience that I got to know a different way of building websites and also got to experience the whole work in progress, from an idea to a functional website; that's something you really don't learn in school, because there you have a fixed design and fixed content already given and you just have to rebuild the whole thing. As I said before, I was mostly responsible for the design. What was so new to me was that I had the chance to do something myself and not only do the stuff that the teachers told me to do and just try to copy it. I was very comfortable using Plone, and now I really do miss it in school, because there we work with a much less advanced program and I feel like I'm back in the Middle Ages. Another point I really enjoyed about my internship were all the stories, the background knowledge and the community that I got to know, and also the different styles of homepages and websites and how you could alter and design them so that they would look great. I must say I definitely improved my skills in web design and in designing new pictures and graphics, and I also learned how to work. If I had to say it in one sentence, I would say that in general this internship showed me how to work properly. So all in all I can say my experience at the internship couldn't have been better. Working with Plone is such fun, because after some introduction it's rather easy to understand, I would say. And I understood everything, also because I was part of every step of our project from the beginning to the end. I learned how to get a project done, I learned what to do when there is a problem that has to be solved, I learned how to approach a project, and everything like that. At the beginning I had some difficulties, like understanding which code impacts which other code, and when I program something, where it will really be displayed on the website. But after I got into it there were no more problems. At this point I really, really want to thank Christina and Jens for offering us such an enriching internship, so please clap your hands for them both. They let us work by ourselves, they let us learn by ourselves, but when we needed help they were immediately there to help us. So I'm really thankful.
And yeah, attending an internship like this is a real honor for an 18-year-old student like me, I would say. What I hope will happen is that more pupils who have the same motivation as me to learn this kind of thing will also get the same kind of internship, the same opportunity to get that practical education. That's very important in my opinion. There are some difficulties in getting into internships like this, and that's the reason why I really loved doing this project, why I really loved doing my work and getting that experience at the internship. And that's why I think our project is so important for the Plone community, and especially for this conference right now. Thank you. I brought Ilvi and Niels to this conference to show you the skills and the talent of young people. But I realized in the last days, when I've been here with Ilvi and Niels, that they also got in contact with open source, with the community. I talked with them and realized that they had not really known before what open source means and what community means. So I talked with Ilvi: yes, she knows "community", because every shop has its own community, but she didn't know that the idea of community comes out of the world of open source. And they also knew Open Access, they know Creative Commons, but they didn't know that these come from the idea of open source. So I know, Mike, you do a very good job of explaining what open source is, because I think that in the minds of many people open source means there are nerds sitting in their cellar making software for nobody. I love our community and I know you love the community too, your big family. But we are living in an ivory tower, and we keep making this ivory tower higher and bigger, and we wait for someone to come and see how wonderful our ivory tower is. But it's time to go out and bring people to us, inside the ivory tower. Maybe they will stay, maybe they will become a member of our big family, and maybe they will one day be a new member of the framework team. So what can the community offer to young people? There are more than 300 companies and organizations worldwide, companies of all sizes, agencies and universities. We are well connected and we can offer them experience all around the world: experience at universities, at NGOs and so on. My vision is to use our network to offer young people internships in our community, to have a well-organized program for internships, and to make this program known among young people, because then we pick them up where they need help. I know it's a long way, and I know we are on unfamiliar territory; that's why I think we should go step by step. And I hope I can attract more people inside the community to this idea, and also outside the community. What I expect from the community is to open the door to young people: give them a chance to get to know Plone and the community; help to improve the website and translate it; of course, take interns; research what the internship conditions are in the different countries; or just sponsor it; or share the best-practice information. Because there is best-practice experience: we have had interns, Stefan has had them, there are a lot of interns already in the community, and we can exchange the information and the experience. Stefan has the idea to make a user guide for other companies on how to manage working with interns.
And what I will do next is maybe try to set up a European project, because I think the European Union loves it when companies and organizations from different European countries do something together to exchange young people and support their education. So I think it's time to leave our ivory tower. And to build the bridge. Thank you. What do you think about this idea? Any questions? Comments? So, congratulations, because you look like really smart people. What was your most scary moment during this internship? I'm asking both the students and Christina and Jens. And what was the key moment that made you take a step forward? A very shocking moment for me was when we created the website, when we had the basic code, and I opened the folder and saw this list of different code files; that really shocked me, because I didn't know which thing was what and what did what, so I first had to learn about that. That was a shocking moment. And where I really realized that I was making progress: the first thing that comes to my mind is when I programmed my first tile, then opened the Mosaic layout and displayed it on the website. That's where I thought, ah yes, it really works, and we can build the website. That was a moment of success. For me, I think it was the first days, because I didn't know much about Plone, as I said before, and I didn't know anything about the whole Linux subsystem thing; it was completely new, and Jens and Niels had to help me all the time because I just couldn't figure it out. And the moment I realized that this really worked was just this morning, when Niels told me that the website is online and I could look at it. I immediately sent it to my friends and to my sister, and that was the moment when I realized: okay, this really works. Okay, for me it was wonderful to see young people building their own homepage. We came together with just a sheet of paper on the table and said: okay, the idea is to make such an online portal, to make a bridge to the community. And I wanted them both to think about it from their perspective, from the perspective of young people: young people have to create it, young people have to see it, young people have to use it. And I was really astonished from the first moment. I asked Niels and Ilvi: what do you want on this homepage? You're young, how should it look, what do you think young people like, what kind of design? If I had made the design there would be more flowers on it and so on, and then it was: no, no, no, that's not modern enough. So it was very good, because it's their baby, their homepage, and it's for their generation. Yeah. So, great talk, by the way. Thank you very much. And for me the most scary moment was: we are a small company, so we don't have many computers around, and they brought their own computers, and then the most scary moment was, fuck, it's Windows on this thing, and how do we do Plone development? But I had a customer who also does Windows development, so I knew there's a Linux subsystem for it and it works. The scary thing was to get into it and become a Windows-subsystem expert at some point. But finally it works, and maybe at the sprints we will finish the documentation, and maybe Niels will be at the sprints too, and then we may get over this scary moment for other companies and people. Thanks. Any other question?
You are the owners of a company, a smaller company, and my experience is that the smaller the company is, the more it feels like a risk to start taking interns, or even to onboard a new coworker and explain things. How, as owners of the company, do you set expectations and plan the amount of time you're going to spend helping the interns onboard, next to your other work and all the other ambitions and community things you want to do, because you've only got 15 hours in a day? How do you handle that? I think that's an important part if you want more companies to take interns on board, explain Plone to them and help them find what they want to do and what they want to continue with in life. We have to help those especially smaller businesses and give them tools for how to manage that. Can you comment on how you're doing that now? Okay. I think it's the same as when you have a new person in your company. Yes, of course you have to show them things, but then, and we have been there, they do the work mostly on their own. In the morning we come together and say: okay, what are you doing, is everything okay? And while we are all working, we look together at what they have done. And I was really surprised, because they have wonderful skills and talent. Okay, they didn't work on our big projects, but I can think about that for maybe next year, because now I know what they know and what their skills are. To add to it: this time we planned to take some time to build such a website, and also to do it for the community, to give something back to the community for this internship. But if you have a company and you don't want to give something back to the community, you can just do some smaller projects or even real projects with the interns. After one or two weeks, if you plan it correctly, they can do real work for your company and also contribute something. I think that's possible, but you have to choose wisely what work they do: don't expect too much at once, look at the people, where they are, what they can do and so on. And I think it helps if you have people who are able to teach themselves things; it helps a lot, but you don't know that beforehand. But maybe you should ask Stefan, he has also had a lot of interns. There! Sorry. Maybe we can continue this discussion during an open space and share some knowledge about it. We experienced difficulties because there are different kinds of internships, and when you have a person for only one week in your company, it's almost not possible to teach them Plone. That's one of the things we have learned, and that's why I'm talking about a little guide: it would help especially small companies or even freelancers. It's possible to hand a guide to a person when the expected outcome of the guide is written down; it's like a training, but more about bringing people to Python and towards Plone. And this really helps when you have a plan for what they should do during a week, during a month, or during an internship that takes, for example, six months; that's the big difference. And the work of thinking about what they should do and in what order: a small company or a one-person company cannot do all that planning. That's what we experienced, and that's why I had the idea of putting together a guide and sharing the knowledge we gained from that. So, any... okay.
Last question, time is over. First of all, thank you a lot, that was a great talk, and I also like this project a lot. I like it because the idea of giving people the opportunity to join the community and to learn about open source programming is great, and also the way you do it, that you really had four weeks with a specific project, and as you described very well, you could learn a lot. That's great for young people, I think, and it's exactly the way we do it. Last year I talked at the Plone conference about gamification; we developed a gamification platform, and our interns and apprentices did that, and our grown-up developers were not really involved in it. So it was basically the same approach, although they had a bit more time than you had. So that was really great work here. One more question: did you see any best-practice examples from other open source communities, how they try to onboard people into their communities, whether they do it the same way or have other ideas? I've just seen you and Stefan, but I would hope that more people from the Plone community come to us and tell us about their experience. I think there are more internships, maybe, but the problem is that if every company does it on its own, it will not become something for the community. That is why I started this project: we are normally so well connected, and it should be possible for us to come together, exchange experience, as Stefan said, and make a program out of it, and make it better known among pupils and young people. Then, when we have a wonderful internship program, we can go to colleges, to universities, we can go to students and tell them about it, and they will say: what's Plone? Wow, you can do a wonderful internship there. And that is also wonderful marketing for Plone. Thanks, the time is over, thanks for the talks.
|
Two interns at Klein & Partner and I are presenting a first incarnation of an online job portal connecting Plone companies and young interns from VET schools and colleges. Plone companies can use the portal to offer internships; pupils can use the portal to find an internship and also get connected to the community. Creating the portal was the first contact with Plone for the young women and men, so we can expect valuable feedback about successes and difficulties.
|
10.5446/55171 (DOI)
|
Okay, hello everyone, I'm Dave and I'm going to do a short talk about the package ftw.upgrade, which is a package we have been developing at 4teamwork and which really simplifies writing and running Generic Setup Plone upgrade steps for us. My hope for this talk is that I can spread the word and tell you a bit about it, and maybe you will also start using it, which leads me to my first question: who is already using it? Who knows about it? Yeah, my co-workers, okay. So maybe the rest of you will learn something and maybe you will also start. First I'm going to do a little bit of a teaser: I'm going to show you how we simplify the registration of upgrades, then I'm going to do a short live demo which will show you how to list and run upgrades, and then I'm going to go into the details a bit more. So the first thing I would like to show you is this. Everyone of you that writes upgrades will know it: it's a bit complex, there's a lot of repetition in it, and with that repetition there's a lot of error potential. You have to repeat the destination ID a lot; maybe you have a naming convention which always includes your source ID and your destination ID; maybe you repeat the title a lot, and so on. We have replaced this with a simple upgrade-step directory directive which will auto-discover the rest for you if you set it up correctly. So you have just this simple ZCML, and the rest is discovered and registered for you from the file system when the Plone instance starts. We also got rid of the separate version ID that you otherwise have to update in the metadata.xml of your Generic Setup profile. Now I'm going to switch to the command line to show you how to list and run upgrades via the console script we provide. I'm using a tool with pre-scripted commands, that's why I can type so fast. I'm going to start my Plone instance, then I'm going to use the bin/upgrade command which is installed once you have ftw.upgrade. I'm going to list all the upgrades; I tell it to pick the first Plone site, because there's only one, and to list the upgrades for me. As you can see, our current Plone site has two upgrades: one is the site configuration upgrade and one is another upgrade step which I will run later. Now I'm going to show you how you can use the install command to install one single upgrade. I type bin/upgrade again, I use the install command, I tell it to pick the first Plone site, and then I give it the upgrade ID, which is an ID that we generate and use to address single upgrades. I execute the command, it does some logging, and it tells me that the installation has been successful and the upgrade has been run for me. I will go into more detail later and show you how you can register upgrades and how you can create and run upgrades. So I'm going to switch back. But first let's take a step back. In the following sections of my talk I'm going to first talk a bit more about why we actually write upgrades and how we do upgrades at 4teamwork. I'm going to show you how you can list and run upgrades in the browser as well. I'm going to show you the upgrade directory.
I'm going to show you a little more about the install script, and then I'm going to talk a bit about the upgrade step helpers which we provide. So why do we actually write upgrades, and how do we do upgrades at 4teamwork? First let me mention that we have a lot of deployments, and we have a lot of deployments of the exact same package. We really need to be able to know what state a deployment is in, and we need to guarantee that many deployments are in the same state, and that's why we rely heavily on scripting and not doing stuff through the web. Because we need to guarantee and to know that a deployment is in the correct state, we want to alter content and configuration consistently over many deployments, we want to guarantee that the same upgrade path is taken for all deployments, and we want to prevent undocumented changes that happen through the web. If you allow people to edit your Plone sites through the web, one year later you will no longer know what happened or what was going on; if you rely on upgrades only, you can always guarantee that the state of the deployment is actually defined in code, in the policy. So the key word is: we want to prevent configuration drift. You could have drift between your code and what was actually installed on the deployment, and you could have drift between multiple deployments, and we want to eliminate that possibility and always have the current state documented in code and in the policy. It also helps us to apply a minimal set of changes. I know that for smaller sites people sometimes just re-run the whole Generic Setup profile and hope for the best, but if you have many upgrades that need to run at once, it really helps to see each single change as its own upgrade. And of course it helps with automation: if you can automate your upgrades, you will spend less time operating your Plone sites manually. So let me quickly talk about how ftw.upgrade actually lists and runs upgrades. First, it is able to list and propose the necessary upgrades for multiple packages at once, which is really useful; I will give a quick demo of this in the browser later. It basically first collects all the upgrades that are necessary for your Plone site and then topologically sorts them based on the packages' dependencies, so it guarantees that the dependencies of your package are upgraded before your own upgrades are installed. You can run it in the browser, which I will show you now, and you can also run it via the console script provided by the package, which I have already given you a teaser of. So let's quickly switch to the browser and show the upgrade view that is provided. Maybe one quick word about my setup: I have created a simple Plone site, I have created a single policy package with one single content type, nothing else, and I have already run one upgrade which creates 1000 or so instances of that content type in my Plone site. Here you can see my current Plone instance. Usually you would install upgrades via the add-ons section; you can see the upgrade which I have not run yet listed here, but I'm not going to do that. ftw.upgrade also provides an upgrades view in the Plone site setup, which looks like this. You can see it also lists the upgrades which have already been run on the Plone site. Just for the sake of illustration I'm going to rerun the upgrade which I installed on the command line before, and you can see...
I've zoomed in a bit so it's not readable, but it basically outputs the same thing as on the command line. You can see some progress, you can see that it installed the upgrade, and it will actually stream what it's doing when you have longer-running upgrades. Okay. So next is the upgrade directory. I've already shown you in the intro how you register an upgrade directory; now I'm going to quickly describe what it does. It is basically an entry point for auto-discovering all of your upgrade steps, and you can create new upgrades automatically in that upgrade directory. Every upgrade is assigned a timestamp which is used as the upgrade ID, so you don't have to increase the version number manually; this is inspired by other frameworks, like Rails for example, which use a similar approach. Each of those generated directories will contain the Python code for your upgrade step handler, and it is also a Generic Setup profile itself. So you can put Python code for your upgrade in there, and you can also put XML changes which are then automatically imported for you, which, as I've already shown you, really helps to reduce the repetition needed to create new upgrades. It also really helps to reduce merge conflicts when you are developing with multiple people on the same project. Imagine two developers create an upgrade at the same time: with plain Plone you will get conflicts, because they will both assign the same next version ID. If you use ftw.upgrade, it creates a timestamp-based ID, and it is very unlikely that both developers create their upgrade at the exact same time, so the IDs will be different and you won't have any merge conflicts. You also don't need to update the version ID in metadata.xml, which is something people often forget. Let me mention one more thing: there's a console script to create the upgrades, which I have not shown you yet. It also allows you to reorder the upgrade steps if you need to do so for some reason, and you can use it on one Plone site or on all Plone sites if you want. It's really flexible and it has a great help system, so in case you start using it I would recommend checking that out. ftw.upgrade also provides you with progress logging: there are helper methods that you can use to log progress while you're iterating over Plone objects when you have to migrate them. That's really helpful if your upgrades are long-running; we have larger sites with a lot of content, and our upgrades sometimes take the whole night, a really long time, many content objects. It also logs the progress of Plone indexing, or collective.indexing if you're still on an older Plone version. I'm just going to give you a quick example of how that works, to clarify a bit more. I'm going to switch back to the command line and create an upgrade via the command line, using the 'upgrade create' command. I'm going to title my upgrade 'demo upgrade step' and tell it the path to my upgrade directory. It has created something on the file system for me, so let's have a quick look at it. Well, maybe first let me show you an example of upgrades on the file system which I've already created. This is my editor; you can see the upgrade step registration in here. No, where is it? Here. That's the upgrade step directory.
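For orientation, a freshly scaffolded upgrade step is just a timestamped directory containing an upgrade.py next to the Generic Setup XML files; a minimal sketch of the generated handler, where the directory name, class name and docstring are made up, while the UpgradeStep base class and install_upgrade_profile() follow the ftw.upgrade documentation:

    # upgrades/20191024120000_demo_upgrade_step/upgrade.py  (hypothetical path)
    from ftw.upgrade import UpgradeStep


    class DemoUpgradeStep(UpgradeStep):
        """Demo upgrade step."""

        def __call__(self):
            # Imports any Generic Setup XML placed next to this file
            # (types.xml, registry.xml, ...) as this step's profile.
            self.install_upgrade_profile()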
In my upgrade step directory, here on the left side, you can see four upgrade steps which I have registered and created. The first one I have already run: it basically installed my Plone content type, a talk content type which I've stolen from the Plone training. You can see this is the upgrade step directory; it contains a Python module, upgrade.py, which contains the Python code. In that case I just apply the corresponding upgrade profile: I have a types.xml which registers my talk type and installs it, and then I have used plone.api to create one thousand and one instances of that portal type, which I'm going to use later for a quick demo of the logging. Below that you can also see my demo upgrade step which I've just created and which is basically empty; it will just apply an empty profile. I'm not going to fill it out, there's not enough time for that. Instead I'm just going to quickly restart the instance to load the upgrade step I've just created, and then I'm going to list the upgrades again. You can see the talk title upgrade which was there before is still here, and now we also have our demo upgrade step which I've just created. What I'm going to do next is tell ftw.upgrade to just install all the proposed upgrades. Usually it will figure out which upgrades need to be installed, and you can mostly tell it to just install all of them and you are set: all your upgrades are run. So we're going to do that. It's going to do a bit more this time: it's going to do something with the one thousand content type instances that we have, it's going to change their titles, and it's also going to run the empty upgrade. It logs progress for larger amounts of content; I think it logs every five seconds or so by default. And now you can see what it actually does: it's processing the indexing queue, re-indexing things in the portal catalog, which sometimes takes a lot of time and usually runs at the end of your transaction. It's really reassuring for us, on larger Plone sites, to see some progress in that case; otherwise you would just see an empty console for two hours and hope that your upgrades are being run and that everything can be committed successfully. Okay, great. Now let me mention some helper methods that we provide. They are available on the UpgradeStep base class, which is also automatically used in the scaffold of your upgrade.py module when you use ftw.upgrade to create new upgrade steps. There's really useful stuff for class migration, especially in-place migration, which basically replaces an object in the tree itself: it does not create new siblings or new instances of your new class, so it does not fire all the Plone events, which is a real performance benefit when you have to replace the class of a content type. If someone in here still needs to do a migration from Archetypes to Dexterity in the next two months, that may prove really useful for you.
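The first upgrade step described above boils down to a small subclass of that same base class; a reconstruction under assumptions, where the 'talk' type name and the object count come from the demo and everything else is a sketch:

    from ftw.upgrade import UpgradeStep
    from plone import api


    class AddTalkContentType(UpgradeStep):
        """Install the talk type and create demo content."""

        def __call__(self):
            # Applies the types.xml shipped with this upgrade, registering 'talk'.
            self.install_upgrade_profile()

            portal = api.portal.get()
            for number in range(1001):
                api.content.create(
                    container=portal,
                    type='talk',
                    title=u'Talk {0}'.format(number),
                )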
There's much more, and I think we don't really have a lot of time anymore. There are, for example, deferrable upgrade steps, with which you can mark certain upgrades as deferrable — telling ftw.upgrade to maybe execute them later, because they're not needed right now — and you can separately re-run them at a later time. That's only possible because it tracks the upgrades that you execute: it tracks every single upgrade, and if you omit one upgrade you can do so, and it will propose that upgrade to you again later. There's good documentation, everything is on GitHub. Also my presentation material — the example I gave you is on GitHub, you can find it via that link. Maybe if I was moving too fast you can have a look when it's a bit calmer, and there's also an introduction on how to get started with the package if you're interested in it. Yes, I think that's all I wanted to tell you, so if you have any questions I would be happy to answer them. Any question? — Does it work with Python 3? — At the moment, unfortunately, we are still working on getting it running on Plone 5.2. We are not yet there, but Thomas there is working on it; he said it's almost done, so it should be ready soon. So yes: with Plone 5.2, and then also Python 3. Right now it's Plone 5.1 and Plone 4.3. — You have described the happy path, where everything goes right. What happens when something goes wrong? Some data was not as you expected, or there are conflict errors and such. — All the upgrades are run within one transaction; it's basically one request. At the moment the console command — I did not have enough time to show you all the details — uses a JSON API in the background, which basically fires a request to the running Plone instance. So if something fails, that request won't be successful and everything will be aborted: the transaction is aborted, nothing is committed, and you have the same state that you had when you started running the upgrades. We have been thinking about maybe adding an intermediate-commit option, which would allow you to tell it to commit after every upgrade that runs successfully, but that's not available yet. I'm not sure if that would be helpful to you, but we sometimes have, as I already mentioned, the situation of huge upgrades where we have a limited time window to run all of them, so it would be reassuring to know that when it crashes you at least keep the state of the last successful upgrade — but that's not yet available. Any other question? Okay. So thanks, David, for the talk. Okay. Thank you. Thank you.
|
The package "ftw.upgrade" simplifies writing and running generic setup upgrade steps for Plone by providing the following features: - Simplified upgrade step creation and registration. - A console script to create and run upgrades. - A JSON API to list and run upgrades. - Helpers for common upgrade tasks. - Deferrable upgrade steps. - And much more … In my talk I will describe key features of ftw.upgrade and give a short live demo.
|
10.5446/55172 (DOI)
|
I was born as a developer, and as with any smoker, a developer will die being a developer, even if he doesn't code for a while or for years — just like me. So, I am a developer inside, and also an agile coach, and I will die as a developer, and I don't like projects. I give you a bit of context. In 2011 Marc Andreessen, founder of Netscape among other things, said that software is eating the world. What it meant was that any company is a software company. Think of it: car manufacturers are software companies. Think of Tesla — Tesla is a huge piece of hardware for running their software, and it's very expensive hardware. Banks run on software, insurance runs on software; any company in 2020 is a software company. Software is running the world, not only eating the world. And in this context, as has been said by Steve Denning, agile is eating the world. Remember, the agile manifesto said, before the values and principles, that we are uncovering — and they say still uncovering — better ways of developing software by doing it and helping others do it. And it works, because agile organizations are the ones that are the fastest to adapt to change. But what is a project? Let me recall the definition given by the Project Management Institute, which says that a project is a temporary endeavor to create a unique product. And that is exactly what I don't like: a project is temporary, while software is continuous. Several well-known voices in the agile community have been saying the same thing.
One is Martin Fowler. Another is Mike Cohn, and a third is Jim Highsmith, who recently published a book that tells the same things. OK. So how do we leave projects behind? I suggest moving, just like in a journey, through four stages. OK — sorry for the bad hand drawing. The first stage is to value experiments over projects. The second stage is to focus on stable teams over temporary endeavors. The third stage is to focus on outcomes instead of execution. And the fourth and last stage is to focus on products instead of software. This might sound a bit contradictory, but we will see it in a while. Let's start from the first stage: experiments over projects. In the last years the focus has shifted to making quick experiments and getting feedback, shortening the feedback loop. Making experiments is much better than making projects, because if you fail an experiment, what you have is a learning; if you fail a project, what you have is a failure. OK, so let's focus on experiments. I would cite Richard Feynman, who said that it doesn't matter how beautiful your theory is, it doesn't matter how smart you are: if it doesn't agree with the experiment, it is plainly wrong. And we have to remember that when we make a project, when we make a software product, the biggest assumption we have to challenge and demonstrate is that the requirements — the features we are supposed to put in the software — are the ones that our customer needs and wants. But when we make a project, we definitely focus on the features, on the requirements. By the way, 'requirement' means something that is required, which also means mandatory. OK, so there is no negotiation on requirements if you stick to the words. The approach is the one that Eric Ries discussed in his two books, The Lean Startup and The Startup Way. I prefer The Startup Way, which is about how organizations that are not startups can leverage experimentation and quick feedback loops to gain better results in the market. The second stage was about stable teams over temporary endeavors. I would refer to something that is well known: when you have a winning team, you don't change the winning team, you try to keep it. This has been described with a couple of curves that are very similar. The first one, on the left, is by Tuckman; it was made in the 60s. When you put a group of people together, the performance initially decreases and then, after the team members start understanding each other, it reaches the tipping point and the team starts to perform. A different point of view has been given by Katzenbach and Smith more recently: when you put people together, it is not a team, it is a working group; you have a team only after a lot of time. If you stick to that definition — and it is actually what happens in many situations — when you start a project you assemble a new team, and this new team has to go down into the valley of low performance and only starts reaching a good performance level while it is approaching the end of the project. As soon as they reach the end of the project and they are a performing team, you dismiss them. It is crazy, in my opinion. Products over software: sometimes developers like me are more focused on technicalities, on being cool with new technologies, new stuff, writing supposedly cool solutions, genius solutions. It is a bit nerdy. They are more focused on writing excellent code than on writing excellent products. This is a mistake.
The technical excellence to which the agile manifesto refers in its principles is not about writing the best software possible, but about writing the best software product possible. It is about writing a solution that the customer will use, not about writing the best technical software that no one will use. In an agile atelier — I say atelier instead of factory, because a friend of mine said that 'factory' means we are industrializing software, which we are not; there is a lot of craftsmanship in software — there is an answer to this problem: a good PO should keep the team from writing software that no one will use. The third stage is outcomes over execution. Execution in projects is focused on outputs. One of the problems is that when you run a project, you are focused on releasing outputs, on producing bits of software, without being sure that someone will like them. There is no care for value. Once you have frozen the requirements, even agile factories have the same problem: you focus on outputs and no longer on what is really important, that is, outcomes — the ability to change the behavior of our customers and users. This is much more important than delivering everything that was in the backlog. The four stages alone are not enough, though. I think it is useful to also have some guiding principles to help reach the no-projects world. First of all, organizations should be able to fund capacity instead of funding scope. When an organization needs to make a new development, a new product, the first thing it has to do today is to list what is supposed to be included in the product, write a scope, have the scope approved and the budget released for that scope. Instead, if you fund capacity, you leave the team and the organization free to adapt to changes, and you have a performing team that is able to switch from one product to another, to change priorities, to move to the new targets and goals of the organization. The second principle, which is somehow difficult to implement, is that there is no better place to evolve and maintain a software product than the team that created it. Of course it is: if someone wrote the software, they are the right people to take a look and solve the problems in that software. But this is not always easy to do. For instance, one of the arguments is that maintenance is operational expenditure, while development is capital expenditure. But there are ways to manage this. The third thing is to bring the work to the team. When you manage projects by the book, what you do is create work, then assemble a team and bring the team to the work. So first you have to define the work, and then you have to bring people to that work. Instead, it is much better to keep teams stable, as we said earlier, and bring new or upcoming work to the existing teams. It is much more effective and efficient. To enable some of the guiding principles we have stated so far, we have to apply a lean portfolio management strategy, which is not a project portfolio strategy — it is a product portfolio strategy. We have to slice our product initiatives into tiny pieces that can be prioritized, just like you do on the backlog but at a higher level, and we have to review the portfolio, let's say, at least once a month or once a quarter, and then change and adapt our strategy. For the very same reason, we have to be lean with budget. If you discuss yearly budgets, you have a problem: you are, let me say, chaining the organization for one year.
I would say even more than one year, because when you start making budgets it's three, four, five, six months before. To gather the information to prepare a budget request, you have to work a few months earlier and make a lot of assumptions, which are probably no longer true when you finally have the money to spend. So you have to be more lean with budget. This is, of course, a corollary of what we discussed about teams and what Martin Fowler said: we have to organize products around stable, motivated and accountable teams. Scaling. A lot of people discuss scaling; there are a lot of scaling frameworks being discussed on the socials right now. I'm not going into the discussion of which is the best scaling framework. Scaling — with probably the exception of Scrum at Scale — is mainly a matter of the size of the product you have to develop; it's not a matter of the size of the organization. Many organizations start to scale just because they are big. They shouldn't: they have to scale only if their products are big, and probably not even in that case. It's more interesting to focus on descaling instead of scaling. And since scaling increases complexity, you should scale only if and when needed. When you decide to move on a product initiative, you have to ask 'is it worth making this product?', not 'how much does it cost?'. Because when the question is how much it costs, it's very easy for CFOs, project managers and business line managers to have the Black Friday syndrome: OK, it's cheap, I'll buy it; OK, it's a huge discount, I'll buy it, I'll make this product. But no one asks: is it worth it? How many of you have useless things bought on Black Friday sitting at home? Maybe you recycle them at Christmas as gifts for your best friends. OK. Keeping the feedback loop short is critical. When you make a product, you have to validate your assumptions as soon as possible, so you need a way to validate your assumptions continuously — not necessarily continuous releases, but an approach that in the end will probably become continuous release. One of the major problems, even in an agile atelier, is that roadmaps are built on features. When you make a roadmap — if you follow, let's say, the user story mapping approach, or whatever approach, Jeff Patton's or Roman Pichler's way of building roadmaps — what you will have is a list of features you expect for every milestone. This is kind of waterfall, I mean: you are just saying that this feature will be available by a specific date. What I suggest instead is to focus on defining which needs we want to address by each of the milestones in the roadmap. So let's say, for instance, that we are at the end of October: by January 2020 we want people to be able to work without projects in that organization. This is a need — a desire, maybe. It is not the feature that fulfils this desire. This gives the teams — the product teams, which are not just the development teams — the freedom to choose the best way to address those needs, and to discuss with the customers and validate with them that the solution they are bringing is the solution the customer actually needs. So, what instead of projects? Once again: continuous product management. We need to focus on products, not on projects. Someone argues — do you remember the definition of projects that I gave at the beginning of the talk? A project is a temporary endeavor to create a unique product.
So, since the definition says that projects create products, some read it the other way around: that products are built by projects. But we don't need projects to build software products. A friend of mine recently switched to a different company, and his role is service owner. He has several products — it is a big company with several brands — and the products my friend manages are customized for the different brands of the organization. And he said: you know, Dimitri, I stopped making projects; I just ask for the budget for the whole year to fund the team that manages the product, because it was easier for us. And you know what happened? The CFO didn't have any objection to that, and I was quite surprised. And most of all, one of the brands changed its CEO just a couple of months after we switched to a product mindset. Instead of following the project approach — in which the request of the former CEO would have been approved as a project, a budget allocated for the initiative, and then we would have had to keep working on that project — I could just switch to the goals of the new CEO without asking anyone, and I was able to give him an answer in a couple of weeks instead of months. He didn't know anything about no-projects, by the way. And I said: great, this is #noprojects. And he said: what are you talking about? But, OK. Antoine de Saint-Exupéry once said that perfection is achieved when there is nothing more to take away, not when there is nothing more to add. We want to simplify the way software products, and the digital services built on top of software products, are built. And so what I'm suggesting is to stop making projects and live happily ever after without the pains of projects. OK. I give you a few references. On this topic, in 2018, three books were written. Two by Allan Kelly: Project Myopia and Continuous Digital. Because one of the discussions that there has been on the socials, and that is still active, is about the usage of 'no': why are you saying 'no'? What is your proposal? Allan's proposal is Continuous Digital; my proposal is continuous product management. We are pretty much on the same page, but with something different. Shane Hastie and Evan Leybourn wrote the book #noprojects, which you can download on InfoQ for free, or buy on Amazon too. Interesting reading; also Project to Product. The Lean Startup — I already discussed that. Beyond Budgeting: very interesting, because it's one of the keys to switching, on the finance level, from a project to a product mindset. Steve Denning, The Age of Agile: a different perspective on agility, which is different from capital-A Agile. Outcomes Over Output: an interesting book, which discusses almost the same as the — I don't remember — 8th principle, which is outcomes over execution. And then the release candidate of my book: you can get it on Leanpub. I've been writing this book since February; I've been working on this topic for a year and a half. So if you want to participate, give me feedback on the talk, or also on the topics written in this book, you can. And then, questions. OK. I'm arriving. Sorry. — Hi. My question is simple: as we see that most people don't like projects and they seem not to be that efficient, why are we still using them in the vast majority of companies? — Sorry, can you repeat? I'm a little bit deaf, so I could not hear your question. Thank you. — Why do we use projects as a work strategy instead of the kind of strategies you have been describing, since they seem not to be satisfying, and not to be efficient either?
Why do we stick with that? OK. If I got your question — I don't know why we are still using projects; maybe because we are used to making projects. I think you have to look at this from different points of view. Big organizations make projects because they have decided to outsource their IT capability: they prefer to buy the capability, and they don't want to risk buying more capability than they need, so they still prefer to have projects to keep things under control. And this is a problem, because many big organizations became project-centric instead of product-centric, and so they are not able to satisfy their customers, either internal or external, because they don't focus on products. We use projects in some cases because the CFO asks us for projects. And there is another situation in which projects are probably the only way to go: if you are a system integrator, what you are going to sell is a project. Which is kind of weird, because no one really buys a project — I would buy the final result of the project — but instead there is someone who sells me a project, and I buy the project, not the product. This is kind of problematic, in my opinion. It is a situation which cannot be solved easily, but I worked with a couple of system integrators that are starting to sell the team instead of selling the individuals that are building the project, which is a minor change from a certain point of view, because you are still selling a project, but it's a big change, because you are selling the performance of a high-performing team that keeps working together on different projects over time. So we have different situations. I had a discussion — you can also find it on Twitter, because it was a public discussion — with people who claimed that for compliance reasons you have to have a project, that for accountancy reasons you have to have a project. I don't think so. The CFOs in Italy, at least, with whom I discussed the topic, did not see it as an issue — but maybe in other countries it can be one. Any other questions? Anybody else? — Okay, I have a question for you: what are people's feelings when you start to talk about experiments and no longer about projects? — It depends. People have mixed feelings about this, because experiments are seen as: OK, let's do whatever you think, just waste our money, and who cares about the result — if it succeeds, it's OK; if it doesn't, it's not OK, but who cares, we have funded an experiment, so for us it's fine. But making experiments, especially when you have a product out in the wild, on the market, is not that easy. I'll give you an example: Candy Crush. How many of you know Candy Crush? Candy Crush Saga? OK, it's a game on smartphones, also on Facebook, I think. A couple of months ago they made a change — an experiment, and this was declared in one of the Candy Crush support forums. Candy Crush limits the number of lives you have to play the game, and a new life is given to you after half an hour. The experiment run by the development team was to give you a life every hour instead of every half an hour. That means that once you finish your lives, instead of being able to start again in two hours and a half, you have to wait five hours. And there was a huge complaint from the crowd. Many people left Candy Crush forever. They lost customers. They are not paying customers — some of them buy in-app items.
Many of them just see the advertisements. But they lost customers because of a badly designed experiment. Making experiments is not just doing whatever you want: it's doing something that is aimed at generating an impact on the customers, at changing the behavior of our customers, with good reasons. So you fund an experiment with good reasons. Another thing that is important is how you define an experiment: you have to set metrics, KPIs, whatever — you have to define how you measure that your experiment is successful. You have to fix a budget and also a time frame in which you want the experiment to be completed. Once you run an experiment, after a while you have to decide: the experiment confirmed my theory; the experiment did not confirm my theory, but confirmed the contrary; or the experiment was inconclusive. You have to take decisions based on the result. If the experiment is inconclusive, it's as if the experiment had failed, but still you learned something you didn't know before starting it. So there is big value in making experiments, instead of holding on to large assumptions and testing them after one or two years — that doesn't work. Any other questions? Thank you. Sorry. — One point on the ten principles: you say 'apply a lean portfolio management strategy'. What does it mean? What do you do in the real world? — When I talk about a lean portfolio management strategy, I refer in this case to one of the scaling frameworks — probably the most hated one, which is SAFe — which has some very complicated stuff, but also some very good points. The portfolio management strategy that is in SAFe is a good starting point: you slice your product ideas into tiny slices, you prioritize them in order to have the fastest possible validation of your hypotheses, and you give priority to the product steps and product parts that you want to define. Classical project portfolio management is different, because it is mainly focused on managing what is currently under work — what we are currently working on. So you have a huge dashboard of all the projects that have already been started, not of what will happen in the future. I would focus on what will happen in the future; what happened in the past is lessons learned, but let's forget it and start again looking at the future. Is there a question? — I have a short question about the difference, for you, between having a concrete product you sell as a company or an institution, and, on the other hand, not having anything concrete — no portfolio element you can sell, only something very, very abstract. For example universities: they sell knowledge, that's the product they make, and investing continuously into the product 'knowledge' is totally different, while projects tend to have a concrete time frame, and that's also an advantage. — OK, I'm not sure what the question actually is. — The question is: in a product world you have a concrete thing, you can go beyond projects and continue to develop and refine your product to get better; but if you don't have a concrete product — something stable, or something that changes completely — how do you get there without a product? — OK, I understand. Yes, there are many software products which are not really products, from a certain point of view. There are situations in which making a project is probably the best option.
We are at a Plone conference, so: once you have set up the CMS for your customer, what did you sell? What does the customer have in his hands? He has a product that you are not building — it's the Plone community that is developing the product — and probably you don't have to manage that product, so in this case a project is probably most appropriate. The same applies to migrations: when you have to migrate from one version, or from a legacy software to a new software, the piece of code that you write to migrate from the old platform to the new platform is throwaway. I mean that once you reach the goal of the migration, you don't care about that software anymore; it's gone forever, it's not useful anymore. At the same time, the principles behind my view of no-projects can still be applied, because the team that delivers that project can move to another project, and at least you try to keep together the things that are important — namely, the people. I agree, there are situations in which projects are still appropriate, and you have to decide. In many cases you can take a product point of view of the world: also in large organizations, a lot of line-of-business software can be seen as a product. But, for instance, one of the major problems I saw in my experience is with ERPs. An ERP is a product from Microsoft's or SAP's point of view, but it is not a product from the customer's point of view; there is something in the middle, because Microsoft and SAP don't sell the software to the end customer — they sell the software to a partner who makes the customization, and the customization doesn't change so often. But still, you probably have to keep an eye on the product, because even if it doesn't change so often, it changes — because the business changes, because the company changes, because the world surrounding the company changes. So even in those cases, I think that in the coming years a shift to a product mindset is necessary for companies to survive. Not for all of them, but little by little it will happen. Thank you. — Okay. Thank you, Dimitri, for sharing your experience. Thank you all.
|
InfoQ defined #noprojects as an emerging topic for 2019, and I'm on the same page. I believe there are better ways than projects to evolve a company's digital portfolio, because while a project is temporary, software is continuous. Quitting projects can be hard and challenging. It's not only a matter of re-branding what was formerly known as a project with a different name: it requires several changes at different organization levels — budgeting, procurement, contracts, organizational agility, technical excellence. This talk describes a journey that, through 4 stages (supported by 10 principles), leads from a project world to a #noprojects one: how to create a product culture, evolve the digital portfolio, quit making projects... and live happily ever after.
|
10.5446/55174 (DOI)
|
So, lesson learned: I will never talk again about spaghetti alla bolognese. Sorry about that. So, my name is Eric Bréhault. I've been a developer for a long time, but for the last two years I've been working at Onna, where I'm a front-end developer. My job is to build a front-end application — quite a big one — on top of Guillotina. So, my topic today is traversing. Traversing is simple, right? Most of you are Plone developers, Zope developers, Guillotina developers, and you know about traversing, because it's something you use every day. It's super simple — but explaining traversing is super challenging. I don't know if it has ever happened to you: you have someone who does not know anything about traversing, someone doing routing — like a Django developer who is going to use routing, a React developer who is going to use routing, everybody's using routing — and you start explaining: yes, traversing is so cool, it's something great, and you explain how it works, and it ends up going super quickly, because you feel you have explained everything about it. The other guy seems to think he got everything you said, but actually he thinks it's nothing special. So, in your mind you have been explaining this ideal thing you have in mind — it's so perfect — and what the guy got from your explanation is basically this: you're going to find a default route, going to a default view, and, yeah, why should I care, right? So, yeah, it's difficult to explain, and my objective here is to try to give you some elements to go deeper into these concepts. It's not technical, right? It's about the concept — trying to approach it in a way that allows you to explain it to someone who does not know about it. So, yeah, traversing is the most common thing you can imagine: billions of people are experiencing traversing every day, you know, on the web. So, it's something super common — and surprisingly, web developers don't get it. They don't get it at all. Even brilliant people: I have been in this situation sometimes, facing some brilliant people and explaining traversing, and they're like: no, sorry, no. So, the question is: are we smarter? When I say we, I mean Guillotina developers, Zope developers, Plone developers, Pyramid developers — the few of them who use traversing. Are we smarter? I don't think we are, really. Let's be honest, we're not smarter. Maybe we have seen the light, right? You know this movie, maybe, from the 80s: The Blues Brothers? It's a good movie. As we are in a cinema, maybe I should take a moment to explain this movie. It's the story of a Zope developer — actually, he's a web developer initially, and he joins back his brother after spending many years on a long-term Django project. And he discovers traversal, he sees the light — that's when it happens — and then he starts dancing and singing, as we all do when we do programming. So, yeah, it's a fantastic movie. Anyway. So, we've seen the light, but let's go back to the basics. The web is simple. It's about simple concepts: URLs and hypermedia, right? That's the basis of the web. And people usually understand it pretty well — even non-technical people know what's in a URL. Not all people, but, yeah, a URL is quite simple to get. Hypermedia as well: I mean, you have seen HTML already, that's hypermedia. And because of those two concepts — hypermedia plus URLs — the web is traversable. So, traversing is simple as well.
And it's powerful, but people don't get it. So, let's go into it. Traversing can apply to any tree-based information structure, right? It could be anything: a file system, the XML DOM. Anything like a tree can be traversed. When you do that, you traverse, right? You are reaching a point in your file system, you are navigating from one point to another with a relative path or an absolute path. That's traversing. That's just what it is, nothing else. Simple. And, yeah, the web is traversable. You know links, right? You have something like this: 'click here'. You click, and you go to another place. That's how it works; that's what happens when you use a browser every day. We'll come back to this example, because it's a very good one, but later. Let's talk about web apps. What are web apps? Web apps are just any software which uses the web format, okay? That's a simple definition you can find. It can apply to back-end applications, to front-end applications, to both. So, the web is about traversing, and web applications use the web format — and yet all web applications are all about routing. That's crazy. And I'm going to show you an example of how routing 'rocks'. Routing is like this: you have a page containing a link — yeah, I'm going to see if it's readable enough — a link like /product/123. And on the link, an anchor, with routerLink or routeTo or whatever, depending on the technology you're using, you say: I want to route to product 123. So, what happens behind the scenes is: the routing system knows that product/whatever is supposed to be parsed to extract the 'whatever' part — 123 — because that's a product ID. Then you call the product view component, let's say, and this component receives 123 as a parameter. And then, if you are using a state management system, you dispatch: yeah, load product 123. So, we dispatch 'load product 123', and somewhere we have something — a service, an effect, whatever — that catches this dispatched action and takes 123 to build the corresponding back-end endpoint, /product/123. We are back to what we had at the origin, right? So, we call /product/123 on an endpoint. The server-side routing receives /product/123 and knows that everything like /product/whatever is supposed to be parsed, so it extracts the 123 — that's a product ID — and gives it to the product view. Here, in the product view, you receive 123 as the ID and you ask your database for the information about product 123 — you make a SELECT with it, for instance. And you get a result, which is probably JSON: an ID, a title, and a related-products list — some kind of list of paths to other products, like 456, for instance. The front end receives this, you dispatch the payload, the reducer reads it, you get this new mention of product 456, store it in the state, and then pass it to the product view component to create a new link in the rendering, with an anchor tag, routerLink equals /product/456.
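Set against that routing chain, the plain path-walking idea mentioned earlier (file system, DOM, any tree) needs no knowledge of resource types at all. A minimal, framework-free sketch in Python of what "traverse a path over a tree" means — the nested mapping and the sample path are only illustrative:

    # A nested mapping standing in for any tree (file system, DOM, content tree).
    site = {
        "products": {
            "bikes": {
                "123": {"title": "A bike", "related": ["/products/bikes/456"]},
            },
        },
    }

    def traverse(root, path):
        """Walk a /-separated path segment by segment; no per-type routes."""
        node = root
        for segment in path.strip("/").split("/"):
            node = node[segment]          # a KeyError here is the web's 404
        return node

    print(traverse(site, "/products/bikes/123")["title"])  # -> "A bike"

Moving "123" under a new "bikes" folder changes nothing in this code; only the path changes, which is exactly the point made below about the static websites of the 90s.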
And yes, that routing setup is so clean and simple, it works nicely, I love it... No. That's craziness. That's total craziness. So, let's go back to the 90s, when I started making web pages — in '93, I think. At the time, you had a static website and you could create a page in a folder: you have a folder named 'products' and you make a page 123.html. And if you go to your web server with your browser, you go to /products/123 and it displays the page. But then you say: hey, maybe I should classify my products — 123 is actually a bike, so I'll make a subfolder inside products named 'bikes' and move my HTML file into the bikes folder. You do that, then you go to your browser, you go to /products/bikes/123 and, yes, it works. What is that? Genius. That's perfect. What's wrong with us? Because we are implementing routing systems everywhere, and whenever you want to move your 123 bike into the bikes folder, it's all broken and you need to rebuild your server, rebuild your front-end application, everything. That's super bad. The problem — the original sin of routing — is not, as I have been demonstrating with my crappy images, that you keep on parsing and concatenating the same string, going from a path to an ID and so on. It's not about that, because concatenating strings is okay: maybe it's stupid, but it's not wasting too much energy, right? We can afford that. The sin here, the problem, is that at every step of the routing example I've been giving, you have to know the type of resource you are processing. You need to know that it's a product, so you need to call this component, and you need to produce such a route — and you need to know that on the server side and on the front side. Knowledge is the original sin. That's what the Bible says, and it all started with an apple. So, yeah, that's the problem with routing. So, let's go back to this example. That's a simple web page — well, it's not, but it's a link you could find in a web page: you have 'click here', and that's a link. And you can click, right? You can go there and click, and — what's it going to be? Another page, maybe. Or maybe not; maybe it's a PDF. You don't know at the time you click, and your browser doesn't know at the time you click, what it is you're going to get after clicking here. So, let's try. Oops. Yeah. Why not? Yeah. Someone changed this link, it was not the original one, but, yeah, anyway. Shit. Yeah. Let me recollect myself. Yeah, sorry. So, where is my presentation now? So, routing is thinking in terms of databases: 'products' is actually my product table, and I map it to the web. And the web is not like a database. It's not. It's a gigantic library, right? The web is like a box of chocolates: you follow a link, you never know what you're gonna get. So, yeah — this movie, I'm going to talk about movies again. This movie is from the 90s, you know it: Forrest Gump. It's about an Angular developer. Everybody tells him that Angular cannot run fast, but he ends up proving it actually can run really fast — maybe faster than anything else. And he has a lot of adventures because of that, and he even ends up marrying Robin Wright. So, I think Angular is a pretty cool technology. But yeah, nice movie, you should see it if you haven't. So, routing comes from this SQL vision, and I think it's sad and depressing — I really dislike SQL. The web is free and wild: you can push any information wherever you want, you can link anything to anything else.
So, the spirit of the World Wide Web is free and wild — and web apps are not. I would like web apps to be free and wild. Our applications should work just like a browser. JSON — or XML, if you're old-fashioned — is hypermedia: it contains links, just like an HTML page; you get actual links in a JSON payload. So, when you click a link in a browser, you don't know what you're going to get, but you're super happy, because that's cool — you know your browser is going to react to it. Well, I want my application to act the same: I get a link from a payload, I go to that link, and my application should be neutral about it — not assume anything about it, and render the information depending on its nature. So, let's go for a very simple demo. Super, super simple — it's just showing how traversal could work. This application just takes as input a GitHub repository; that's what I have in my input here. And if I go, it calls the GitHub API, and since it's a folder, I get a list of files, a list of links. If I click on one of them, it calls the API again and renders its content, because that's a file. But I don't know, at the time I click on this link here, whether it's a file or a folder. I have no information about that; I just traverse to it. So, I traverse the GitHub API, it returns me an object which says: hey, I'm a folder. Okay — a folder is displayed as a folder, providing a list. Or it says: maybe I'm a file. Then I just behave the way I'm supposed to with a file: I show the text content. So, just like a browser would do with a normal link, my application does the same. When I start my application, I have no idea which repository the user wants to go to, so I cannot assume that all the files are going to be at this location and all the folders at that location. I cannot program any route. It's totally open, and my application does not need to be taught about it. It's neutral, just like a browser, right? It reacts to the content it gets from the server. The server is the master of the structure of the information, and the application is just neutral and receives information as it is. So, that's the objective of front-end traversing.
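A stripped-down sketch of the demo's idea, written here in plain Python with requests rather than Angular: call the GitHub contents API for whatever path a link points to, and only decide how to render once the response says whether it is a directory or a file. The repository and path in the usage comment are just placeholders.

    import base64
    import requests

    def render(owner, repo, path=""):
        """Fetch a path from the GitHub contents API and react to what comes back."""
        url = "https://api.github.com/repos/%s/%s/contents/%s" % (owner, repo, path)
        data = requests.get(url).json()

        if isinstance(data, list):
            # A directory: the payload is a list of links we can traverse next.
            for entry in data:
                print("%-4s %s" % (entry["type"], entry["path"]))
        elif data.get("type") == "file":
            # A file: the content arrives base64-encoded.
            print(base64.b64decode(data["content"]).decode("utf-8", "replace"))

    # e.g. render("someowner", "somerepo", "README.rst")  # placeholder names

Nothing in this function knows the shape of the repository in advance; the same call handles any path, which is the "neutral application" behaviour described above.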
You're probably comfortable with back-end traversing — basically, it happens on the HTTP server. And in the previous example about GitHub, I was not using front-end traversing: I just had a front-end application able to dialogue with, and get information from, a traversable endpoint. Now, how could we reach a point where we apply traversing to the front end? The idea is nice, but it needs another concept, and I will take a few moments to explain it: state management. There have been several talks about React and state management, Redux, and so on. I will try to explain it in a very simple way, not too technical if possible. You have something like this in an application: a login method that POSTs to an endpoint, /login, and when you get the result, you refresh the profile — it's supposed to refresh the name of the user, his photo, whatever it is. That's kind of an MVC approach, and it works okay when things are simple. The truth is, the front end is not simple. Whatever you're trying to do with the front end, if you think it could go like this, you'd better think again, because it's more like this. So, yeah, another movie — this scene is from Predator, the movie from the 80s. It's about a very, very mean back-end developer who arrives on a planet where peaceful front-end developers live, and he starts hunting and killing them, just because he is very, very mean. But in the end, the front-end developer wins, and that's cool. Yeah, I like this movie. Anyway, the thing is, when you're doing front end, you're in a crossfire. There's shooting everywhere around; it's super dangerous. You have back-end responses — you don't know when. You have user interactions — no idea when; super violent ones, like a scroll or a drag-and-drop pushing a ton of events. You have inter-component communication. You have everything happening asynchronously. So, it's very much like a war: it's super dangerous, and you will probably end up calling refreshProfile in a case you don't expect, which will probably produce an inconsistent state in your application. That's a real problem if you stick to the MVC approach, and that's why we use state management. What's the principle of it? You should understand that time is an illusion. The UI is totally dynamic, the user is, the server is as well: everything is moving at the same time. So, it's dynamic — and if you try to implement it as a dynamic process, that's a mistake, you will never succeed. If you try to do that, you act like you can control the timing, but the timing is not yours. Totally not. So, you should forget about time. No past, no future. That's the Redux approach, the state management approach. For a given state, your UI will look like this; every time this state shows up again, it will always look like this, whatever the previous state was, whatever the next one is going to be. For this state, I get this. Simple. So, you have no past and no future, and it's totally static. And it looks dynamic because every time something happens, you replace the current state with another one — you get a new static version of what you have — and as we are easily fooled, we think it's dynamic. But it's not. So, that's how it works. Let's take a real-life example. I call my wife just before cooking dinner and I ask her: will you be home for dinner? Depending on the answer, I'm going to cook for two or for one, right? That's a simple life case. But what if my wife is an alien? Some of you may think your life partner is an alien — I don't know, maybe a few of you don't; it's okay, actually, you can manage it with Redux. It's something you can manage. So, let's say my wife is an alien in a galaxy far, far, far away. If I ask her, will you be home for dinner, I cannot expect an immediate answer — you know, light speed and so on, it's super painful. So, I ask the question, but I don't expect any answer. I forget about my question. I just ask her and then forget. So, yeah, my family often says: you never listen to what we say. And I'm like: yeah, of course, I'm a front-end developer, what did you think? No, I don't listen. Eventually she will reply, and what I'm going to do is increment the number of guests in my state, somewhere. And at the time I start cooking, I just go to the state and see how many guests are planned, and then I cook for that number of guests. So, that's how it works: it's decoupling stuff. Decoupling is super important.
So, Redux provides this ability to discuss with aliens — people you don't know when they're going to answer. The server is an alien. And if you think about it, front-end developers very much consider back-end developers to be kind of aliens. So, decoupling is cool. Consider a car: you're building a car, you have the speedometer indicating the speed you are running at, and you have the accelerator to change the speed, right? If you implement the accelerator in such a way that it directly controls what the speedometer displays, that's a big mistake. First because it can be wrong — it's not because you are accelerating that the speed is actually going to change — and also because you might end up with a system where you say: hey, maybe we should just use the speedometer to change the speed, because it's connected, actually. Well, nobody would do that on a car, right? But we make this kind of very bad mistake all the time in web applications. So, that's why decoupling should be a very big concern for all of us, every time. Let's go into more detail about how state management works. You dispatch an action. Dispatching an action is sharing a piece of information: 'I will be at home for dinner' — that's information, so I send this information, and it goes to the state. Dispatching an action is basically sending information — like an event or a signal, if you want to use software-related naming. Then you have a reducer. Your reducer takes the previous state, which says the number of guests is one, and increments it, so you have a new state saying the number of guests is two. It's just about reading the information and creating a new state to replace the old one — the correct state for the current situation. And that's all it does; it does not care about the consequences. That's the most important thing to understand: you just update the state and you forget about the rest. The rest is handled by someone else. Someone else is me, the cooking-dinner component. I start cooking dinner, and when I instantiate myself as a cook, I take from the state the number of guests, and that's it. A selector is like an SQL view, basically: you extract from the state the information you need. As a cook, I don't need much information besides the number of guests. I don't try to organize the dinner at the same time I'm cooking it — that's not my job as the cooking component. I should just focus on the number of guests; I get it from the state, and that's all. You don't mix concerns: don't organize the dinner at the same time you are cooking it. So, those are the principles.
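The cycle just described — dispatch an information-only action, let a pure reducer produce a new state, let components read only what they need through a selector — is easy to sketch in a few lines of framework-free Python. This is not the Redux or ngrx API, just the shape of the idea, with names invented for the dinner example:

    def reducer(state, action):
        """Pure function: old state + action -> new state. No side effects."""
        if action["type"] == "GUEST_CONFIRMED":
            return {**state, "guests": state["guests"] + 1}
        return state

    def select_guests(state):
        """Selector: the cook only reads the slice it needs."""
        return state["guests"]

    state = {"guests": 1}                        # me, cooking anyway
    # The reply finally arrives from the distant galaxy: dispatch it.
    state = reducer(state, {"type": "GUEST_CONFIRMED"})
    print("cooking for", select_guests(state))   # -> cooking for 2

The reducer never calls the kitchen and the cook never waits for the reply; the state object in the middle is the only thing they share, which is the decoupling being argued for.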
Now, state and traversing: let's connect the two concepts. You could imagine something like this: instead of routing, you use a directive like 'traverseTo' and you pass it a path, right? So, this page would be in your application, on the front side, and you have a link. When you click on this link, you dispatch: yeah, load this path. And this path is actually the one you get in your browser, but it also corresponds to the one your back end expects in order to provide you the 'Star Wars Day' article, right? So, it calls the back end with the very same path: we take the path we had on the link — browser-based, local information produced by the JavaScript application — prepend the back-end domain and call it. I get a JSON back. And this JSON, if it's properly structured and well done, tells me: yeah, I'm an article, and here is some information about that. It could also be an interface, it could be whatever you want — but something allowing you to know what it is about. So, that's an article. Good. Then, you have a type-to-view mapping, just like your browser knows that when it's HTML, it renders it; when it's a PDF, it renders it with another tool; when it's, I don't know, a zip file, it downloads it. This mapping is somewhere in your browser, right? It knows, for each type of media, what it should do. Well, you are not manipulating media types in your web apps, but custom types — like 'article', why not. So, you map somewhere that 'article' corresponds to the article component. At the time you get your response from the back end, you discover — you didn't know before — that it's an article, so you instantiate the article component, and it receives the context. The context is just like the one you have in any Plone template, right? That's the current object you are rendering. And you get that in your class: you select it from the state, and it acts as an observable. That means you don't need to care about whether the context is there or not: when it pops up, you receive it, and the component renders it. And then you can do things like this: traverse to '..'. You go to the parent content, which might be, I don't know, another article, or a folder, or a collection of articles — it can be anything, I just don't care, I just want to go one step up. That's the way we go. And the thing with these concepts is that we are building a traversable collection. That means the traversal-based front-end application, similar to the GitHub one I showed, actually takes every back-end response and pushes it into a collection — a collection in the state. So, I have my back end reproduced locally, kind of, and I can query it with selectors; it's in my state, acting like a cache, if you want. And then I can do stuff like this: I can have a component, let's say on the left side, where I want to show the list of related articles. I just use this kind of selector: select '..', then I pipe it and I get the items of the corresponding object, which happens to be a folder. Or I can go to '../../09', so I get the articles from September. That's quite handy. You can traverse your local data, and the thing is, if that data does not exist yet, it is queried behind the scenes. So, if you are loading an article and you want its parent and all its ancestors — because you are trying to render a breadcrumb, for instance — the system will see that this content is missing from the state, make the corresponding call, and every time you make a call, you enrich your state. Now, what if I need an actual route? That's okay: you can have routes with traversing if you want. It works. And you can also decide to remove routes entirely by just having different views for the '/' object, which is the root. So, if you need, I don't know, an admin panel, it could be an actual view for the content root. You map to it.
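Putting those pieces together — a type-to-view mapping and a path-keyed local cache that is enriched on every back-end call — here is a framework-neutral sketch. This is not the angular-traversal API; the registry, the "@type" key and the backend_get function are illustrative, though "@type" is the field both plone.restapi and Guillotina use to announce a content type:

    VIEWS = {}                      # content type -> view callable

    def view_for(type_name):
        """Register a view component for a content type (like media-type handlers)."""
        def register(func):
            VIEWS[type_name] = func
            return func
        return register

    @view_for("Article")
    def article_view(context):
        return "<h1>%s</h1>" % context["title"]

    class TraversingState:
        def __init__(self, fetch):
            self.fetch = fetch      # function path -> JSON (the back-end call)
            self.cache = {}         # path -> already-seen content ("local back end")

        def traverse(self, path):
            if path not in self.cache:
                self.cache[path] = self.fetch(path)     # enrich the state lazily
            context = self.cache[path]
            return VIEWS[context["@type"]](context)     # pick the view at runtime

    # e.g. TraversingState(fetch=backend_get).traverse("/2019/10/star-wars-day")
    # where backend_get is a placeholder for whatever HTTP call the app uses.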
So, I'm stopping now because my time is over. Just one final slide: angular-traversal, the state-traversal layer on top of it, and Grange are a stack of Angular packages implementing those concepts, and they allow you to build great applications super fast. So, if you are into Angular — and I know a few of you are — you can check them out. But this talk was not about that; it was mainly about the concepts. So, I hope it was interesting. Thank you for your attention.
|
Traversing is a key concept in web navigation, yet it is pretty difficult to explain and most web developers don’t get it. Object traversal has been introduced by Zope decades ago, but mainstream technologies never approached it. The first objective of this talk is to attempt to explain traversing with simple words so you can explain it yourself to non-Zope/Pyramid/Guillotina developers. We will also analyse the mono-maniac usage of routing in the current frontend frameworks and identify how traversing could be a valuable alternative approach, more specifically in a Redux-based architecture.
|
10.5446/55175 (DOI)
|
Yes, welcome to Ferrara. Welcome to the State of Plone talk. I wanted to start out with some community updates from the past year. Some of the biggest news is that we've added six new Plone Foundation members: that's Christine, Thomas, Fulvio, Rikupekka, Kim, and Stefania. So welcome — it's fantastic to have you. And I do want to make a pitch, because every year we look at this list and we say: why weren't you on this before? I thought you already were. I'm sure there are many more of you out there who should be on the Foundation membership list. Feel free to nominate yourself. Basically, we're looking for anyone and everyone who has made significant and/or enduring contributions to the project and the community. The other big news this year is that we've also added Zope to the Plone Foundation. I want to thank Matthew Wilkes — are you here? Yes, he is — for handling the machinations behind pulling it into the Foundation. That's going to help us keep that project not just alive but protected, because it's something that we desperately need to have around for the future. We've also added 35 new core contributors this year. And we funded nine different events this year: eight were sprints, and one was the Python Web Conference in Indiana. Yes, sorry, Ramon, I wasn't sure which country to put there. I want to make the point that the Foundation has been fantastic about giving out money to host community sprints. So if you're interested in having a bunch of Plone folks come to your hometown and drink all the beer you have and write all the code you need, it's a great way to build up your local community and the worldwide Plone community. And really, it's a fantastic learning experience. There's even more money available if you can justify that your sprint is strategic, providing something for the future of Plone. I also want to give a pitch that, as we said earlier, there are sprints this weekend, and those will cover all sorts of skill levels. So don't feel that you're too new or too naive about how things work to contribute, because those things are actually super useful, as we work on testing, interfaces and documentation as well. Sven gave me some updates about the Admin and Infrastructure team. They had a sprint in Barcelona this year. This is the team that manages all the services that Plone uses: the hosting for the website, community.plone.org and a number of other backend systems — dist.plone.org, that sort of thing — that you'd definitely notice if they weren't there. It's hard to tell where their domain is, but if it weren't there, you'd definitely notice. They spent quite a bit of time working on simplifying their entire service infrastructure and rewriting the Ansible scripts to make them a lot more reusable — something that can be picked up by people who are part-time server admins. They're looking to do about two to three sprints in the next year, and they are definitely looking for new people to help out. They're looking for people with some experience, but you don't have to be an expert. As an ops guy, I can definitely tell you that there's a one-to-one correlation between expertise and opinions when it comes to DevOps — so maybe the newer folks might be a little more helpful.
But if you've got ops folks at your company and you're looking to get them involved in some way, this would be a fantastic way to do that. As was mentioned earlier, Plone took part in the Summer of Code this year. We had two students: Alak was working on the GatsbyJS preview, and I believe he's here, right? Yes, in the back there, thank you. And Karan was working on the Guillotina API evolution. We do want to thank the Python Software Foundation for supporting us this year. We didn't get in as a sponsoring organization on our own, but we were able to get in under the Python Software Foundation's umbrella. There's also a new program from Google this year called the Season of Docs, and thanks to Paul's hard work we were able to get into that as well. This is basically matching up technical writers with open source projects to find a way to get them involved in contributing to open source. So we have Chris, who has been working with us throughout this fall, and he's working to extend and reorganize the Volto documentation to really cater to a more diverse set of developer experience levels and backgrounds. I do want to call out the training we've had. The past two years have seen a definite upswing in the number of people attending trainings before the conferences, and I've been really pleased to see what's been happening with the training.plone.org site. Who attended a training session before the conference this year? That's a great number of people. All those training sessions are available for free on training.plone.org, so if you want to go back and refer to them later, or take something you've missed, it's all available. Which I think is really great: we've produced a really great set of open source documentation around training. Sorry, I keep messing with this. Right now we have 17 different courses available on there, and I definitely recommend checking it out. I also want to thank Steve McMahon. You might know him as Steve M. from anywhere, and he's been doing our installers for years. He retired this year and is looking to really scale back the amount of work he does because he wants to spend time with his family. I don't know what that's like. So yes, we are looking for someone to take over that position. You don't have to do it alone; we're looking for several someones, and he is more than happy to help in a transitional period. Okay, so let's get into the meat of things. I want to talk about Python 3. The big news this year is the death of Python 2.7, which Plone has been using forever. The Python Software Foundation announced a final code freeze on January 1st of 2020, so that's coming up very quickly. There will be a final wrap-up release in mid-April, but it basically means there are no new bug fixes or security issues fixed after January 1st. So it's a very important date for Plone. It's a matter of survival for us as a project to get off of Python 2.7 and onto Python 3. We can't continue to release on Python 2.7 if it's not receiving security patches, and you can't continue to deploy or sell it because of that. So there's been an incredible amount of work done to get Zope, our underlying application server, working to support both Python 2 and 3. That includes things like a complete rewrite of the RestrictedPython module, and replacing ZServer with a WSGI-based publisher that's been modified to act a lot more like ZServer.
There's no middleware required, it has full support for publication events and exception views, and it should basically be a fairly transparent switchover for us. And it has a refreshed Zope Management Interface which, sure, is based on Bootstrap, but don't get too excited because it still uses frames. And this leads us into Plone 5.2, which was released in July this year. The big change is that we added Python 3 support, so Plone has that taken care of. Philip tells me the majority of the major Plone add-ons have been migrated; there are still lots of smaller ones that are in progress or need some love. I would definitely suggest checking out his talk this week to find out more about what that process entails. Archetypes is now optional. It was deprecated in Plone 5.0, but really consider this your wake-up call: it will not work in Python 3, so if you're still using it, that's going to be one of your major roadblocks to moving forward. The migration process has been tested out extensively and has been working really well for folks. Basically, it involves migrating your site to Plone 5.2 on Python 2.7, moving your Archetypes content into Dexterity, making sure all your add-ons are Python 3 compatible, and then running a ZODB upgrade script that goes through and makes the database changes necessary for it to be readable in Python 3. Then you can start Plone up using Python 3. It's proven to be pretty straightforward, it's well documented, and it's been working very well so far. So I want to take a moment and say thank you. The Python 3 porting effort was huge, but it really wasn't very visible, and it was a genuinely dangerous project for Plone. Taking too long on it would have meant that the rest of us and our clients wouldn't have had sufficient time to do the upgrade, and it would have meant taking development resources away from other important development projects. Failing would have meant the outright death of Plone as a project. Getting to this point really reinforces that Plone is in it for the long term and we aren't going away. So I want to call out Philip, Jens, and the folks at gocept for all their hard work seeing this through. Can we give them a round of applause? All right. So, the rest of Plone 5.2. We're adding the REST API to Plone 5.2. Basically this gives us a bring-your-own-frontend setup: you can hook Plone 5.2 up to Gatsby, a native app, React, Angular, or whatever you want to build. It has the standard CRUD operations, create, read, update, and delete, but beyond that we're also supporting things like breadcrumbs, navigation, workflow, commenting, control panels, and page layout: the things that make Plone more than just a basic CMS. I'll show a tiny example of what talking to that API looks like in a moment. Plone 5.2 also adds some improvements to the navigation bar. We now have drop-down menus by default, right out of the box, and they are mobile ready. There's been a lot of work to optimize these; navigation has typically been one of the slower elements of the Plone interface, so they've fixed some caching issues and some indexing things to make it a lot faster, and it's set up to be highly configurable. We've also rewritten the login process. This was something that existed in the portal_skins folder for quite some time, and when things live there, we can't actively test them. Because it's one of the most crucial things (the first thing you see when you go to your Plone site is trying to log in), having that not well tested was really dangerous for us.
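Here is that tiny REST API example, a minimal Python sketch using the requests library. The site URL, credentials, and content paths are placeholders rather than anything from the talk, and the expand parameter assumes the standard plone.restapi expansion mechanism for components such as breadcrumbs and navigation.

    import requests

    # Placeholder site URL and credentials; point these at your own Plone 5.2 instance.
    SITE = "http://localhost:8080/Plone"
    session = requests.Session()
    session.auth = ("admin", "secret")
    session.headers.update({"Accept": "application/json"})

    # Read: fetching a content object returns its fields as JSON, and the
    # expand parameter pulls in extra components like breadcrumbs and navigation.
    doc = session.get(f"{SITE}/front-page", params={"expand": "breadcrumbs,navigation"}).json()
    print(doc["@type"], doc["title"])

    # Create: POST to a container with an @type and a title adds a new object.
    created = session.post(SITE, json={"@type": "Document", "title": "Hello from the REST API"})
    new_url = created.json()["@id"]

    # Update and delete round out the CRUD operations.
    session.patch(new_url, json={"description": "Updated over the API"})
    session.delete(new_url)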
Back to that login rewrite: it was very easy for breaking changes to sneak into that old code. So we migrated it to browser views and put lots of hooks in place to make both the login and registration processes highly customizable. The great thing it does for us on the back end is that it removes the last remaining controller page templates, which really helps us clean out that old portal_skins folder. We're looking for the one way to do things, and this helps us remove that old methodology. We've made some improvements to the theme as well, Barceloneta. We're moving the static resources out of CMFPlone and into a new package called plone.staticresources. This is something we can release more often as changes come in, and it doesn't tie bug fixes to the JavaScript or the interface to a full Plone release; it's something we can iterate on quite a bit more. We're removing the old resource registries, portal_javascripts and portal_css, getting a few more things out of the ZMI. And there are some fixes to structural issues in the Barceloneta theme. We're fixing the ordering of columns, putting the main content first, then left and right, in the HTML, for scanning purposes, search engine crawling, that sort of thing. We're improving the styling options for mobile: the main content displays first on mobile and then the sidebar content, and if both portlet columns are in use, they display side by side on small screens, and left, main content, right on larger screens. We've also improved the footer portlets. Currently, adding portlets to the footer is messy and unordered; we now have a more doormat-like footer which uses the Bootstrap grid to give a much nicer layout. And we've integrated the redirection tool to facilitate URL management. This gives us aliases and redirects: you can set those at the object level, there's a control panel that lets you manage them site-wide, and there's a bulk upload as well. So that's Plone 5.2. I want to mention Guillotina as well; I'm not going to do it justice because I haven't worked with it quite as much. Guillotina started as basically a reimagining of the Zope and Plone stack and has grown into something much more, an entirely separate project with a life of its own that we're glad to have still living under the Plone Foundation umbrella. It's really meant to be a framework first, and they're trying to make sure it uses existing tooling instead of building their own: bring your own database, indexing, and tooling. They want to keep it small, modular, and easy to get into, basically a plug-and-play system based on the problems you're trying to solve. Version 5 has just been released. They're adding Postgres indexing and search, an in-memory cache and pub/sub invalidation, Python 3.7 context variables, additional mypy support, generic search query parsing, and JSON payload OpenAPI 3 validation. And I'll take their word on that. There's also Guillotina CMS, which as the name implies is adding CMS features on top of Guillotina. They've recently added through-the-web configuration of content type and workflow definitions, and support for Guillotina 5 and for Volto, which we'll get into in a little bit. Pasta Naga is something we've been working on; Albert Casado has been building this for us. He's a UX designer, and this is basically a reimagination of the entire CMS user experience. He's looking at not just the look and feel.
So this isn't just a theme. This is a reworking of the interactions we see in our sites, a top-to-bottom reworking of how Plone both looks and behaves. He's been giving us a set of design guidelines for developers to follow as we build an improved Plone, and this is something I've been begging for for years: a design document we can point to, so that when somebody is creating a new control panel or adding a new feature, we can point to it and say, no, it should work this way. I think that really helps us have a common vision for the Plone interfaces as we move forward. It's designed to work well on both mobile and desktop, and he's been working to design a set of reusable UI components that we can use throughout the site. One of the main focuses is making the current status and next steps obvious, based on the coloring of buttons and the position of things on the page. And that brings us to Volto. Volto is an implementation of the Pasta Naga UI built on the React stack. This is really taking advantage of the flexibility of the REST API and creating a front end which does things in an industry-standard way. For years, Plone has really focused on trying to make web development more pythonic, and unfortunately that's not a battle we're going to win; it's really time to admit that approach isn't going to catch on. So this is an attempt to redo things in a way that matches modern frontend development. And this is what it looks like with the Pasta Naga UI implemented on top of Plone using Volto. It's usable today, and it installs on Plone 5.2. Volto hasn't reached 100% feature parity with Plone yet, but it's getting close; things like control panels and content rules are still missing. I want to show you the Pasta Naga editor. See if this starts. There we go. Okay. This is based on ideas taken from Gutenberg, which is from WordPress, and from the Medium editor, and we're taking those ideas and working on something that's really meant to improve upon accessibility, usability, and the user experience of editing. We've simplified the editing toolbar dramatically, cutting it down to just the things you'll actually need when editing. For a long time, Plone has held the stance that inline editing is important, so that what you see is what you get: you're editing within the context of the page, not separately. That's something we're really working to get back to with this. So we can add different blocks to the page. We'll add an image block here, and we can drag to upload, or we can find images within the site. I'll just upload one here. You'll notice on the right we have a sidebar that slides in to give us access to document settings and also the settings of the image block. So we can do things like link to an existing page in the site, and we can have properties on the image, like alignment, which we'll do next: we'll change the alignment left or right, or full width. And as you can see, we can drag and drop to reorder things on the page. There are a number of new blocks coming that will allow us to do things like show content listings, tables, video embeds, and some layout blocks as well. I just want to call out the experience of working with the toolbar as it slides out here: super straightforward, everything is right in front of you, and you can immediately see where you need to be.
And it's built to provide the same experience on mobile as well. Login works the same, personal preferences like we just saw earlier are all available, we can modify document settings and personal settings here, and we can go in and actually make changes to the content as well. So we'll go back to that same page and do some edits: we'll stick a new paragraph down here, and again we have access to that same content styling toolbar, and we can save, and it works just like that. So the goals of the Volto project are really to bring best-of-breed tooling to managing frontend development. Let's stop building our own tooling to do the front end; let's use the things that are actually built to do front end rather than rolling those ourselves. We're able to easily reuse existing libraries and tools rather than building our own. So if you need to add a slider, you don't need to install a Plone plugin to do that; you can use one of the many that are available through React. Because we're doing this, we get to tap into a much larger developer base, the frontend developer base, and it's a lot easier to bring on new developers. We're really reducing the Plone learning curve by removing Plone from the learning curve, which is super awkward for me to get up here and say. I'll get into that a little more later. Timo mentioned that he's had sites where 95% of the customization work was all in the front end, and as we move into this headless CMS world, we're really getting to the point where people need to know that there's a back end that can do these things, but they don't need to know much more than that. We're expressly saying, here's exactly what we can provide you, and they can build upon that however they like, in whatever way they'd like. Customizing Volto: at the Sorrento sprint this year we had a great setup where we did a morning of training on things like the Python 3 migration and Volto, and then spent the afternoon working on a sprint topic around that. It was a really great setup, and I enjoyed it. And it was super easy. I am not a frontend developer by any means, but it was really easy to pick up the process. Basically, you run the create-volto-app script, which bootstraps a new frontend package for you; it's basically what a theming or policy package would be in the Plone world, but it extends Volto instead of replacing it, so the underlying bits can still get updated and you don't have to worry about that as much. It uses a similar override style to jbot, the just-a-bunch-of-templates framework we have in Plone: if you want to override the logo, you create the path components/theme/Logo and just drop your new SVG there, or you can get into the logo component definition itself. Volto is using, like I said, the React stack, and one of those bits is JSX, which combines the JavaScript and template bits of your page, so the HTML lives with your JavaScript. Previously this would have been a viewlet plus a page template plus a JavaScript file that needed to get integrated into a bundle, and this kind of mushes it all into one. It's something that is super easy to pick up, and it's super familiar to folks who are used to doing frontend development. Volto uses the Semantic UI framework to implement the Pasta Naga UI. It's really helpful, and it makes the HTML really easy to read.
It uses the class names as a way to inform the JavaScript, which is also super simple and super readable. Volto has been significantly faster than the existing front end because it only interacts with the REST API; there's no hitting the templating engine or any of that, so it's very quick. I'll admit that when I get up here on stage and show these screencasts of the Plone interface, at times we cut bits out just so we don't have to sit here and watch it load. One of the great things with Volto is I don't have to do that; I actually need to slow it down at points so that I can talk over it, which is a weird and wonderful problem to have. So yes, it's significantly faster. It only works with the REST API and it uses server-side rendering, so the first request is a little larger while the client downloads all the resources, but after that the client takes over and does all the processing. You're really distributing the processing load to the client, which means you wind up needing fewer resources on the back end: smaller machines, fewer ZEO clients. And because Plone and Guillotina CMS both implement a similar API, Volto currently works with both. Current status of Volto: Volto 4 was started at the Beethoven sprint this summer. They worked on getting it up to speed with the Plone UI, and it's near feature parity; like I mentioned before, it's missing content rules and a few control panels, but those are coming. They worked on internationalization. Accessibility is in place; they've set a high standard for what it will meet and added automated testing to make sure it meets that. One of the things we've seen this year is that the Gutenberg editor has drawn a lot of complaints from the WordPress community about accessibility concerns, and we're saying right out of the gate that's not going to be a problem for us. That's something I'm really proud of, and I think everyone should be. They're also working on adding some more advanced blocks. In the demo you saw text, image, and video; they're working on adding table blocks, a collection block so you can integrate what would currently be a collection in your site as just part of your page, and also a slider block. So I wanted to talk about, there used to be a slide in between here, interesting, okay. I wanted to talk about a few different things we're facing right now that Volto helps with. One of those is coupled development. Over the past few years, and before that, we've been reinventing the front end with a backend mindset, and it hasn't worked out really well for us. We've found that frontend development evolves at a much faster pace than we can move as a back end. Plone is about keeping your data safe and keeping the migrations in place. Basically, what we're trying to do is decouple the front end from the back end, so that we can continue to release the Plone back end at its own pace, in a data-safe manner, and release the front end at a much faster pace so we can keep up with current trends and security issues. One of the problems we've had with, actually, every major release of Plone is that we've had some big rewrite of the front end to fix existing issues.
But by the time we get that release out, most of it has been outdated for a while, so we're trying to find ways to get around that. This new setup can really give us a way to fix that and give us essentially a continuous release process for the front end. That plays into the Plone 6 roadmap. Basically, what we have right now is the removal of Archetypes and Python 2; those have been announced for quite a while, so it's definitely time to get rid of them. And we're looking to add the Pasta Naga UI and Volto into Plone. That brings in some challenges and open questions. Something that Alexander Limi, the co-founder of Plone, wrote about in a blog post back in 2008, and which has been debated hotly ever since, basically says that with Dexterity we can add fields and behaviors to basic pages. In that case, what is an event but a page that has a start and end date, or a news item but a page that has a lead image? If a page can act like a folder, do we need folders, or default pages for folders? Does making this change increase the flexibility of our layouts and reduce the learning curve, or is it creating backend challenges that we don't need to take on? Timo is going to be talking about this quite a bit on Thursday, I believe, so I definitely recommend checking that out; it's going to be a hotly debated topic, I think. The second question I run into is how many UIs we support moving forward. In Plone 4 we released the new Sunburst theme, and it took me a while to remember what it was even called because it's been so long. We split the UI into basically the Sunburst theme and the Classic theme and shipped with both. That was problematic, because Sunburst was the new one and everybody liked it, so that's where all the development effort went, and the Classic one fell behind quite frequently. So with a new Volto front end, how much of the classic Plone UI do we keep around? Do we work to update it so the problems we're seeing in it now are fixed, or do we put all of our effort into getting a brand new front end instead? I talked about the content types. The other thing I want to mention, something I've been thinking about a lot this past year, is: what really is Plone now? I mean, I'm getting up here and saying we can remove Plone from so much of the equation, but that's a rough statement to make at this point in time on this stage. We've really defined Plone in different ways throughout the years. It's been the mature open source Python CMS. We've often said that Plone is the community. From a foundation viewpoint, the Plone Foundation now encompasses the Plone CMS, and Zope, Guillotina, and Volto are now members. From a community viewpoint, it's the sprints, the conferences, and the collection of add-ons that we provide. From a code viewpoint, for most of the last 18 years it's been really straightforward: there was a back end that was Zope and CMF, and there was a front end that was Plone; it was the skins layer that existed on top and grew from there. Now, in Plone 5.2, we get the REST API, and we can put new things in front, like Gatsby, a mobile app, or Volto. And now we can even take out that CMS back end, stick in Guillotina and Guillotina CMS, and get essentially the same thing. Where I think Plone is right now is really highlighted in that API layer: essentially, Plone is the API contract.
It's a strongly held set of opinions about what makes a great CMS that we've developed over the last 18, 19 years. The implementations have changed over time, but the values really haven't. That's things like security, with advanced workflows and granular access control; traversal and content hierarchy; being able to build a wide range of sites for very different audiences, from a one-person shop with a limited technical background to large enterprise sites with complex business practices and lots of custom features; supporting a diverse ecosystem of add-on projects; and a big focus on accessibility, usability, and internationalization. I think Plone really is the fact that, as we grow and change, these are the things we don't compromise on. The software may be different, but the ethos of the project is still there. Thank you. I really appreciate the opportunity to stand up here and basically take credit for all of your hard work each year. I'm 100% interested in hearing your opinions on some of the things I brought up today, so feel free to email me, at me on Twitter, or stop me in the hallway, because I really want to hear what the community's pulse is on these things.
|
Eric Steele began his Plone career with Penn State University's WebLion group, where he authored several widely-used Plone products, including GloWorm and FacultyStaffDirectory. In 2009, Eric became Plone's release manager, overseeing the Plone 4 and 5 releases. By day, he works for Salesforce, building testing and release automation tools to support corporate philanthropy and employee volunteering efforts.
|
10.5446/55177 (DOI)
|
Okay, so for this talk let me introduce myself. My name is Federico Campoli, I'm Italian, I've lived in the UK for about eight years, and I've worked mostly on Postgres with some experience on MySQL. Today I will talk about the Hitchhiker's Guide to Postgres. There's nothing deeply technical in this thing; it's an introduction to Postgres, the new features of Postgres 12, and some features that may whet your appetite, maybe. It's dedicated to developers, not DBAs. I am a DBA, but I understand that the logical part is the interesting thing. So let me introduce myself. I was born in 1972, so that means I am 47 years old. Wow, time passes by. This ginger cat is my little kitten, now no longer a kitten; he's called Ozzy, from Ozzy Osbourne. I started in 1982 because of the movie Tron. I joined the Oracle DBA secret society in 2004. It was a life-changing moment for me, because I discovered that the DBA career and working with databases is exactly what I love to do. And in 2006 I discovered PostgreSQL. At that time it was 7.4. It was a massive bet for me, and nowadays I think the bet has been won massively, because Postgres is gaining momentum and momentum. And because of that I have a Postgres tattoo on my right shoulder. I am the second person in the world with this thing; the first is Devrim Gündüz, my friend. I have this thing because of him: I was very envious when I saw his and said, I want it. I work as a freelance DevOps and data engineer. This is my company; if you need any help with PostgreSQL, give me a shout and I can help you. So, table of contents. The talk can be split into parts. First we will look at the history with Don't Panic, then the Pan Galactic Gargle Blaster, the new features in Postgres 12, and the Ravenous Bugblatter Beast of Traal. This is inspired by The Hitchhiker's Guide to the Galaxy; all of these are characters or quotes from The Hitchhiker's Guide to the Galaxy. So we'll cover the JSON data type, the transaction snapshot exports, and the partitioning, which is one of the improvements in Postgres 12: they changed things and made it very, very effective compared to the previous version, the 11. So let's start with Don't Panic. PostgreSQL starts around 1997; it goes back to Berkeley and Professor Michael Stonebraker. Michael Stonebraker is a legend. Now he works with EnterpriseDB, which is one of the many companies in the Postgres ecosystem, and they were very lucky to bring him on board, because he knows everything about relational databases. His writings are still valid and everything has been built on them. So the story of Postgres starts before Postgres, with Ingres. I think Ingres is still around; it's a relational database, and it was built by Professor Stonebraker. Then he realized that Ingres wasn't so good, so he decided to go beyond Ingres and created post-Ingres, and that's the name: Postgres. It's managed by the PostgreSQL Global Development Group, a group of people, very democratic, very organized. The idea behind the development group is that there's no single company taking over the project, so nobody can buy Postgres, differently from other open source databases. And every year there's a new major version that adds new and new features. The latest is version 12, and it was released one day after my birthday, so it was a belated birthday present for me. And it's amazing. This is the list of the versions. As you see, each is supported on a best-effort basis for at least five years.
So at the moment we have 9.4 going end of life; it will become unsupported in February 2020. So if you have a 9.4, consider upgrading as soon as possible, because every new minor version adds fixes and security patches, and it's not nice to stay unsupported. The minor version releases normally happen four times per year, so every quarter there's a new release, and this is the schedule. If there's anything very, very serious, and I don't remember it ever happening, there can be a special minor release for fixing it, but normally those are the dates. So if you have a Postgres in production, mark the 7th of November for the next update. The roadmap for the development and the schedule is at this URL; you can find everything you need on the postgresql.org website. So, Pan Galactic Gargle Blaster. This is a pastry from my own town, Napoli; it's called zeppola, and it's delicious, like the Pan Galactic Gargle Blaster. What does Postgres offer? What are the Postgres features? Postgres is ACID compliant, completely ACID compliant, differently from some other databases. It's scalable with MVCC; MVCC is the way Postgres manages concurrency, the concurrent access in read and write. The official documentation says it gives you a better approach than row locking, and that's true, but it has some drawbacks, some cons in the whole thing. The vacuum is one of those necessary evils of working with MVCC. Things will change, probably; I was at a Postgres conference in Canada a couple of years ago where they talked about adding rollback segments to Postgres. I don't know when it will happen, but I think it will happen at some point. There are tablespaces, so you can distribute your data logically across physical devices. It runs on almost any Unix: Linux, AIX, HP-UX, FreeBSD, of course, since it was started on BSD. And also on Windows; the port was done by Magnus Hagander back in version 8.0, we are talking about 2004, 2005. That was a massive improvement; before that, the only way to run Postgres on Windows was to use Cygwin, which was not exactly practical performance-wise. And it supports foreign tables. The foreign tables are something amazing, in my opinion. They are similar to the Oracle DB link or hsodbc: you can map local tables, special local tables, to external data sources. So you can attach to anything: another Postgres server, a SQL Server, or Kafka, or even Twitter. There was a foreign data wrapper for Twitter; it's no longer working because Twitter changed the API, but before they changed it, it was possible to query the Twitter stream directly from Postgres. It was amazing. I'll sketch what the foreign table setup looks like in a moment. Procedural languages: you can choose whatever you want, literally. Obviously PL/pgSQL is the most mature and best documented. There's PL/Python, so you can write procedures in Python, PL/Perl, and there's also a procedural language for R for statistical work. So very, very rich. And support for NoSQL data, like hstore, a key/value store, and JSON. JSON is something we will see later in this presentation. The license is different because it's a specific Postgres license; it's called the PostgreSQL License, and it's very, very similar to the BSD license. That opened the system to being forked, made proprietary, and released under different names. For example, EnterpriseDB, which I mentioned before, offers the Postgres Plus version, which is closed source and offers an Oracle compatibility layer, so you can use Postgres instead of Oracle and save money, probably.
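Here is that foreign-table sketch, wiring up another Postgres server through postgres_fdw from Python. The host names, database names, credentials, and table names below are made-up placeholders; the same pattern applies to other foreign data wrappers.

    import psycopg2

    conn = psycopg2.connect("dbname=appdb user=app_user")  # placeholder DSN
    conn.autocommit = True
    cur = conn.cursor()

    # Load the foreign data wrapper and describe the remote server.
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
    cur.execute("""
        CREATE SERVER IF NOT EXISTS other_pg
            FOREIGN DATA WRAPPER postgres_fdw
            OPTIONS (host 'other-host.example.com', dbname 'otherdb', port '5432')
    """)
    cur.execute("""
        CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER SERVER other_pg
            OPTIONS (user 'remote_user', password 'remote_secret')
    """)

    # Map one remote table locally and query it as if it were our own.
    cur.execute("IMPORT FOREIGN SCHEMA public LIMIT TO (remote_orders) FROM SERVER other_pg INTO public")
    cur.execute("SELECT count(*) FROM remote_orders")
    print(cur.fetchone()[0])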
It's developed in C, and using external libraries you can build your own functions. It supports an extension system; that was a major change with 9.2. The extension system makes it very, very simple to add new stuff into the database, like new data types. For example PostGIS, the geographical extension for Postgres, adds the geography and geometry data types with just CREATE EXTENSION postgis, and you don't need to do anything else: the system takes care of creating the entire set of objects. It supports partitioning, and version 12 is amazing at this level. Parallel execution: it's one of the few open source databases that allow queries to run in parallel. Logical decoding, not exactly the best. I love logical decoding; they started it with 9.4, the version that is end of life now. The problem with logical decoding is that it didn't evolve much from 9.4, and there's still one major missing feature: DDL is not decoded. So if you alter the schema on the primary, you will break the replication on the secondary. There are tools that handle this automatically, but it's not native inside the Postgres core, so I hope at some point it will appear, because it's something I really like to use and I find difficult to use at the moment. Postgres 12, the latest release, added interesting improvements to the entire system. Normally, when a new major version comes out, it brings some performance improvement because the code becomes more flexible and more efficient. In this case we have partitioning improvements, with global foreign keys on the partitions, which was something missing in 11, where they were supported in one direction only, and support for thousands of partitions. Because the architecture behind partitioning was still based on inheritance, it created a lot of problems because of the locks taken over the different tables, and it limited the practical number of partitions to hundreds; Postgres 12 improved this and raises it to thousands of managed partitions. The B-tree indexes improved as well: they become smaller and more compact, they handle duplicate entries more efficiently, and they're more efficient on the vacuum side, so less prone to becoming bloated. The con is that if you are migrating from a previous version, there's a risk that an index cannot be rebuilt, because there's now a limit on the key size of the index. It's also possible to create statistics defining the correlation across multiple columns, so the optimizer is able to build more efficient plans, being aware that some specific data is correlated to other data inside the same table. The CTE inlining: anybody who has used the WITH statement knows that up to version 11 it wasn't just syntactic sugar, it had an impressive impact on performance, because the WITH required materialization and the materialized data was then used by the outer query. With version 12 this is no longer the case: by default everything is inlined, and the queries work efficiently. If you want the old behavior, you need to add the MATERIALIZED clause to the WITH clause. So improvements in this area, too, make life easier for developers. Prepared plan control: this is another thing I discovered on my own skin.
When you use prepared statements, at some point the performance can degrade, because the planner can fall back to a generic execution plan; it says, okay, I'm repeating always the same thing, let's use something that doesn't need replanning every time. Now you can control whether to use the generic or the custom plan, to keep the performance constant with your prepared statements. Just-in-time compilation: the Achilles' heel of the execution plan is when you have complex queries, where the time spent calculating the execution plan and processing it can be massively inefficient. So the just-in-time compilation, for large and complex queries, compiles parts of the execution plan to speed up queries. This is designed explicitly for analytics queries. Checksum control: this is a thing that was added, if I remember correctly, in 9.3. Checksums were something amazing, but also very frustrating to use. You could add checksums on the blocks, to prevent silent block corruption, only when you initialized your data area, and before version 12 it wasn't possible to turn this feature off and on. Well, there is still around an extension, made by Credativ if I remember correctly, called pg_checksums, so you can shut down the database and disable or re-enable the checksums. Postgres 12 integrates this functionality; it's still necessary to stop the database to change the checksum setting, and obviously when you enable checksums it takes a lot of time, because a checksum needs to be calculated for each block. But in the future this will become an online process. When, I have no idea; maybe version 13. REINDEX CONCURRENTLY: this is my favorite feature. REINDEX is a blocking procedure, literally: it blocks the writes for sure, so you cannot write your data into the table, and it may block your reads if your query is using the index being rebuilt. I've written, countless times, ways to work around this behavior, building new indices, swapping them and playing with SQL. REINDEX CONCURRENTLY allows reindexing with minimal locks: only when the new index is validated and swapped is an exclusive lock required, which normally takes a few seconds or milliseconds, depending on the speed of your machine. Behind Postgres we have an amazing community; I think it's one of the best communities I have ever worked with. Actually, I don't tend to join open source communities: they start well and then they become dysfunctional, let's put it that way. Postgres, incredibly, keeps improving, probably because there is a very structured way of working. We have a group of people that manages the overall direction, a group of committers that are allowed to commit into the Postgres Git repository, and a system for submitting patches. It doesn't even have a bug tracker; we rely on mailing lists, and it works incredibly well. The community works on different channels. We have the mailing lists, which are the historical one, and IRC on Freenode on the #postgresql channel. For those who like Slack, there is a Slack workspace where several channels have been created; I think there's also a Python Postgres channel. If you would like to join, it would be amazing. There is also a Telegram chat. I founded this chat about 12 months ago; now we are 400 people talking about Postgres. If you have Telegram, please join. This is the English-speaking one, the one with the underscore.
If you go without the underscore, you will join the Russian-speaking community. The Russians arrived before me, they were the first, so they got pgsql and I got pg_sql. If you join those communities, please be aware that there is a code of conduct; we are striving to avoid harassment or any bad behavior within the community. Cool PostgreSQL features: I focus on development stuff, not DBA stuff. So, JSON and JSONB, which we will see in the next section, transaction snapshot export, which is another thing we will see, and table inheritance and partitioning, so how to partition our data. So let's start with the Ravenous Bugblatter Beast of Traal, also known as the data types within Postgres. Like any database system, we have different kinds of data, the classic ones, characters, numbers, but we also have very exotic types in Postgres. Range types: we can define a range of numbers or a range of dates and use that as markers for storing the data. Geometric data types: points, circles, polygons, whatever. Network address data types, XML data types; I used the XML type for a replication system written in Python and it worked incredibly well. I didn't know there was PgQ for doing the same, but it was an interesting experiment. The JSON data type and the hstore type. Now let me talk about these two guys. The author of the JSON and hstore support is the same person, Oleg Bartunov, a genius. Initially he made the hstore data type available from the very early versions of Postgres as an extension, so an external library, text formatted and parsed in memory. Not exactly efficient, because the external library churned the CPU, and then he decided to create a new data type, so JSONB was created as an evolution of hstore. hstore is still available, but the suggestion is to use JSONB because it's more performant, it doesn't require anything to be installed, and it has some interesting stuff you can use. So, JSON, JavaScript Object Notation, you know it better than me. It's supported in Postgres in two flavors. We have JSON, which is basically text, parsed and validated when the data is accessed or written, so you just get the text written inside your table. JSONB is a binary representation of the JSON, so it's validated at insert and update and there's no overhead during reads. Also, we have helper functions, and here are a couple of examples. We can take a row and transform it into JSON with row_to_json; the keys are generated automatically, not exactly the best, but you can transform your data into JSON. And you can also do the reverse: with json_each, and in the case of JSONB jsonb_each, we can take the JSON and transform it into a record set. This is the way I replay the data in pg_chameleon: read the JSON and transform everything on the fly; the little snippet below shows the idea. So when you choose JSON, consider this. JSON is parsed and validated on the fly, so the performance can be a problem if you manipulate a lot of data. JSONB is validated and transformed at insert and update; performance is better, but the storage can be bigger than JSON, because the binary representation may require more space inside the blocks. JSONB also allows a single index on the field, so you don't need to index every single key inside your JSON: you create one index on the JSONB field and you are done. When you add new keys, they get indexed immediately.
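A minimal sketch of those helpers (row_to_json, jsonb_each, and the ->> operator), run through psycopg2; the connection string and the sample document are placeholders, not anything from the talk.

    import json
    import psycopg2

    conn = psycopg2.connect("dbname=appdb user=app_user")  # placeholder DSN
    cur = conn.cursor()

    # row_to_json: turn a whole row into a JSON document, keys taken from the column names.
    cur.execute("SELECT row_to_json(t) FROM (SELECT 1 AS id, 'hello' AS note) AS t")
    print(cur.fetchone()[0])          # -> {'id': 1, 'note': 'hello'}

    # jsonb_each: explode a JSONB document into a set of key/value rows.
    doc = json.dumps({"id": 1, "note": "hello", "tags": ["a", "b"]})
    cur.execute("SELECT key, value FROM jsonb_each(%s::jsonb)", (doc,))
    for key, value in cur.fetchall():
        print(key, value)

    # The ->> operator extracts a single key as text.
    cur.execute("SELECT %s::jsonb ->> 'note'", (doc,))
    print(cur.fetchone()[0])          # -> hello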
The catch with that single JSONB index is that it's a special kind of index. It's not a B-tree; it's called a GIN, a generalized inverted index. So you can end up with an index several times bigger than your original table, and you have to plan and design this carefully. If you don't use the JSON functions, just use text: there's no point in using JSON, because it's just overhead for the database, parsing something you are not really using. I will show you why this is important. So I created a fairly large JSON document using a JSON generator, and I created three tables: one with JSON, one with JSONB and one with text. So this is the drop table, this is the create table with the JSON element as json. And doing an insert from the generate_series function, which is very useful for generating a large record set (it just returns a set of rows), in this case it's just 100,000 rows. So, 27 seconds for storing this. I do the same for the text table, loading the JSON element cast to text, and I do the same for JSONB, casting to jsonb on the fly. The time for storing is quite similar, just a little bigger for text. But the interesting thing we see at this level is that the sizing can be different. If you look at the JSON table and the text table, they use basically the same space; the JSONB table uses more space, because the JSONB representation is bigger. The JSON I stored is not so big, so it triggered, just for JSONB, the TOAST subroutines, which caused the data to be stored out of line or stored in a different way. If you increase the document size you can end up with similar sizes, because the database will trigger these subroutines for everything; it depends a lot on the JSON. But in this case the difference between the JSONB table and the other two tables is quite impressive. This is the select, just reading stupidly, select star from the table: from t_text it takes about 18 milliseconds just to access the data this way. If I read the same way from JSON, the time required is slightly bigger, because even if you don't actually return the data (it's an explain, so it returns nothing except the execution plan), you need to parse and validate every single row; that's the extra CPU time added to the execution. JSONB is basically similar to text. Now let's add more interesting stuff and select one element of the JSON. If we select the element of JSON from the text field, we end up with five seconds and something. This is the way you can access your data: because the JSON produced by the generator is an array containing an object, you specify the array position and then the key. So, five seconds. If I read from the JSON column it's slightly less, about one second faster, because the JSON doesn't need the cast required by the text: accessing the text and transforming it into JSON is CPU cost which adds time to the operation. But the good thing is that the same lookup on JSONB is incredibly fast compared to the others, because there's no need for parsing the JSON in memory: the JSON is already parsed and stored in your blocks. So what to use? Whatever fits you; it depends a lot on your data model. As a rule of thumb, plain JSON can be a better idea for loading and building up data, because it's more efficient in terms of CPU on write; if you need to load a lot of data, JSON can be the winner there. But the indexing capabilities are available only on JSONB.
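The kind of setup just described looks roughly like this; the table names, the row count, and the sample payload are illustrative, and the GIN index at the end is the single JSONB index mentioned earlier.

    import psycopg2

    conn = psycopg2.connect("dbname=appdb user=app_user")  # placeholder DSN
    cur = conn.cursor()

    # One payload, three storage strategies: json, jsonb and plain text.
    sample = '{"name": "test", "tags": ["a", "b"], "address": {"city": "Ferrara"}}'
    cur.execute("CREATE TABLE t_json  (id serial PRIMARY KEY, doc json)")
    cur.execute("CREATE TABLE t_jsonb (id serial PRIMARY KEY, doc jsonb)")
    cur.execute("CREATE TABLE t_text  (id serial PRIMARY KEY, doc text)")

    # generate_series() produces the 100,000 rows used in the comparison.
    cur.execute("INSERT INTO t_json  (doc) SELECT %s::json  FROM generate_series(1, 100000)", (sample,))
    cur.execute("INSERT INTO t_jsonb (doc) SELECT %s::jsonb FROM generate_series(1, 100000)", (sample,))
    cur.execute("INSERT INTO t_text  (doc) SELECT %s        FROM generate_series(1, 100000)", (sample,))

    # Compare the on-disk sizes of the three tables.
    for table in ("t_json", "t_jsonb", "t_text"):
        cur.execute("SELECT pg_size_pretty(pg_total_relation_size(%s::regclass))", (table,))
        print(table, cur.fetchone()[0])

    # Element access: the ->> operator on the jsonb column avoids any re-parsing.
    cur.execute("SELECT count(*) FROM t_jsonb WHERE doc ->> 'name' = 'test'")
    print(cur.fetchone()[0])

    # A single GIN index covers every key in the jsonb documents.
    cur.execute("CREATE INDEX idx_t_jsonb_doc ON t_jsonb USING gin (doc)")
    conn.commit()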
Postgres 12 also adds the SQL/JSON path language, so you can run the same query I showed before using that function, accessing the JSON with a dedicated path syntax. The time for accessing the data is similar to the JSONB access, so there's no big deal in changing to it; it's probably just clearer when you write complex queries. Now, the next step: time is an illusion, lunchtime doubly so. This is me as the fourth Doctor, in Napoli, at the TARDIS console. Transaction snapshots. This is something amazing; I think it has been around since version 9.2, and it became particularly useful with the different tools in the Postgres ecosystem. So what is a snapshot? The snapshot is a consistent image of the committed transactions, taken when a query accesses the database for the first time. This information can be exported to other sessions: when it is exported with an identifier, another session can import the snapshot and see the same consistent image as the original session. So it's possible to use it for parallel execution: you start one session, export the snapshot, import that value inside other sessions, and you read the data consistently with the original session. And this is what pg_dump, the backup tool for Postgres, does when it runs the backup in parallel. Simple example: let's create a table with a primary key, id, which is serial, so auto-incrementing, and a text field, t_content. We put just an MD5 hash into this field using the usual generate_series: we generate the counter, from the counter we generate the MD5 and store it, just 200 rows for our example. So in this session we start a transaction with the isolation level REPEATABLE READ. If you are curious about what the isolation levels mean and what their features and limitations are, I've written a blog post; the URL is pgdba.org, a 2019 post on transactions, which explains how this works and how it can affect your queries. But it's important, when you export snapshots, that REPEATABLE READ is set, because REPEATABLE READ keeps the snapshot and never discards it while the transaction is open. Then we run pg_export_snapshot(), and this is our identifier; it may be different depending on the database transaction progression. So we select from our table: 200 rows. In a different session, we delete all the rows, delete from the table, and there are no more rows inside it. Then we start again: isolation level REPEATABLE READ, SET TRANSACTION SNAPSHOT to the one we exported in the other session, and the data, voila, is back. So it's very efficient, and it's very useful if you need to read data consistently across multiple sessions; I'll show a small sketch of this in a moment. Mostly harmless: Deadpool. I love him; my PC is called Deadpool. Table inheritance. Table inheritance is something that has been in Postgres since the beginning. Postgres is referred to as an object-relational database system, not just a relational database system, because it implements object-oriented ideas within the database structure, and table inheritance is one of them: you can create relationships between one parent table and child tables, which share the same structure or part of the same structure. You can create a child table with all the columns of the parent plus other columns.
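Here is that small sketch of the snapshot export between two sessions, using psycopg2; the DSN and the table name are placeholders.

    import psycopg2
    from psycopg2.extensions import ISOLATION_LEVEL_REPEATABLE_READ

    DSN = "dbname=appdb user=app_user"  # placeholder

    # Session one: open a REPEATABLE READ transaction and export its snapshot.
    conn_a = psycopg2.connect(DSN)
    conn_a.set_session(isolation_level=ISOLATION_LEVEL_REPEATABLE_READ)
    cur_a = conn_a.cursor()
    cur_a.execute("SELECT pg_export_snapshot()")
    snapshot_id = cur_a.fetchone()[0]

    # Session two: import the snapshot as the first statement of its own
    # REPEATABLE READ transaction and see exactly the same data, even if
    # other sessions delete rows in the meantime.
    conn_b = psycopg2.connect(DSN)
    conn_b.set_session(isolation_level=ISOLATION_LEVEL_REPEATABLE_READ)
    cur_b = conn_b.cursor()
    cur_b.execute("SET TRANSACTION SNAPSHOT %s", (snapshot_id,))
    cur_b.execute("SELECT count(*) FROM t_data")
    print(cur_b.fetchone()[0])

    # The exported snapshot stays importable only while session one's
    # transaction is still open.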
That inheritance mechanism was the way, back in the days before version 10, that partitioning was done in Postgres: we created a set of inherited tables and then we managed the data across the partitions. It worked, in a way; it wasn't simple, but it worked. The problem is that there's no physical storage shared across the tables: the parent table is completely separate from the child tables, so it's not possible to create a unique constraint enforced across the entire inheritance tree, and therefore it wasn't possible to create foreign keys referring to the inherited tables. It was possible to work around it in some way, but it was complicated. There was no built-in mechanism for distributing the data across the child tables: inserting into the parent table stores the data in the parent table, unless you create a trigger which redistributes the data correctly into your partitions. And pruning the partitions, so excluding partitions when selecting, was possible only via check constraints: you impose a constraint marking the table as holding a specific set of data, and the optimizer is then able to exclude the other partitions from the read. In the Postgres jargon this is called constraint exclusion. Partitioning in version 12 is amazing. They added declarative partitioning: it still relies on inheritance, but it improves the system. It's possible to have global foreign keys in both directions, from the partitioned table and from the referring table. It supports the classic partitioning schemes: range partitioning, where you define a range of values that belong in a specific partition; list partitioning, where you specify the exact values to be stored inside each partition; and hash partitioning, where you define a modulus for matching the data that should be inside each partition. So how do we define the partitioning? It's very simple. We define a table and declare it as partitioned by something; in this case it's just PARTITION BY RANGE on id, our primary key. At this point we cannot insert data; we will get an error, because there are no partitions defined yet. So we need to define the partitions, and this is how: we create a new table as a partition of the parent, for values from 0 to 1,000 (the upper limit is not inclusive), from 1,000 to 2,000, and so on. And we can also create a default partition, so everything that doesn't fit elsewhere goes in there. Another cool thing about this partitioning is that if you update the partitioning key, the row is moved across partitions, so you don't need to worry about managing your data; Postgres takes care of it automatically. So this is just an insert, same generate_series, and this is the size of the partitions. If you notice, the parent partitioned table is empty, because no data lives there, and the partitions defined for specific ranges all have the same size, except the default partition, which takes most of the data, because I'm inserting up to 50,000 but the maximum limit for the defined partitions is 3,000, well, 2,999. There's a small sketch of these statements below. And there are some limitations, obviously, because the idea behind the partitioning is different. We cannot have check or not-null constraints that are not inherited; everything has to be completely consistent with the parent table. The partitioned table itself does not hold any data, and you cannot truncate just the partitioned table on its own; you will get an error.
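The declarative partitioning walkthrough above, as a psycopg2 sketch; the table names, boundaries, and row counts are illustrative.

    import psycopg2

    conn = psycopg2.connect("dbname=appdb user=app_user")  # placeholder DSN
    cur = conn.cursor()

    # Parent table: declares the partitioning strategy but stores no rows itself.
    cur.execute("""
        CREATE TABLE t_part (
            id      bigint PRIMARY KEY,
            payload text
        ) PARTITION BY RANGE (id)
    """)

    # Range partitions: the upper bound is exclusive.
    cur.execute("CREATE TABLE t_part_0    PARTITION OF t_part FOR VALUES FROM (0)    TO (1000)")
    cur.execute("CREATE TABLE t_part_1000 PARTITION OF t_part FOR VALUES FROM (1000) TO (2000)")
    cur.execute("CREATE TABLE t_part_2000 PARTITION OF t_part FOR VALUES FROM (2000) TO (3000)")
    cur.execute("CREATE TABLE t_part_default PARTITION OF t_part DEFAULT")

    # Rows are routed to the right partition automatically; anything >= 3000
    # lands in the default partition.
    cur.execute("""
        INSERT INTO t_part (id, payload)
        SELECT i, md5(i::text) FROM generate_series(0, 49999) AS i
    """)

    # Filtering on the partition key lets the planner skip the other partitions.
    cur.execute("EXPLAIN SELECT count(*) FROM t_part WHERE id BETWEEN 1000 AND 1999")
    for line, in cur.fetchall():
        print(line)
    conn.commit()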
There are a few more limitations: you cannot have partitions with more fields than the parent partitioned table, differently from plain inheritance, where you can add extra fields to the inherited set. And it's not possible to drop a not-null constraint on a partition's column if the constraint is present in the parent table; again, everything has to be completely consistent. So, to wrap up: I tried to give you an idea of what Postgres is and how amazing it is, and I hope you will join the community, join the conversation, and contribute to growing this system. And use it, please; there's no charge for that, it's absolutely free. So this is the license, and these are my contacts if you want to take a picture or a screenshot: this is my blog, my Twitter, my GitHub, where I have some projects, mostly in Python (I love Python), and this is my LinkedIn. And that's all, folks. Any questions? Please, keep them basic, I am an electrician. So now that partitioning is so easy and you can do it dynamically, how do you figure out the right size for your partitions? That's difficult, because you need to find out how your data is supposed to be distributed. The rule of thumb, I think, is to try to distribute the data across partitions evenly. If you are partitioning by date, for example, one partition may be a month or a year, depending on the amount of data. But you can also reorganize your data in some way after you decide your layout; there's no automatic process at the moment, as far as I know, but that could also be implemented. So I would start with a conservative approach and not over-engineer this thing. So I guess you can query the parent table, and it will query all of the partitioned tables under it. Does that mean the column you're partitioning on becomes like an index, so if you're querying, you probably need to filter based on it? No, because the system is intelligent enough to do that for you. If you filter by the partitioning key, the system will exclude the partitions that do not match your WHERE condition. Okay, so if you don't have that in your query, though, it's going to query all of the partitions? Yes, it will access all the partitions. You can still have indexes on the parent table, and they will apply to all the partitioned tables. But it's managing all those separate indexes separately, because they are separate? They are different storage, so yes: when you define an index on the parent table, the indices are created on the child tables as well. Another question? You mentioned there are several fancy data types, apart from text and all the rest. What's your experience with the blob data type? Is it a good idea to put images of several megabytes into Postgres? I wouldn't do that. Well, I have worked with binary data, and I did it for small tables; it's very useful because you back up your entire database and you're done. But if you want to use Postgres as a file system, think twice. The biggest problem I see is how the data can affect performance. Normally, if the binary data is big enough, you will get the out-of-line storage called TOAST, which prevents bloat from the binary data, so you can safely store it in your table; but it has to be big, at least one third of the block if I remember correctly, so at least 3K. But to be honest, I would use GlusterFS and store just the reference in the database.
If you need to distribute your images on different applications, I will use a distributed file system and store the references. Other question? Okay, thanks for the request. Thank you very much.
|
PostgreSQL is one of the finest database systems available. The talk will cover the history, the basic concepts of the PostgreSQL architecture and how the community behind "the most advanced open source database" works. The talk will also explore the newest features and the future of the project. The audience will also learn about the cool features coming with release 12 that make PostgreSQL a powerful ally for building applications at scale.
|
10.5446/55178 (DOI)
|
[unintelligible] ... the Python library PyPDF, because we analyse the documents that are inside this CMS to extract the metadata, information that I will explain later. On top of this CMS we develop the customer features and the customer requirements. First of all, the static dumper; I will explain what the static dumper is. [unintelligible] ... all of the fundamentals, the authentication, authorization and workflow that Plone already provides. The cons: SharePoint is everywhere, MSF has SharePoint as an internal standard, and when we say, okay, we use Plone, the reaction is: what is Plone? We don't know what Plone is, why Plone, and we have to explain a lot of things every time. So Plone is not as popular as other systems, and this is something that [unintelligible] ... The app is based on three pillars. It is implemented with jQuery and Mustache; we use a custom version of PDF.js, the Mozilla PDF reader that is included in browsers; and then filer.js, which is a POSIX-like abstraction, so you can interact with the stored data as if it were a POSIX file system. And what we have implemented is, we can say, a powerful search, because it's a search that works offline, in the portable app without internet connectivity, so it is built entirely client-side.
It's a search that does an almost full text search, a facet that search implements the operators, and permits a multiple download of documents, so you can search and then create a package that they can distribute, so to other people. And also ranking everything that works offline in the client applications. Syncs, OK, so when the editor dumps a new static website, the user that have a USB key that has been distributed can plug the USB key or the desktop application and get the updates of just the data that have been changed, so they don't have to re-download again the whole website, so they just download what has been changed, because the bandwidth is very limited, so if a policy has been updated because there was a mistake, they just plug, there is a pop-up that here I cannot show, let's say there are updates, OK, please go. Another important thing is that they can filter what they need, because sometimes there is an emergency, they have to be very fast to get the updates, so they can decide just to download some topics of the whole website, so they have to download instead of 1 gigabyte, just few hundreds of megabytes to have the information that they need. We do also integrity checks, because the connectivity is not stable, so we have to check that, ensure that everything that we declare that has been downloaded is there, because if you go offline and then you go in the fields and you discover that the file is corrupted, it's a big issue. So when we do the synchronization, we have also to do some integrity checks that are not so easy just with the client side. And do size and timing estimates during a sync, we have to say the person, OK, you have to wait one hour to get all the data in just a few seconds. OK, smart navigation, more or less what I said before, we split all the books for a cross navigation. We have customized the PDF reader, so that when they click on a link, they are directly placed on the pages that they need, there is some visual information of the part of the PDF that is interesting for them for that topic. And then there are small features like who is sharing a bookmarking that in all these situations are not so easy to manage. Pro and cons. OK, the pro, we did a very hard separation of the front-end side to the presentation side with the content side of the CMS. The same architecture can be reused for the website, the desktop application, but also, theoretically, with a mobile application. They didn't request that as also a mobile application, such iOS or Android, but technically is an architecture that can work also in this context. There is no requirement for plon competencies for working on the app, because it is a pure front-end application. Cons. We have many limits for the offline, on the HTML file, Chromium, JQuery. So we discover that every time that we introduce a feature or every time that the volumes increase, we discover new features that are not so well known. So we have to discover while working on it, for example, the amount of memory that requires parsing a JSON, the amount of time required to perform an operation using the browser, and so on. Then we have also other tools, like the integration with NAS, as I said. There is also a user dashboard. And finally, data collector tools. This is another different thing. A simple, what is in a classical context, a simple thing, is a contact form. It is not easy when we have these three different contexts. 
The contact form should work on the NAS, and what we do if the NAS is offline, how we get the content information, should work also in some way when they plug the USB key and they have to submit an information, an issue. And so we have implemented a form that can be written in the app, saved, and in case transmitted via other channels, such as via email, or they can try in a different moment to submit the form. OK. Some lesson learned. On the other side, there is the sentence that my colleague Michelangelo, that is on the front-end side, while me are more on the back-end side, when I say, OK, we have to do this, it's impossible. OK. Let's try a way to solve this issue and provide a solution. And we have learned to think that use case is offline first. So when the customer request has something, OK, let's start from the most difficult use case, that is the offline, what should happen when you are offline. Then we have learned to be more smart when finding solutions. We know a lot of many complex technologies, elastic search and so on. Why we have to work with just a few JSON, a SQLite database, and a few JavaScript, and we have to make it very fast and provide something that is pretty good. Maybe it is not the top, but it's good. So we have to fine-tune any solution. OK, let's remove this, let's simplify this part, and then we will have a nice result. The monolithic approach is bad. We have learned from this project that when you separate the front-end, so we start in 2012 with this approach, and we found that it works, so separating everything while we were used to work everything in PLON. Finally, the test table is very expensive, because we have to test the same feature on offline, on NAS, and so on. In the future, we would like to change in the near future to PLON 5, now it's still based on PLON 4, introduce other technologies, such as Electron, use more Node.js, and for sure, Gatsby. It's an hot topic now in the PLON community to make a static website using Gatsby. So we will probably have an angular react for the web app technologies and more serverless architecture for the backoffice. Lenem.
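Going back to the sync and the integrity checks for a moment, a minimal sketch of the kind of manifest an editor-side dump could produce is shown below. It is an illustration only, not the project's actual code, and the folder and file names are invented:

# Build a manifest of the static dump so an offline client can verify
# integrity and estimate download size before syncing.
import hashlib
import json
import os

def build_manifest(dump_root):
    manifest = {}
    for folder, _dirs, files in os.walk(dump_root):
        for name in files:
            path = os.path.join(folder, name)
            digest = hashlib.sha256()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    digest.update(chunk)
            rel = os.path.relpath(path, dump_root)
            manifest[rel] = {
                "sha256": digest.hexdigest(),
                "size": os.path.getsize(path),
            }
    return manifest

if __name__ == "__main__":
    data = build_manifest("./static-dump")  # hypothetical dump folder
    with open("manifest.json", "w") as fh:
        json.dump(data, fh, indent=2)

With the file sizes in the manifest, the client can also give the size and timing estimates mentioned above before it starts downloading, and re-check the hashes after a flaky connection drops.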
|
A long-standing project that, close to other technologies, has Plone in the heart as backoffice for content management. The customer is OCG, one of the five operational centers of Médecins Sans Frontières (MSF) Addressing OCG-specific humanitarian-logistics knowledge, the Logistics Referential online/offline platform aims to efficiently organize and classify concepts, objects and their relationships, thereby providing simplified, prompt and accurate access and retrieval to any relevant and required logistics knowledge. The knowledge base is built on top of more than 1000 English books, most of them translated in French, Spanish and Arabic. The offline version is updated whenever an internet connection is available and provides the same contents and functionalities of the online version. During the talk I will share a description of the main features, the choices about the technologies involved (NW.js, Plone, Amazon Web Services), success and failures, lessons learned and objectives for the future.
|
10.5446/55180 (DOI)
|
I apologize for my bad English but in 20 minutes you will be get used to or not. First presentation of E-mail. It's a public company I created by the Belgium government eight years ago. At the time the government promotes open source. It was no more the case since eight years and now in the new government they decide to newly promote open source. So eight years later they call it strategy. The problem is in E-mail we apply that strategy since 15 years. So because it's really a strategy, a long-term strategy and the majority of products and our project needs those 15 years. Not four or five years which is the government strategic plan. So yes that's fine for us. So now we will contribute maybe to government software in Wallonia so that's fine. And well and as I wrote in the slides the strategy mission is for me to promote and foster the mutualizing difficult to 15 years and still not able to say it. Of organizational solutions and of IT products and services for the local authorities of Wallonia. So it's concrete stuff behind it. Not just something written in the slide. We are as I told a public service so we also are in kind of IT department of all the towns in in South Belgium and the Central Procurement Agency. So we also do public tenders. We are not a private company but we are working with a private company because we are also seeing some resources, development resources towards those companies. And of course we also develop software internally under free license since the beginning. So our main activity is to host software as a service application for let's say use case specific to towns, e-government and some business which is linked to the law. So the law are very specific for each country so it requires to provide a specific application for each country. It's very hard to share ego application amongst some countries. Of course the European Commission are doing great jobs there for standardizing all the processes between all member countries. So there is some opportunities to work also in this context. So mainly we provide collaborative horizontal applications and of course they are tightly linked to the organization and the legislation. So 15 years afterwards we can say that the plan was the best solution to adopt in this context because of course there are frequent adjustments because the laws are changing regularly so you must adapt your application. And it's very tough for public services because they are really organized and must of course follow the legislation tightly. Of course there are issues in the problem. It is very important in the context. And so some figures about MEO. I know it's very impressive but the most impressive thing is really the 300 members there are mostly using Plone somewhere. And we deeply we are deeply inside the business of this organization. That's not simply a website presentation of a website or some contents etc. Toward the external no it's really business logic inside or the processes of the governance of the cities are managed with our applications and the sensitive use case like urbanism management and so on. So this represents 12 package applications really package. There's not one package for one town that's one package for all the towns. So cities of 300,000 inhabitants and cities with 2,000 inhabitants use the same package. And of course thanks to Plone we can cope with that kind of difference use case. Seven of these applications are written in Plone. One in Django and one in Odo. 
So the strategies to stay Pythonic for the teams that better. We have 15 developers so I try to keep a layer between developers for communication, library and so on. A basic layer we're in the same language so I don't want to have.NET or Java developers with several teams. We will have some communication problems and perhaps some religions problem. So for now 710 since running that's quite a bunch. So partly industrialization with Jenkins, Puppet and so on and of course Docker this kind of technology of course because I only two person to manage the infrastructure of those 700 instances and there is no local Plone in a town. All is in our I think 80 a service at OVH. 1000 inscription per year for our workshops because of course it's an agile project so we must have clients at home. That's the workshops. That's really one of the key success of the project keep the clients at home. It's a little country so it's easy to travel. It's very very important for us because it gives also feedback from the client to the developer and so it's linked to the process I explain sooner. 300,000 tickets for the support and development. There's quite a bunch also. So the problem for us is the 300 organization behind EMU. So it's very hard to manage support. It takes us many times, much times really. So that's why we are not yet all our products are still in Plone 4. Moving to Plone 5 it's not so easy for us because there is a legacy because it's as I said a complex very complex applications and I say more than half of our resources are used for the support and for functionalities imposed by the legislations. So now for smart cities. Yes I put smart in brackets because I don't think cities are very smart. Of course we are inside the system so we do real politics. It lacks also the mention to politics here because citizen projects are a political project so it's an issue to manage also this kind of project. Clearly here the administration and the citizen have not always the same needs and most of the time it's one versus the other because the citizen one citizen-centric solution and the administration one department smart application for each department their department of course. So the problem is all of this is common sense. As I said what we do is to create implementation with that common sense so it's a solution we can settle to solve those kind of problems. But really in most administration it is yet a new strategy-centric solution even if in the political strategy it's of course citizen-centric. The problem of the politics is also as its own priority is simply to be reelected. So it's not always to have this citizen-centric solution and ensure and manage the change management of the city staff to go to the citizen-centric solution because it just don't know. So what we have to do is to act on the entire chain between the citizen and the administration. That's why the unsure of its plan could be a solution for smart cities. Yes of course so I close my laptop and go back to because we have a specific project in the plant community I know that because we implement those back offices that will be connected to smart city application but at the same time we these applications are there they are free they're open source the problem is in the plant community most companies are not really working hard with deeply I mean with public services which is very difficult. 
When I worked with some plant company now since 10 years or more we were inside the system and we fetched those guys and explained them how it works in the public services and that's really it lacks really in most little companies that's why mostly we have big companies that can ensure to the needs of public services. Little companies are not prepared to that. Public standards are complicated, legislation is complicated and use cases are specific. So it's really a barrier to impede some companies small company to enter public services. Most of the time you have like us some people inside the system that chooses technology before and then search for resources to help them to ensure to the needs. So the acts on the antenna shine means what? The link between the I don't know a smartphone application whatever and the back office of course. Public services are not investing equally on the back office and smart city apps. It's really a problem because most of the time a new business appear, they sell to the political new application which maybe the demo is really impressive so we buy the application and the application just send a mail to the administration and administration perhaps there are hundreds or thousand different servants in the administration so everybody or one guy receive the mail so what? Behind the process you have the citizen he just waiting for the administration to solve the problem in 15 seconds or no no two minutes maximum so that's not the way of course everybody is frustrated after that. So you need to connect back office to smart applications that's why we intend to do and we are doing it but of course with our strategy so in trigger internal workflow from the application, share identification system also I will provide a demo afterwards. We use existing transfer fee data source to pre-fill data so most of the time the data is already inside the administration. It's not it to fill a form with at least half of the data is no use. We already know that. Of course in Belgium we have some specificity the ID card and also there is now a new application that is working on smartphone to authentify with the federal system the user directly from the smartphone so there is some clue that we can use share authentication with between smart projects and and the back office the same system and of course allow user feedback. In most projects it's not always possible for the citizen to provide data is the service valuable is it really interesting and so it's not even in the in the administration the back offices has no possibility to have feedback from the employee. So we have to provide the best experience for both worlds citizen companies private sector and administration. So what is interesting with it's an IT department like EMEO is that we keep control on all the process. Of course mainly back office less control from the smart environment that are running on a smartphone of course. So what we are afraid is to have a layer between the citizen and the administration as I said the email sent to administration it's no way. So what we try to do is to discuss with those companies who provide those smart applications and see with the towns what we can do to have guarantees that the citizen is connected directly to the back office and when the law is changing the regulation is changing that we could modify all the application all the chain and of course what is very important also is to keep the non-digital process or not all the citizen can be digital. 
So you have to integrate it's very extreme because you have the paper to manage at the same time you have to improve the interface with Facebook React and so on and to have really a race towards technology and at the same time keep the ancient the old process in paper so that's really coping with those kind of process is more and more complicated and I said also use interface so things like Voltault and so on back office one sign front office will be really the good solution for us at the time so I got in there because I am afraid to to forget the Koreans because it is what it is more important for us its Koreans between those different elements the first example is an external application integration that we try to settle up with not really a partner let's say a provider of a smartphone application this smartphone application was done level on stores like a app store on Android and so on so we try to discourage the creation of this kind of application when it is for public purpose because what we expect is the PWA progressive of application will be more and more use in the future and for us it's very interesting because what we intend to do is to create this kind of interface for Plano so we can link easily like a URL the external application with our application so the strategy behind that is to keep the data in our application so when this smart application as needs some functionalities we just provide those functionalities in our back office with our frontends and it will be integrated in their application that's the strategy and this of course it urge them to use the same authentication system as I said before the second example is the website so the website is really of a town is not monolithic anymore it's really a productive tool and we decline this kind of product in some business cases and of course behind that ensured metadata consistency so we provide specific subsides for city archival archival collection for example or the postal cart and so on to all those processes in libraries of the of the cities and we really create specific flavor with the use the particular use case so in that case facet navigation which is really highly use tool in our application is very interesting of course second favor is the major city project management the example is the city of liège and this website allow the citizen to propose their project to the town and so behind the system there is of course a workflow for filtering the project provided by the citizens and afterwards when when the city chose the the right project there is possibility to vote for the citizen and I choose the right project he prefer and then afterwards those projects will be integrated in the strategy of the town which is very democratic process and well behind that open source and so on with the control and this this philosophy is very compatible with with that kind of view in the same way which is really interesting because we have I think 180 website or more we go sleeping 100 at least 180 cities are using our website of course it it's interesting to have the same data data between towns afterwards when you provide open data information you can gather with geolocation and so on all the same metadata for the region which thematic between different cities so this is really a good strategy but as I said we have use case specific for that 180 website we can afford this kind of of effort and so now the directory management use event and so on we create also several workflows specific roles and so on because in cities you 
have many many different suborganization they provide information for the city web contact management and so on so it is very it's really a business project now the website and so this or it's always a facet navigation to find the metadata information with the geolocation information it's really powerful for for the citizen the third example is the central certification service well they connect we show a little demo about it we use you can ID connect plug-in so the idea behind that of course is to connect all the application plan application and third-party application to the same authentication service so we are not using the plone login and redirect redirect systematically with this portal and we have different with two different portals citizen oriented and internal users why because internal users in the cities are not exactly the same login than the citizen most of them use the email my department at my town dot be so not their own email so it's not possible to have a unique key so we have provided two different portal citizen and and city servants the the second problem is that we have to connect with some other system like active directory to synchronize our portal with local system and of course it provides first to provide the organization when we look to the tool don't forget we have all on the cloud and so when somebody connect he must first indicate his organization afterwards the login because otherwise we have to scan all the active directories of the towns which could be long and we have also behind it a role management for this access the role management of the CIS is to have basic role for each town the name of the town yes for example the application smart web is our sitweb website package and then two roles perhaps three but mainly two roles the pro admin and a user role simply the other roles are really specific to the business I manage in prone so try to the demo which is oops so what we are going is here is to connect to the website of liège just put the login so as you see we have the two portals authentic effect your identification as an agent here auto-identification as a citizen so we choose agent for instance we redirect to stop okay just two minutes to the portal as you see what's it's important here is we will also connect to the federal state with the ID card or the smartphone so we also manage several layers of security behind this system of course so you connect he put the organization is connected to the website session is open is the website now we go to the one-stop shop Django application you click on connection and of course the same system but in fact you click on the portal agents but he's already connected of course you see his profile with his data and now he will disconnect and when he is disconnected from the one-stop stop he is also disconnected from the smartphone from the the website at the same time so the system manage of course global reconnection and now he is disconnected okay just some idea about just less a lot but it's not very important we we see that in those kind of applications smart application sometimes they use artificial intelligence but most of the companies are not managing intelligent artificial intelligence so there is some drawbacks about that data is not right and it's really a problem of course Facebook or certification is be refused that's why we create this kind of portal because it's a government it's not a private system for the citizen and also some it's not very important but with the some application provides best 
contributors with a democracy project so it's very very strange because it means that it's not democratic only the best proposer of project appear in the applications and so we have some problem with democracy with those smart application so that's all for me thank you very much
|
Most cities have similar activities. However each one has its own particularities depending on political choices, size or environment. How to provide them with a packaged offer while meeting their specific needs? What are the current priorities and issues? Can Plone contribute to the "Smart City"?
|
10.5446/55187 (DOI)
|
Okay, I want to talk today a bit about back-end development and how it looks today, even though everybody talks all the time about headless and front end. You still need a back end, so how are we going to do that? A small overview: we have things like plonecli, which some of you might have heard about. I'm pretty sure a lot of people are using plone.api, the Python API, which makes things simpler. We also have snippets for several editors, also for VS Code, where you can add some code with the editor. And of course you have plone.restapi, where you can expose the content to use it on the client side in the end. The back-end part there is to customize the REST API to deliver the content as the front end needs it. And yes, if you don't already, you should use plonecli. I think it's useful not only for newcomers; I like to use it, and of course I built it, but it really saves a lot of time for boring stuff so you can focus on your code. It helps you to enhance your products, your add-ons, step by step, so if you need a functionality you can add it, if plonecli or bobtemplates.plone provides it. This is just a small demo. We have a lot of templates already. You can always check the version number with -V (capital V), so that you see which version of plonecli and bobtemplates.plone you are using. And then you can create your package. Since we need support for Python 3 and Python 2, you can set which Python version you want plonecli to use; it's mainly used for creating local virtual environments for testing. You can always change that and just run plonecli build, and it will recreate the virtual environment, install the requirements, run buildout, and then your setup is on Python 2 or Python 3. You can see the build command does several steps; the last step is buildout, and you don't have to type all this manually, but you still can, of course. There's a small configuration file. This is also the first step when you want to migrate an older package to be compatible with plonecli: you just create this file. This is how plonecli finds the package root, but you also have some settings in there, and plonecli, the add-on template and the sub-templates will use those to make decisions later. One example: a content type. That was actually the first template I was working on when I changed to this more modular approach, because before it was not possible to create multiple content types; we just had one big template with one example content type. So you can create a content type, you give it a name, you have the option of Container or Item as a base class, and whether it's globally addable or not, and then you have it. If you want to create another content type, for example this time we choose Item as the base class and this time it's not globally addable, then when we continue it will ask for a parent content type, which in this case is the Task List. plonecli will use this to generate working tests, so that it makes sure the parent content type exists and then tries to add the new content type inside it, and also the wiring in the FTI settings, so that you don't have to go there yourself and say that in this container a Task is allowed to be added. You can see the structure: it creates a Python file for every content type and an XML file, depending on the question whether you want to use the XML file or not. We could clean that up, but it doesn't hurt anybody; you can delete it if you don't want to go that way, or, if you prefer the supermodel approach, you can basically write the schema in XML, or build it through the web and export the XML after you created your schema.
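For orientation, this is roughly the kind of Dexterity schema and class such a content type template scaffolds; the type name and the single field are invented for illustration, not generated output:

# Roughly what a generated content type looks like; the field is illustrative.
from plone.dexterity.content import Container
from plone.supermodel import model
from zope import schema
from zope.interface import implementer


class ITaskList(model.Schema):
    """Schema for the example Task List container."""

    summary = schema.Text(
        title=u"Summary",
        description=u"Short description of this list of tasks.",
        required=False,
    )


@implementer(ITaskList)
class TaskList(Container):
    """A folderish content type that will hold Task items."""

The FTI settings that plonecli wires up via GenericSetup then decide where instances of this type may be added.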
We also have a template for a REST API service. For those of you who don't know what a service is: these are the components which provide, for example, the breadcrumbs or the navigation, so additional data for your current context. When you're on a page or on your custom content type and you want additional data, maybe based on tagging or whatever, you make a catalog query for that; you can create a service for it and make this information available. With the expand feature of the REST API you can tell the REST API to also give you that data with the one request you did for the actual context. You just have to customize this area and return JSON; the rest is more or less boilerplate. There are two things you decide: one is the class name of the service and one is the actual endpoint name, the service name, and these are used for the generation. There is another template coming for serializers, which you use to change the way the REST API returns your values. For example, I did that once for vocabularies, because earlier versions of the REST API were just giving you the token, and you want the title and the token and not have to ask the client again for it; so you can change the way things are returned to you. There's another one which is fairly new, so there might be some bugs (I just found one), but it will help you to easily generate upgrade steps. What we're doing here is basically generating a profile for every upgrade step, similar to what plone.app.upgrade does in core Plone. We also provide the Python file, and we auto-increment the version number, so you don't forget to update the version number in the metadata.xml. If you don't need something you can always ignore it: if you don't have Python code to write for your upgrade step because the GenericSetup import is enough, then you're fine; if not, you can use the Python file and do, for example, re-indexing after you imported the profile. Basically you get a file like this for every upgrade step automatically, and it includes a folder where you can put profile files like registry.xml or catalog.xml, like I do here in the example, like I will do now. This way you can easily create upgrade steps, and there's no excuse not to do it. If you create more upgrade steps, you can see, when we're doing this, that it will generate similar structures and will go and update the version number; whatever version number you have in your existing package, it will just read it from the metadata and then increment it, and that's it. We have a Git integration, so every command you run will automatically be committed to your repository, and when you make manual changes, like I did with the catalog.xml, it will tell you that maybe you want to commit before you use templates, because the templates from plonecli, or from mr.bob, don't ask before they overwrite anything. If you have conflicting configurations you might lose your changes, so always do the Git commit before and then you're fine; it helps you to revert the whole process. You can just go one commit back and nothing happened.
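Coming back to the REST API service template for a second: the class it produces is essentially a plone.restapi Service whose reply() returns JSON-serializable data. The endpoint name and the catalog query below are made up for illustration, not generated code:

# Sketch of a custom endpoint, e.g. GET <context>/@related-tasks.
from plone import api
from plone.restapi.services import Service


class RelatedTasks(Service):
    """Return open Task items that share a tag with the current context."""

    def reply(self):
        brains = api.content.find(
            portal_type="Task",
            Subject=self.context.Subject(),
            review_state="open",
        )
        return {
            "@id": "{}/@related-tasks".format(self.context.absolute_url()),
            "items": [
                {"title": brain.Title, "url": brain.getURL()}
                for brain in brains
            ],
        }

The ZCML registration around such a class is the boilerplate part that the template is meant to take care of for you.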
One other reason to use plonecli is that all the packages created with it have the same structure, which makes it easy to find things. The idea is that everything has its own folder: if you create behaviors, they are in a behaviors folder; if you create content types, they are in the content folder, indexers in the indexers folder, the upgrades in upgrades, and so forth. There are some changes compared to the traditional layout, for example I chose to put viewlets and views in separate folders on the top level; this is a little bit different from the traditional way of having a browser folder. Otherwise, where would you put viewlets and views? You could put them in a sub-folder of the browser folder, but where's the meaning in that? If you think about newcomers, they don't know the browser folder; they have no idea what it is for. There's no real reason to name it that way anymore; there was at some point in history, but it's not really that important, and that's why the decision was made. It's also more robust: you have one place, and you usually generate one new file, for a new view you generate a new file. You don't update Python files with the generator, because when you start doing that, it can get messy, and it shouldn't break any of your existing code. You get test coverage right from the start. Of course, not all the templates have the same amount of tests; sometimes it's just not easy to generate them as a generic test, and sometimes I just wait for you to add some. There's a bit of room for improvement, but the whole structure is there, and you can bring your opinions in and say we should do it this way. Recently we decided to rewrite some tests so that we use plone.api more, instead of using getMultiAdapter to get a view and test it, things like that. This makes it easier for newcomers to understand what happens, because plone.api is more meaningful in how you use it. By default, the packages you generate come with a tox setup. I said earlier that we have the option to change the Python version; it's easy, it just takes a bit of time, and if you created a database and played with it, you have to delete the database, because it will not run with Python 2 and 3 in the same way. What tox does is basically allow you to have all the versions you care about, and all combinations of Python and Plone, tested without changing anything. You can pick a single environment and say, okay, I just want to test Python 3.7 with Plone 5.2 now. If you don't care about your add-on on Plone 4, then just go into the tox.ini and remove the entry for it. You can run all these things in parallel, too: if you run tox in parallel mode, it will use all your CPUs and burn them down; it goes as fast as your CPU power allows, and at the end you get a summary. Here's an example of the upper part of the tox.ini; this is what you want to customize. It's basically saying, let's take the second line for example, this is only for Python 2.7, so two environments will be generated, one with Plone 4.3 and one with Plone 5.1. From the third line we get a version with Python 2.7 and Plone 5.2 and a version with Python 3.7 and Plone 5.2. If you list the environments, you see these different versions, and you can rewrite that so you have fewer; you don't have to test Plone 4 if your add-on will never run on Plone 4, and that should be okay. There's another command: if you use tox -a, it will show you a few more environments. These are environments that are defined more or less as helpers; you can use them to update translations and things like that, or to run isort over the code.
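Back to those tests for a second: the plone.api style mentioned above looks roughly like the sketch below. The testing layer import, the type name and the assertion are invented for illustration:

# Illustrative only: the layer, type name and assertion are placeholders.
import unittest

from plone import api
from plone.app.testing import setRoles, TEST_USER_ID

from example.tasklist.testing import EXAMPLE_TASKLIST_INTEGRATION_TESTING  # hypothetical


class TestTaskListView(unittest.TestCase):

    layer = EXAMPLE_TASKLIST_INTEGRATION_TESTING

    def setUp(self):
        self.portal = self.layer["portal"]
        self.request = self.layer["request"]
        setRoles(self.portal, TEST_USER_ID, ["Manager"])
        self.tasklist = api.content.create(
            container=self.portal, type="Task List", id="tasks"
        )

    def test_view_renders(self):
        # plone.api instead of zope.component.getMultiAdapter((context, request), name=...)
        view = api.content.get_view(
            name="view", context=self.tasklist, request=self.request
        )
        self.assertTrue(view())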
The isort helper, for example, I personally don't need, because I run isort in VS Code, but some people work that way, so it's okay. You can configure plonecli, or mr.bob, to your own taste. If you don't want the git questions, then you just say, okay, always just do it (it's a good idea to always commit), and maybe set the Plone version as a default. You have basically two areas: the first is the variables. Things you put there you will never see the question for; if you put an answer there, it will always be used and not asked anymore. If you put it in defaults, you see the question with your default, and if you press enter it will just be used, but you can change it. The variable names you can find in bobtemplates.plone, where the templates are defined; in every template folder there is a .mrbob.ini, which is just a normal ini-style file, and the variables are defined there. You copy that name, and this way you can configure it. We have a little helper in plonecli for this, but we need to update it to allow more settings; it will write this configuration, at least for the basic stuff that works already, the author name and things like that. We could extend it to make more decisions, so that you don't have to look up the variables and so on; that would be a nice thing to sprint on. The whole thing is extendable: you can write your own bobtemplates packages and register them for plonecli, and this way plonecli will just list them and can use them. It's not hard to do that. This is useful if you have ideas for packages which are maybe customer-related, or your company's way of doing things is different and you want some of the templates in your own flavor, or if you build things which are not necessary for the main development but which you need, for example, for migrations. We can create multiple packages and make them all work with plonecli. Here's a little example of how that works: it's basically using Python entry points. In your setup.py you have something like this, and you point to your package, in this case bobtemplates.plone; there is a Python module, bobregistry, and you just have the key there. This is just a little helper object, and an entry in that file looks like this. This is the main plone_addon template in this case, and you have a plonecli alias: we don't want to type the full template name all the time, because we're already using plonecli, so it's clear we want something related to Plone, and we can skip that. In the context of mr.bob you could still use the global name. You can also say that a template depends on plone_addon, because all the sub-templates expect a certain structure and some files to be there; usually you create your add-on with the add-on template and after that you can use all the sub-commands. It's not that hard to upgrade your existing packages: if you at some point created your content types differently, I just did that with EasyNewsletter for example, there is a chapter about upgrading existing code in the bobtemplates.plone documentation, and if you have a question there, just ping me or one of the others and we will help. I mentioned a small plugin for VS Code. You can just install it like every other VS Code extension, and then you just type plone and after that something like textline field, and then you can jump around, fill it out and make the settings, for example required or not. We have basically all common fields, for Python and also for the supermodel, so you can use this in the XML or in the Python file. There are also some other snippets for the registry XML.
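Going back to registering your own templates for a moment, a sketch of what such a setup.py could look like is shown below. The package name, template name and, in particular, the entry point group are assumptions, so check the bobtemplates.plone and plonecli sources for the exact group to use:

# setup.py of a hypothetical company-specific templates package.
# The entry point group name is an assumption; verify it against
# bobtemplates.plone / plonecli before relying on it.
from setuptools import setup, find_packages

setup(
    name="bobtemplates.mycompany",
    version="1.0.dev0",
    packages=find_packages(),
    install_requires=["bobtemplates.plone"],
    entry_points={
        "bobtemplates.plone.templates": [
            "mycompany_addon = bobtemplates.mycompany.bobregistry:mycompany_addon",
        ],
    },
)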
For example, some of those big chunks of registry entries are also good things to put into snippets. There is still one bug there, because it should replace the name when you start typing, and it doesn't do that when the name appears multiple times; I already opened an issue for it. I mentioned plone.api: it has really good documentation, and it really makes sense. If you have people to onboard and start with, we really should use it as much as we can, just because you can read it and you understand what's happening; this helps a lot. The other reason is that when there is a change under the hood in the future, it's not your problem: you just use plone.api, and plone.api will use the new way of doing things. Your code is probably also more robust than if you use some internal methods from CMFPlone or from wherever. Some ideas for the future: there are things like views, for example, where you can register a view to only appear on certain contexts or under certain interfaces. This is already possible by default, you just have to go into the ZCML file and change the interface, but it would be nice to have some common options, like listing all the standard content types, so you could say, okay, I want this on pages by default, and your own content types, which you created earlier, could be listed there as well. For the REST API, I'm already working on the serializer template; another thing would be the deserializer, which is the other way around, for when you send data to the REST API in a certain way and you want the REST API to understand your data better. Maybe you have some special data from an application you cannot really change, or it's just not that common, so you want to do some conversions there. Then yes, a graphical UI, but just a minimal one, for selecting content types or interfaces, also for the parent content type question: we could just offer a selection of all the content types from standard Plone and from our package. That would already be helpful; you could just select it instead of remembering and retyping it. To do that, we need to change the user interface, which means we have to change mr.bob itself. I have ideas for that; there are some libraries which work cross-platform and allow you, right in that moment, to show a drop-down or things like that, but we have to go upstream first before we can use this in bobtemplates.plone and plonecli. For VS Code, I wish we had better plone.api auto-completion at least. It's not working that great when you import plone.api the way people normally do, from plone import api, and then use it like api.content.find; VS Code is not that smart with the namespaces there, so there are some problems. Also, VS Code by default doesn't know the buildout structure and things like that. There's a bit of a recipe, though: you can create some additional configuration for VS Code, VS Code will read that, and then it should at least pick up all your Python paths without you doing anything else. Another way to do this is to use the zopepy Python interpreter you usually have in the buildout; then VS Code also has all the paths. But even with that, the auto-completion is not perfect. I would like to have more contributions. There is nothing set in stone, as Philip said before. Don't be shy; there's no grand jury who makes decisions, and just because it's there and it looks like this, it doesn't have to stay like this. If you have ideas or complaints, open an issue, let's discuss it.
If you find a way to test things better, just add a test, or writing a bit of documentation would be really helpful. If you have other ideas and you're not sure whether they make sense in bobtemplates.plone itself, then create other bobtemplates packages; your own would be helpful for you internally, but it also makes the whole ecosystem a bit more robust. There were already some contributions; I guess there are three people doing something on it from time to time, and it's nice to see it in use. I will be at the sprints at least Saturday and maybe Sunday morning. If you want to dive into this, it's a good task for a short sprint, because there's not that much to learn and you can achieve some things; if you want to work on your own custom template and do some stuff, I can help you if you have questions. Any questions so far? Is there a bobtemplates package that can create a bobtemplates package? I don't know if that makes sense, but it should be possible, because what you usually do is create one Python file, one folder and a bit of structure; we could do that. You have to write a bit of Python for sure, because of what we are doing with configuration files: we parse them, the XML and ZCML files, we just parse them, and to keep the formatting we don't write them back out as serialized XML, because the formatting would be different then. The XML files usually have a line, like you saw earlier, saying your custom stuff comes here; that's a marker and it should stay there. We parse the XML to find out whether we need to add something, and if we have to, we insert it after the marker. That's the way it works, and these are Python functions that get called from mr.bob. Usually, depending on what template you build, you have to adjust the variables you create or define in the pre_render step: there's a method pre_render that gets called, and some of it is always the same, but things like, I don't know, the upgrade step name or upgrade step title question, if you want to use them somehow as a variable, then you do your initialization there, and later in the templates, or in the file names or folder names, you can use that. That's the work you always have to do, but the structure might be useful. Any other question? Can I customize the command line interface or the sub-commands? In what way, like registering additional commands, or overwriting the behavior of existing commands? I mean, plonecli itself is just a Python package, so if you have ideas which are generally useful, we can add them. If you create your own bobtemplates and register them with the entry points, they will pop up automatically in plonecli, so the add command has an auto-completion feature which shows you the list of all templates and you can run them. The sub-commands for the templates are already there and customizable; the rest is a Python package, so everything that's possible there is possible. Any other one? Then I have one question: who of you is a beginner, like in the beginning phase? And who is onboarding people from time to time? So, are you using plonecli for that already? Most of you not, so give it a try. I think it's easier to understand; you don't have to know all the things, you just create what you want to do. Somebody told you that you have to create a view for that, and good luck trying to find the documentation for it; that's sometimes really painful, even when you know what to search for, and it's not easy.
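To make those hooks a bit more concrete, below is a rough sketch of what a pre_render/post_render pair in a custom template module could look like. The hook names follow the pattern described here, but the question name and the derived variable are invented, so treat it as an assumption rather than generated code:

# Hook module for a hypothetical custom template.
# Assumption: mr.bob calls these with its configurator object, whose
# .variables dict holds the answers to the template's questions.

def pre_render(configurator):
    """Derive extra variables before any file is rendered."""
    title = configurator.variables.get("upgrade_step_title", "")  # hypothetical question
    # Turn "Add new catalog index" into "add_new_catalog_index" for file names.
    configurator.variables["upgrade_step_id"] = (
        title.strip().lower().replace(" ", "_")
    )


def post_render(configurator):
    """Runs after rendering; a good place for hints or follow-up work."""
    print("Template rendered, don't forget to review and commit the changes.")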
With a tool like this, you can just start, see that it works, and then adjust it. It's way better for beginners. When I started way back then, we had something like this with ZopeSkel and its Archetypes support; you could do similar things, and it was really helpful. So give it a try, and please also give feedback on how it went and what we can do to improve it. Okay, thank you. Thank you.
|
A brief introduction to state-of-the-art Plone back-end development, using modern tools such as "plonecli", "bobtemplates.plone" and "Visual Studio Code - Plone Snippets". Building content types, vocabularies, behaviors and REST API services with plonecli, and using VS Code snippets to fill in field definitions and configuration.
|
10.5446/55190 (DOI)
|
Right, so this is a talk about building robust APIs with Python. We're going to look at what drives me personally to spend so much time and focus on designing and building robust APIs, at what robust APIs are, and at how you can build them as well. It all started back when I was in college and Niteo was still a very young consulting agency. I read about a famous Slovenian entrepreneur, a businessman, and how he spends half a year in Slovenia and the remaining half a year in Brazil, kitesurfing on a beach. One day I thought to myself: one day. And really, I couldn't get that thought out of my head. Then I came to the realization that there isn't really anything preventing me from doing it. At Niteo we had international clients, many of you in this room, and you didn't care where I was located as long as the work was done. So I spent my first winter in Spain, sorry, sorry, in Catalonia. It was a total blast, spending my mornings working at a sailing club in Barcelona and afternoons on the water, and when it was time to go back home I was convinced that I really had to do this again. So I did, when it got properly depressing the next winter in Slovenia, as it does. I don't really mind the cold and the snow, but I hate it when it's somewhere in between, just foggy and rainy, with slush instead of snow; that is what I don't take well. So I packed up my car, headed south to Valencia, had a blast, and I've been escaping winter ever since. You know what sucks, though? I need about a week to pack everything, all the baby stuff and our whole family, into the car. It takes me a week, two ferries and 4,000 kilometers to get to the Canary Islands, where we stayed this year. And finally I come to the beach, the waves are pumping, the sun's out, the wind's howling, it's perfect. And then this happens: I'm getting called in because we have a production issue. One of our customers is calling in that our API suddenly started returning a different result, and we didn't notice, and we broke their integration code, or we broke the front end. Or maybe our app is crashing because the view code is trying to process invalid data, and why do we even have invalid data in our view code? So that is my why: I really, really don't like people breaking my schedule. Let's continue with the plone.api story. My first proper dive into designing APIs was seven years ago at a Plone conference in Munich. We started the plone.api project at the after-conference sprints, and I got my first real experience of everything that is hard about APIs. The project started because back then Plone was already 10 years old and it required memorizing a bunch of boilerplate code for very mundane tasks, such as just getting the freaking site URL. We had this idea that you should really just be able to write it from memory: give me the URL. And the same thing for moving an object, which should be just content.move and not some strange two-line magical incantation; the review state, exactly the same. So what we did, essentially, was survey a bunch of Plone integration code that we all had on our laptops, and we identified the 20% of tasks that people do 80% of the time. We put those tasks down on a piece of paper and then went through them again and again and again to figure out the best way of naming them. And we wanted an API that is easy to remember and, especially, easy to guess.
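These are the kinds of calls that came out of that naming exercise; the functions are real plone.api functions as they ship today, run here against an imaginary site (for example from a bin/instance debug session):

# Imaginary folder ids; assumes a running Plone site, e.g. in `bin/instance debug`.
from plone import api

portal = api.portal.get()
print(portal.absolute_url())            # "just give me the site URL"

news = portal["news"]
archive = portal["archive"]
api.content.move(source=news, target=archive)      # instead of a two-line incantation

print(api.content.get_state(obj=archive["news"]))  # the review state, equally guessable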
And then we wrote documentation for how people were going to use these new API methods we had come up with, and by writing the docs we really polished the naming of the API methods. This might sound trivial, but naming is one of the hardest things to do in programming, especially when you're designing APIs, because once you have an API in production and users have already written code to integrate with it, good luck changing your method names then — you're just going to cause a ton of pain and suffering. By the way, snippets of the documentation were, and still are, tests. These bits of code are run on every commit, which makes sure that when a pull request comes in, it does not change the signature of the API — because if it did, those tests would fail. So yeah, plone.api: super successful, over 60 contributors over the years, people have given talks about it, it now ships with Plone core, and I haven't touched it in years — great success. The big lessons learned with plone.api, at least for me, were to always write API specs before you write code. We really sat down for a number of days, and again after a couple of months, just to write down how the API was going to look and to write the documentation for how people would use it, to give us a clear idea of what we wanted before we wrote the first line of Python. But it turns out that if you want to do microservices, web APIs, RESTful web services these days, you don't have to look far for how to do documentation. You can use Swagger, which is basically the de facto tool for describing REST APIs: you write a YAML specification and that generates nice HTML documentation with an integrated test client. All of this is automatically generated: for any endpoint you can see where it is in the YAML file, and you can click "try it out" to exercise your API through the browser, so you don't have to write any client code to see that your API is working. You can copy the command as curl into your terminal if you want, and you see the response that you got back from the API. This is super nice for your users, because documentation is fine, but if you don't have really good examples it's going to be really hard to integrate with your API; if you give this to your integrators they can just copy the curl command into their terminal and infer from that how they want to build their requests. And the whole ecosystem around Swagger is fantastic. They built a lot of tools, such as client generation tools: there's a tool that you can give any Swagger YAML specification and it will generate a Python class with methods for connecting to each endpoint of the API, so you don't have to manually type requests.get and requests.post — you just call the function with the proper parameters. They have the same for React and general JavaScript. So there's a lot of boilerplate code that you don't need to write anymore if you use Swagger. But then again, when I say Swagger I actually mean OpenAPI. Swagger was first released in 2011, then there was Swagger 2, and a couple of years back they started working on Swagger 3.
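Coming back for a moment to the docs-are-tests idea from the plone.api story above, the setup looks roughly like this (a hedged sketch, not the actual plone.api source; the file name and example are made up):

```python
# docs/usage.rst (excerpt) -- the narrative example doubles as a test:
#
#     >>> from plone import api
#     >>> portal = api.portal.get()
#     >>> portal.absolute_url()
#     'http://nohost/plone'
#
# A tiny test module collects such files as doctests on every commit,
# so a pull request that changes a documented signature fails the build:
import doctest
import unittest


def test_suite():
    return unittest.TestSuite([
        doctest.DocFileSuite("docs/usage.rst", optionflags=doctest.ELLIPSIS),
    ])
```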
At that point Swagger — which is a company — didn't want a single company to be the one behind the idea, so they created a foundation, the OpenAPI Initiative, transferred all the intellectual property to it, and also changed the name of the specification document from Swagger to OpenAPI. So whenever I say OpenAPI v3, or OpenAPI, and you're used to Swagger, you can just mentally replace that word with Swagger in your heads. Right. Now you know why to do robust APIs, and you also know what makes an API robust: it is always up to date. It has automatically up-to-date documentation that can never be wrong, and a signature that does not change without intent — meaning that if you push some code to master, to production, the responses from your API or the input validation of the API do not change unless you really wanted them to, so that, for example, if users have integrated with your API with their custom code, you don't change the API under their feet. Right. Last summer we started working on a new project called WooCart. It's a complete autopilot hosting for web shops built on top of WordPress and WooCommerce, and I was in charge of building the API that glues together the client-facing control panel, built with React, with the deployment machinery in Kubernetes — and Python was a natural technology to use as that glue. The idea is: when a client wants to deploy a new store, they log into this React web app, they click a button, they wait a couple of minutes; Python sends the correct API requests to Kubernetes, waits for Kubernetes to do its magic, then reports back to React that the app is successfully deployed, and React displays the new store. So I went to the OpenAPI website and wanted to see what Python integration libraries exist out there for OpenAPI, and I was quite bummed out, because this looked very much like choosing a JavaScript package: a bunch of packages with similar names and a lot of overlapping features, and at least in the beginning it looked like it was going to be really hard to decide which one of these I wanted to use. But I was quite sure that I wanted to use Pyramid, because I wanted to build a robust API so I could go surfing, and Pyramid is really a mature — if not the most mature — web framework out there. It allows you to start small, in a single file if you want, and it scales with you as you go. Moreover, at that point it became known that the people who rewrote and relaunched pypi.org used Pyramid to do it. They are some of the most knowledgeable people in the Python community, plus pypi.org gets a lot of traffic, so if Pyramid is good for them then it's probably good for me. That still left me with six packages. Growing up I was a huge fan of MythBusters. Anyone here a fan of MythBusters? Yay! I really loved watching Adam and Jamie bust one myth after the other, and to this day I consider myself a tinkerer and maker, always hunting for some free time so I can do some DIY projects. You can imagine my joy when I realized that Adam's first book came out recently; it's called Every Tool's a Hammer. It's kind of an autobiography mixed with a collection of tips on how to be a good maker, how to be creative.
Besides the main title of the book, Every Tool's a Hammer — which is a good reminder that if you're really good with one tool, you see every problem in terms of how you're going to solve it with that tool and don't always consider other options, which is what happened to me with traversal: I was so into traversal that everything I looked at seemed like a good traversal problem, but then I tried URL dispatch for about two years and, yeah, maybe a lot of problems are more suitable for URL dispatch — anyhow, one of the main chapters in the book was Use More Cooling Liquid, which is kind of strange. Adam was also on a podcast, and the interviewer asked him: what would you tell your 25-year-old self, what wisdom would you give your younger self? And Adam said: use more cooling liquid. Which is a strange piece of advice to give a 25-year-old. But the idea here is that it's not about the act of applying cooling liquid when you're drilling a hole or cutting some metal; the idea is to remind yourself to think before you cut, think before you drill. To go slow. It's about preparation — really putting in the mental work before starting the physical work. And I realized this really applies to how you approach APIs, because there are actually two distinct approaches to how you can do OpenAPI with web frameworks: the generation approach and the validation approach. The generation approach looks like this: you write a bunch of Python code, models and views, and then you run a command — or maybe it happens on the fly — and you generate the YAML file out of your Python code. The first time you do this it's absolutely amazing. You're like: this is going to change the world, it's super cool, you just write some Python code, boom, YAML specification, boom, documentation, everything done, I can go home. But then there's the validation approach, also called the API-first approach. You sit down, you think about how your users are going to use your API and how your API is going to look, and you write that down in a YAML file. Only when that is done — the documentation is complete, you go through it, show it to some users, they say yeah, this looks great — only then do you start writing Python code. And finally you validate that your Python code does whatever the YAML file says it should do. This validation approach incentivizes you to write the specification so that all of your developers can understand what your API does before you start writing code, and it provides a clear separation between intent and implementation. If you start specifying your API by writing code, you're going to be limited to the mindset of "I'm going to write this and this and that, and then that's going to generate my API." You're going to be much better at designing APIs if you forget about the code at first. Just forget about the code — you're able to do anything, so forget about the code and think about how the users will feel and how they will use your API.
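In practice the API-first step simply means the YAML contract exists before any view code does. A minimal sketch of such a spec for a single hello endpoint (routes and field names are illustrative only); it is kept inside a Python module here purely so all the examples stay in one language:

```python
# contract.py -- written and reviewed before the first line of view code.
HELLO_SPEC = """\
openapi: "3.0.0"
info:
  title: Hello API
  version: "1.0"
paths:
  /hello:
    get:
      parameters:
        - name: name
          in: query
          required: true
          schema:
            type: string
            minLength: 3
      responses:
        "200":
          description: Greet the caller by name.
"""

if __name__ == "__main__":
    # Persist the contract so documentation, validation and client
    # generation all share this single source of truth.
    with open("openapi.yaml", "w") as spec_file:
        spec_file.write(HELLO_SPEC)
```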
Then there's the fact that both the generation and the validation approaches are always going to be imperfect, because we live in an imperfect world. Here's an example of the same bug in a generation library on the left and a validation library on the right. The bug is that the OpenAPI specification supports polymorphism — it doesn't really matter what that means here — but neither of these libraries supported polymorphism, while at the same time the client-generation tool for React did support it. And this is exactly what happened to us. If we had been using a generation framework, where you generate the YAML file from Python, and you cannot generate a YAML file that uses polymorphism, you're kind of stuck: you have to fork the generation library, add support for polymorphism, and only then can you generate the YAML file that the React generation tool can use. But in our case we used validation, so we just put the polymorphism into the YAML file and that was it. Yeah, we didn't get automatic validation for that part, but on the other hand everything else using the API could use polymorphism, and in that case we just wrote the validation manually in our view code. So the validation approach gives you much better escape hatches when you hit edge cases. And finally, again in WooCart, we had to rewrite I think two of our endpoints in Go because we needed better performance and concurrency out of them. If we generated the YAML file from Python code, we would have to keep that Python code — those Python views — around just to produce the correct YAML file for the Go code to use. It's exactly the use-more-cooling-liquid approach: don't just start writing code, bam bam bam bam — really think about what you're doing. So, going back to the list of Python integrations for Swagger and removing those that are not written in the API-first approach, I got down to five. Three of those are done on top of Flask and two on top of Pyramid, and since I prefer Pyramid, I got down to these two. One has documentation in Japanese, which I can't use, and the other sadly only supports Swagger — the old Swagger 2, not OpenAPI 3. So I was really left with one solution. So here it is: I had Domen available to me at that time, so we rolled up our sleeves and created a new package that integrates OpenAPI version 3 with Pyramid. It gives you documentation that is never out of date, with try-it-out examples. There's another benefit: when Pyramid starts up, we look at your specification and find things that don't make sense — maybe you're using references and a referenced item does not exist — and we tell you. All the payload coming into your views is validated, which really decreased the code in our views, because we don't have to do any validation there, and less code means fewer tests means faster everything. The responses going out of our view code are also validated against the spec, so you're not sending out some data you don't know you're sending, which someone then starts using, and when you notice and fix it they go: hey, you destroyed the API for us, give it back. Anything that is not in the spec cannot come out of your API. And it gives you — for someone like me who mostly does code reviews — a single source of truth, which is fantastic. When I see a pull request with a bunch of code changes and the YAML file has no changes in that pull request, I know the API is not going to change and I can ease off on the review.
On the other hand, whenever I see changes in the YAML file, I know that our customers will be affected, our front-end team will be affected, and I can really think about whether those changes are necessary and what their impact is going to be. The package also comes with a couple of examples. One is the single-file example: an entire OpenAPI application in a single file. This is how it looks — a single endpoint called hello that you can again exercise through the generated try-it-out documentation. It comes with tests that show you how validation works. In this case, if we don't provide a name parameter, we get a nice "missing required parameter name" — and again, there's no code you need to write in your view for that; it all comes from the specification in the YAML file. If the name is too short, same thing. And there are also tests making sure that if you send out responses that are not documented in the YAML file, the library won't allow you to do that.
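For reference, a condensed sketch of what such a single-file app looks like; the directive names (pyramid_openapi3_spec, pyramid_openapi3_add_explorer, the openapi=True view flag and request.openapi_validated) are how I remember the pyramid_openapi3 README, so double-check them against the current version:

```python
"""A single-file Pyramid app whose requests and responses are validated
against an OpenAPI 3 spec by pyramid_openapi3 (sketch, see lead-in)."""
import os
import tempfile
from wsgiref.simple_server import make_server

from pyramid.config import Configurator

SPEC = """\
openapi: "3.0.0"
info:
  title: Hello API
  version: "1.0"
paths:
  /hello:
    get:
      parameters:
        - name: name
          in: query
          required: true
          schema:
            type: string
            minLength: 3
      responses:
        "200":
          description: Greet the caller by name.
"""


def hello(request):
    # No validation code in the view: pyramid_openapi3 has already
    # checked the query parameters against the spec at this point.
    name = request.openapi_validated.parameters["query"]["name"]
    return {"hello": name}


if __name__ == "__main__":
    spec_path = os.path.join(tempfile.mkdtemp(), "openapi.yaml")
    with open(spec_path, "w") as f:
        f.write(SPEC)
    with Configurator() as config:
        config.include("pyramid_openapi3")
        config.pyramid_openapi3_spec(spec_path, route="/api/v1/openapi.yaml")
        config.pyramid_openapi3_add_explorer(route="/api/v1/")
        config.add_route("hello", "/hello")
        config.add_view(hello, route_name="hello", renderer="json", openapi=True)
        app = config.make_wsgi_app()
    make_server("0.0.0.0", 6543, app).serve_forever()
```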
There's another example that implements a simple todo app, so you can see how POST requests work. And then finally, there's a real-world example. If you go to realworld.io, you will see a collection of maybe 15 or 20 backends and 15 or 20 frontends that implement a clone of medium.com. You can choose, for example, Django and Angular, or Flask and React, or something in Haskell and something in PureScript, and you can mix and match. This repository is fantastic when you're evaluating technology for your next project, because you can see how a real-world app in Flask or in Django would look, or, if you're evaluating frontends, compare Angular and React side by side with real code. In the end you get an app that looks exactly like Medium. You can try to sign up, you get some validation errors, and they provide a whole test suite — basically exported Postman tests — so if you're doing a backend you just have to make sure all tests pass, and then you can hook up any of the frontends and it works. So obviously there's a Pyramid OpenAPI backend now for realworld.io. It comes with Pyramid with the OpenAPI integration, Postgres for storage, everything ready to be deployed to Heroku in a minute, 100% test coverage, type hints, and a bunch of linters to keep the code clean. So even if you don't plan to use Pyramid or OpenAPI, it's still a really good repository to look at to see how to write tests or how to use the typing module in Python. I've been using this repository as a scaffold for new projects for some time, and I know Matthew Wilkes also stole a couple of ideas from there. So I really encourage you: next time you have to do a microservice with a REST API, just fork this, change the YAML file, then start playing with the models and views, and you can be done in a couple of hours. And remember, use more cooling liquid. Thank you. Thank you, Nejc. Any questions? No? Yes, there you are. Okay. This may sound strange to you now, I'm sorry about that, but of course I have a question about documentation. How do you do reference documentation? Because for me, being more involved in documenting APIs, description fields and other such things are only part of the documentation. They are sometimes enough for developers, but not always even for them. So how do you do reference documentation, if you do that at all? Yeah, we do it with Swagger. Swagger is not ideal for that, but you can put a lot of information into the YAML file, and that gets generated into the HTML. At least for us it's convenient because it's all in the same place, and even if someone doesn't read the documentation but opens up the YAML file, they will see that all the endpoints have documentation in there — in every endpoint in the YAML specification you can have description fields and whatnot. I want to have a single source of truth, because otherwise things get outdated and then you have a problem. I have one question about developing things. If you write the whole API contract-first and then start to implement it — you said the whole system gives you validation. So if your API document says there is an endpoint for "get your pets" or something like that, and you don't have it, does the application actually start? Or is there a check that says: okay, all the endpoints are specified, but this one is not implemented? So, at the moment there is no check that tells you whether you have implemented a particular endpoint or not. There is a discussion about whether we should add checks like that, because right now you can basically just serve the YAML file, the documentation will be generated and you can try to use it, but the responses will basically be 404s. Some people think it would make sense to not even be able to start Pyramid if you haven't implemented everything. Maybe in the near future it's going to be a flag where you say "please be strict" or "please don't be strict".
|
This talk will showcase how to use pyramid_openapi3 for building robust RESTful APIs. With only a few lines of code you get automatic validation of requests and responses against an OpenAPI v3 schema. You also get automatically generated “try-it-out” style documentation for your API. It is a nice walkthrough of pyramid_openapi3, with a defense of design decisions and a few tales from the battleground.
|
10.5446/55198 (DOI)
|
So recently we did a lot of migrations, and now with Python 3 we have an additional good argument to sell Plone 5 to Plone 4 users. So we're doing migrations from Plone 4 to Plone 5 at the moment, a lot of them. A lot of these sites still have LinguaPlone and need to be migrated to plone.app.multilingual, they still have Archetypes — lots of that — and they also need to be migrated from Python 2 to Python 3. One of the things I learned in the past years is that every migration is different — only not so much, because they have the same kinds of problems in maybe not identical but very similar ways. So I found that I can reuse not only most of the lessons I learned but also most of the code that I use for these migrations, and since I got a lot of requests I made a little package called collective.migrationhelpers that contains almost all the code I'm showing today — I'm going to show code from other packages as well. So if you're interested in reusing any of the methods in there, just go there. The most important lesson I learned from doing migrations is: automate everything. Write upgrade steps, put everything in code, and put it in code so it can run multiple times, can be run easily and can be run in different contexts. Let's look at an example of what I mean by that. It's a very simple, small method that disables Solr, and it does a couple of interesting things. One is the `context=None` that you might recognize: in an upgrade step the first argument is usually the portal_setup tool, so you can run this as an upgrade step if you register it, but I'm not using the context at all. That means I can also run it from a pdb, from a debug console (instance debug), I can even run it from an instance run script, or from a browser view, whatever — so it's very flexible. That is number one. The second thing is that I keep imports inside the upgrade step, or the upgrade function in this case. In case collective.solr doesn't exist, I don't have an import at the top, and I also make the import conditional: I check whether I can actually import Solr, and if not, I don't do anything. So it doesn't hurt if I run this code in a site that has never heard of Solr. And also, if it has been run before — say Solr was disabled ten minutes ago — it doesn't hurt at all. The last thing is that I log what I'm doing: "Solr disabled". If I'm not disabling Solr, I don't log anything. These are easy things, but if you follow these simple patterns you will be able to combine your upgrade steps into something that is easily reusable. Another example is this one: it can again run multiple times and in different contexts, it disables a couple of LDAP authentication plugins, tries to delete them, logs it if they're there, and doesn't log anything if they aren't. As you can see, a lot of migrations are similar but different — probably none of your projects has a plugin called ad_plugin_bg48; just one of mine did. This is why the code in the repository is there for you to copy and paste into your own modules; you can import certain things from it, but that's not the main intention. Also, these two are actually the first two things you should always run. Maybe not exactly these — maybe you don't have Solr but Elasticsearch, and you don't have LDAP but something else, I don't know, plone.app.ldap, whatever.
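A minimal sketch of the disable-Solr pattern described above (the registry record name 'collective.solr.active' is how I recall collective.solr storing its flag — treat it as an assumption and adapt it to whatever add-on you need to silence):

```python
import logging

logger = logging.getLogger(__name__)


def disable_solr(context=None):
    """Disable collective.solr if it is installed.

    Safe to run as an upgrade step, from a debug prompt, from an
    instance-run script or from a browser view: the context argument
    is optional and nothing happens when the add-on is not there.
    """
    try:
        import collective.solr  # noqa: F401  conditional import, no hard dependency
    except ImportError:
        return
    from plone import api
    from plone.api.exc import InvalidParameterError
    try:
        # Assumption: collective.solr keeps an "active" flag in the registry.
        api.portal.set_registry_record("collective.solr.active", False)
        logger.info("Solr disabled")
    except InvalidParameterError:
        # Registry record not there (e.g. already uninstalled): nothing to do.
        pass
```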
But these are two things you don't want while doing a migration: calling some external search index — the worst would be writing into the production search index while you're doing the migration on a different machine, because the Plone instance keeps the URL or IP of the Solr instance, for example; that would be terrible — and you also don't want LDAP to be asked every time you do something on your development machine. But before you do all that, you try to get some information. Get the bird's-eye view, as I call it: you want to learn as much as you can about your site — what content types are there, what add-ons are there, how old your items are, how many items there are, how heavy (I mean in megabytes) they are, about local roles, placeful workflows, all kinds of things. Here's a small example from the statistics module in the package I made public. It's a helper view — a browser view in this case — that lists a lot of useful information you usually want to know (you don't have to read all of this). For example, how many items of which type there are — so there are apparently 884 help-center leaf pages in this portal — and where that content is — there's obviously some weird folder, services, that contains a lot of stuff. Also how old your content is — something was very weird in 2019 because they had 1,900 events, so a pretty busy year, probably — and where the biggest objects live; sometimes people upload whole ISO images into Plone instances. Here we have a huge zip file of 600 megabytes at the top, and it sorts them by size. Just a small example; you can adapt the code to your needs. Another thing I like to do is find out where the content is that needs to be removed at some point. When you find out what kind of add-ons you have — you see here we have PloneFormGen, PloneHelpCenter, Collage and CalendarX folders; in that project these were used quite a lot — it tells me where that content is, how many items are in there (most of them are container-ish), when it was created, from which years the items are. With this information you can send emails to the people who are in charge of the site and ask them: this is what you have, this is where it is, what do you want to do with it? More often than not the answer is going to be: oh, I had no idea this still existed, please remove it. So you don't have to deal with it — because you usually can't make that decision yourself; you're not the project manager for the content, which as a developer you usually are not.
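The kind of catalog-based inventory those statistics views produce can be approximated in a few lines from a debug prompt (a rough sketch, not the actual code from collective.migrationhelpers):

```python
from collections import Counter

from plone import api


def content_stats(context=None):
    """Print a quick bird's-eye view of the content in a site."""
    catalog = api.portal.get_tool("portal_catalog")
    brains = catalog.unrestrictedSearchResults()

    # How many items of which portal_type are there?
    types = Counter(brain.portal_type for brain in brains)
    for portal_type, amount in types.most_common():
        print("{}: {}".format(portal_type, amount))

    # How old is the content (items per creation year)?
    years = Counter(brain.created.year() for brain in brains)
    for year, amount in sorted(years.items()):
        print("{}: {} items".format(year, amount))
```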
So the second big lesson is divide and conquer. In an abstract way that means: deal with one problem at a time, remove and replace one add-on at a time, and ignore problems that don't block you — put them on a to-do list; they may just have gone away in Plone 5.2. That is important advice. Divide and conquer also means dividing your migration into a multitude of upgrade steps. Have one upgrade step for every single thing you do, so you can run them separately, either from a pdb or with portal_setup. So these are a couple of upgrade steps — migrate images, migrate everything else to plone.app.contenttypes, uninstall Archetypes — and only when this has actually worked enough times and you're content and don't want to click so many buttons anymore do you combine them into one upgrade-step group, as Plone itself does. Running them separately gives you an individual commit after each upgrade step, as long as you run them through the user interface. If step 2 takes five minutes and in step 3 you get an error, and you ran them sequentially without a commit in between, you'll get very angry at yourself, you'll hate yourself — so don't do that. Also, you might be tempted to use branches for your upgrade steps and migration code. Don't. In Plone we have things that help you avoid that: one is moving imports into the code and making them conditional, so the step simply does nothing when the thing isn't there anymore — like, no more LinguaPlone because you're now on Plone 5.1. You can also register upgrade steps conditionally with a zcml:condition such as "have plone-5", as Plone itself does. You can decide to register your steps like this — here's one example: only on Plone 5 should you be able to run this step to uninstall Archetypes. So this sounds really easy, but it's totally not; there are a couple of nice tricks you should use and keep in mind. One thing that is super obvious, which nobody does, is to just remove 98% of the content from the portal you want to migrate. I often have huge databases to deal with in migrations, and okay, my laptop is fast, but it still takes a lot of time to move that data around. So write something like this, which deletes all files and images that are bigger than one megabyte — just one example. Don't remove any folders; but if you have eight help centers and 50,000 documents and 900,000 news items, just remove 99% of each of these item types and deal with everything else. Keep the content structure in place and don't delete any folders, because (a) they're not heavy — I mean in megabytes — only their content is, and (b) this will save you from running into issues later: local roles and portlets are mostly assigned on folders, and those are things that sometimes break your migrations. So keep anything container-ish, keep everything there — unless it's something like a PloneHelpCenter, which is always structured the same way — but if you have Folders or other folderish types, keep them around. You can still fix any corner cases after your migration has run through; I showed that already.
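The slim-down script mentioned above is roughly this (a sketch for throw-away test copies only; get_size() returning bytes and the check_linkintegrity keyword are assumptions that hold for recent Plone and plone.api versions as far as I know):

```python
from plone import api

ONE_MB = 1024 * 1024


def remove_large_files(context=None):
    """Delete Files and Images over 1 MB to slim down a *test* database.

    Never run this against production data. Folders are kept on purpose
    so local roles, portlets and the content structure stay intact.
    """
    catalog = api.portal.get_tool("portal_catalog")
    for brain in catalog.unrestrictedSearchResults(portal_type=["File", "Image"]):
        obj = brain.getObject()
        # Assumption: get_size() returns the blob size in bytes.
        if obj.get_size() > ONE_MB:
            api.content.delete(obj=obj, check_linkintegrity=False)
```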
Another tip I got from Alessandro: do not migrate blobs, because the blob itself will not change — neither from Archetypes to Dexterity nor from Python 2 to Python 3; the PDF that lives on the file system is always going to be the same. If you use an add-on called experimental.gracefulblobmissing — which actually even works with RelStorage now; well, I haven't merged that yet, but it's there — you just move the blob storage away to a backup folder. The add-on will create an empty file with a marker text in it for everything that it touches; that's really fast because it's a very small file, and you can re-add the blobs once the whole migration is done. Since the UIDs in any sane migration stay identical, the path to the blob file also stays identical. You do need to keep the blobs in mind during cleanup: if you decide "ah, I need to remove this and this and this", then for that you do need the blobs, because you really want to lose those blobs as well — so don't forget that. That is also a nice, tiny thing: when you copy a big database from a production server, or some other server, to another machine, also copy the Data.fs.index, because re-indexing a really big database on startup can take up to half an hour, depending on how big it is. Next: you have to forget the past and forgive your sins. You need permission for all of that — well, it's a technical requirement anyway, but you need to talk to your clients. We're going to remove all revisions, for example; this is a five-line piece of code which changed a Data.fs from 21 gigabytes to 3 gigabytes and the blob storage from 91 gigabytes to 50 gigabytes, just by removing all revisions. And yes, binary files can be versioned — you should not do that, but some clients do; that is how we went from 91 to 50 gigabytes, because binary files were versioned. An alternative is to use collective.revisionmanager if you want to control this manually: you can decide how many revisions to keep, for which objects, and so on — that's nice. The second part of your past is previous commits: you need to pack the database. It takes a while; the important thing is to set it to zero days. Keeping zero days is very important when you migrate to Python 3 later on. Okay, now let's go to the hard part: add-ons. Here's a simple example that removes an add-on that is easy to remove, and after that it's actually gone. It's a good pattern — you could add more logging, but it's fine. Moving on to a slightly more complicated example: this is also a very small add-on, but it changes existing content. When you remove an add-on, it sometimes has added additional views; you have to unregister those in Generic Setup, for example, so that they're no longer allowed, but you also have to unset them on the objects that use these views. This is one small example; the important thing is to first drop the layout attribute or property from all objects that have it — it's like a replacement for default views; I don't really recommend it, but yeah. Okay, a bit more advanced: here is a portlet. Portlets are content — portlet assignments, in this case — and you have to go through the site and find them. In this case I actually knew where it was — it's in an example intranet, that's where it was, so it was easy to find — but the package also contains code to find portlets of a certain kind anywhere, and you can use that to remove portlets.
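Removing a particular kind of portlet wherever it is assigned boils down to walking the portlet managers, roughly like this (a generic sketch using the standard plone.portlets API; the assignment class name is a made-up example):

```python
from plone import api
from plone.portlets.interfaces import IPortletAssignmentMapping
from plone.portlets.interfaces import IPortletManager
from zope.component import getMultiAdapter
from zope.component import getUtility


def remove_portlets(context=None, class_name="OldAddonPortletAssignment"):
    """Delete every portlet assignment whose class has the given name."""
    catalog = api.portal.get_tool("portal_catalog")
    # Check the site root plus every folderish object, since that is
    # where portlets are usually assigned.
    objects = [api.portal.get()] + [
        brain.getObject()
        for brain in catalog.unrestrictedSearchResults(is_folderish=True)
    ]
    for obj in objects:
        for manager_name in ("plone.leftcolumn", "plone.rightcolumn"):
            manager = getUtility(IPortletManager, name=manager_name, context=obj)
            mapping = getMultiAdapter((obj, manager), IPortletAssignmentMapping)
            for assignment_id, assignment in list(mapping.items()):
                if assignment.__class__.__name__ == class_name:
                    del mapping[assignment_id]
```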
Here's something that's tricky. The portal_quickinstaller has its own registry that stores information about all add-ons that were installed, and depending on the age of the add-on and the state it was in when it was first installed, this contains different information, including types. Some add-ons have a types.xml that contains all the types they set additional information on, and not only the types they actually register themselves — a simple beginner's bug. The result is that upon uninstalling such an add-on, those types will be gone. If one of those types is Folder, like in this case, or Collection, you're in big trouble, because then you have to reinstall or re-register the type, taking it from plone.app.contenttypes or wherever it came from. But when uninstalling an add-on you can pass a modified cascade that is different from the default cascade in the quick installer, and that will save you from that trouble. The same thing happens — or used to happen — with the registry: it was totally nuked when uninstalling some add-ons and nothing worked anymore unless you reinstalled the registry, which was easy but not obvious, and the error message was not "oh, you just lost the registry, please reinstall it" but something much more obscure. Just quickly about profiles: you can and should use uninstall and upgrade profiles, like Plone does. Personally, I admit that I like the approach of having all the migration stuff in Python, so I only have to look in one place to see what it actually does, but some things are much, much easier to achieve with Generic Setup than in Python — so I'm kind of on the fence about this. Here's an example of an upgrade step in Plone itself, this is how it's registered and — oh, I was supposed to show the structure in my editor now, which I don't have; okay, you can check that yourself: it is in plone.app.upgrade, there is a folder called to_rc4 (release candidate 4), and it contains XML files that remove a view, an actions.xml, a toolset.xml and stuff like that. Just look up these examples. Okay, moving on to persistent components — this is hard; there are really long blog posts about it. You can uninstall utilities, tools and components with Generic Setup, but they don't always die the way you want them to, so you can do it in Python as well. Here's one example from the package I released that removes utilities and adapters used by LinguaPlone. It doesn't hurt to run this code 50 times — I actually have a migration where I run it after each individual upgrade step, because I had cases where, I don't know, maybe it was my fault, a commit didn't go through properly, so I just put it at the end of every single upgrade step and it worked well. I'm not going to walk through the code, but if you know what you want to remove, removing it is actually really simple: you get the utility or subscriber by its name or interface and unregister it — bam, not that hard.
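The simple case of that looks like this (a sketch of the plain zope.component API rather than the LinguaPlone-specific code from the package; the dotted interface name is hypothetical, and truly stubborn leftovers need the more thorough cleanup the package does):

```python
from zope.component.hooks import getSite
from zope.dottedname.resolve import resolve


def remove_persistent_utility(context=None,
                              iface_name="collective.oldaddon.interfaces.IOldTool"):
    """Unregister a persistent local utility by its interface."""
    iface = resolve(iface_name)  # resolve the dotted name to the interface
    sm = getSite().getSiteManager()
    util = sm.queryUtility(iface)
    if util is None:
        return
    sm.unregisterUtility(component=util, provided=iface)
```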
But since it is hard to figure out what to remove, and it's a tedious process to iterate over that — okay, something is not dying, let's write a method to kill it — it is often easier to simply patch it away, and there is a nice tool called alias_module, in plone.app.upgrade.utils, that helps you get rid of things that have a hard time dying. This is an example for webdav: it keeps IFTPAccess importable under its old path, so the persistent data in your database can still be unpickled when a pickle references it, but it replaces whatever was there before with an empty BBB interface. BBB means backward compatibility — I know, that doesn't quite make sense, but someone came up with it at some point. Here's another example from collective.easyslideshow: not a core thing, but an add-on that had a really hard time dying, and this was a very easy way to get rid of it. And if you feel dirty looking at this, you shouldn't, because this is how plone.app.upgrade actually looks: these are only the first 30 lines, the file is much, much longer, it runs every single time you start Plone, and it patches away a lot of things that Plone got rid of over the years. Some of these things are really hard to uninstall and clear out of the database, because there might be old pickles that are no longer used — but if you go back in your database, like when you revert commits, they are resurrected again. That is a really hard technical problem, and this solves it in a very nice way.
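Using alias_module to let stubborn pickles unpickle against a harmless placeholder looks like this (the dotted name below is a made-up example; the helper itself lives in plone.app.upgrade.utils as far as I remember):

```python
from plone.app.upgrade.utils import alias_module
from zope.interface import Interface


class IBBB(Interface):
    """Empty placeholder interface, kept only for backward compatibility."""


# Any pickle in the database that still references the old dotted name
# will now resolve to the harmless placeholder instead of blowing up.
# Point the (hypothetical) dotted name at whatever your tracebacks mention.
alias_module("collective.oldaddon.interfaces.IOldSettings", IBBB)
```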
Moving on to LinguaPlone. Very important: migrate from LinguaPlone to plone.app.multilingual while still running Plone 4.3 — on Python 2 — and while still using Archetypes. There is a built-in migration in plone.app.multilingual that actually works fine, but it mostly works if your content is not a total mess — and your content is very likely to be a total mess, because editors in LinguaPlone had the ability to do things that are not useful: for example, have a folder in one language with mixed languages inside that folder, while parts of those translations live in a different folder again — a complete mess. This is some code that you totally should read right now that helps you clean this up. Somewhere in there are the two lines of code that run the whole migration from LinguaPlone to plone.app.multilingual; they're conveniently called step one and two (and the other one is called step three) — so: call the methods step one and two, I like that. Everything else is in the package. It tries to make an educated guess — it works with two languages only; you can try to adapt it — and it tries to move things around sensibly: if your content doesn't have a language it guesses the language from the language of the folder and from the language of the majority of the content in the folder; if it's a different language, it creates a translation of the parent folder; for untranslated folders it looks at what language the content has, and so on and so on. It does a lot of stuff and you can use it for your own good and at your own risk, please. I'm not going to walk through it, that would take too long. Also, yeah, look at the whole linguaplone module in collective.migrationhelpers if you still have that. So once you did all that — cleaning up, removing add-ons, slimming your database, removing revisions — you can safely upgrade to Plone 5.2, either from Plone 5.1 or from Plone 4 or Plone 3 or whatever you have. Do that while still on Python 2, do that keeping Archetypes if you have it, and keep multilingual enabled if you have it. If you come from Plone 4, I suggest you upgrade directly to Plone 5.2 — or rather 5.2.1, which will be out soon; I saw a pending folder on dist.plone.org already, so expect it within the week. In some rare cases you might want to upgrade to 5.0.10 or 5.1.6 first, but I haven't actually had such a case. So how do you do that — Plone 5.2 with Archetypes? That's easy: just add the archetypes extra to your buildout and it will pull in all the Archetypes dependencies. But don't stop there; don't run Plone 5.2 with Archetypes in production — that is just insulting to everyone who put work into Plone in the last five years. Go further from there. There's also documentation about the changes in Plone 5.2; they're really well documented, so please RTFM. And then migrate to Dexterity in 5.2. I've covered this a couple of times already, so I'm not going to go into too much detail, but I'm going to show you a couple of best practices, especially for sites with a lot of content and complex setups. If you have a fairly default site, you can use the migration form for it if you want — that's fine. Automate it, migrate one type at a time, start with the folders, and again, first try it with a reduced database. So this is an example to migrate all Files from Archetypes to Dexterity. This is not new: it has been in the documentation of plone.app.contenttypes for many years now, and it works fine. It has been improved — that's why I say use it in 5.2. For example, the plone.app.contenttypes migration patches SearchableText and a couple of other things, which makes it much faster because you skip indexing: your SearchableText is going to be the same anyway, so why would you want to reindex just for the sake of migrating? If everything works fine, do it all at once. Even Topics work out of the box — you can migrate Topics to Dexterity Collections, thank you. Much more interesting: custom content. For example PloneHelpCenter — who here uses PloneHelpCenter? It's easy to migrate, which is why I picked it for this demonstration: it's mostly default content with constraints and a couple of views that look slightly different, and it has one additional feature that folders don't have — managing versions — but my client doesn't use that feature at all, so I decided to just migrate it to default content like Folders and Documents. So this is an example to migrate the HelpCenter how-to folders: as I said, it's a couple of folders that contain how-tos, with a constraint, and this is all you need. This does everything — it could be run from a pdb — it goes through the whole site and turns every single instance of a HelpCenter how-to folder into a normal Folder. Bam.
Okay, there are a couple of more complicated items. For example, the HelpCenterFAQ becomes a Document, so you can specify a field mapping to map its text field to the Document's text field and say it's rich text — so it migrates the HelpCenterFAQ to a Document and keeps the rich text. You need to specify these fields because the migration helper — the "migrate custom AT" thingy here — doesn't know about HelpCenterFAQ. It knows about all the default content types in Plone, so for a migrated Document it knows there's a rich text field to migrate, but for HelpCenterFAQ everything that is not Dublin Core or one of the default fields needs to be specified here, in a very simple way. You can also specify something more complicated. This is a nice pattern you can use if you have a content type that you want to migrate to default content, where the old content type is kept mostly for archive purposes but had, like, 50 rich text fields and five text fields on top of that — and you don't want that anymore, you just want the information that's in there. So this is a field migrator that concatenates these text fields one after the other, adding a small heading with the name of the field before each one. So you have, like, 20 fields and now they're in one rich text field — because nobody is really working with the PloneHelpCenter anymore; well, they're using the content, but now it's not called PloneHelpCenter, it's called a Document, and you use sensible headings to structure your content instead of a content type. This would then migrate the HelpCenter itself, because the HelpCenter has an additional field called rights — I actually have no idea what it holds, some text — so I append that to the text field that it also has. It has a rich text field (I should have specified text-to-text here as well, I forgot that) and also the rights-to-text mapping, so first it's text to text and then rights is appended to text afterwards. A nice pattern you can use. Once that's done — the PloneHelpCenter has a lot of content types, as you can see — you migrate the containers first and the items later, and you're done. All the methods look pretty much exactly like this, so they're super short, but there's one for each content type; that's how it works.
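For instance, the FAQ method spelled out looks roughly like this (a sketch: the field-mapping keys follow the plone.app.contenttypes documentation as I remember it, and 'answer' as the Archetypes field name on HelpCenterFAQ is an assumption, so check both against your add-on):

```python
from plone.app.contenttypes.migration.migration import migrateCustomAT


def migrate_helpcenter_faqs(context=None):
    """Turn every HelpCenterFAQ into a plain Document, keeping the text."""
    fields_mapping = (
        {
            "AT_field_name": "answer",  # assumed field name on the old FAQ type
            "DX_field_name": "text",
            "DX_field_type": "RichText",
        },
    )
    migrateCustomAT(
        fields_mapping,
        src_type="HelpCenterFAQ",
        dst_type="Document",
    )
```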
For content types where multiple items are involved, like PloneFormGen, it's different: you don't want to just port the FormFolder to an EasyForm folder and then each AT field to some other content type, because that's not how EasyForm works — it stores its fields differently. So you need to write a more low-level migrator with this pattern, but this pattern gives you all the freedom you need. With this "migrate PloneFormGen using JSON" approach, the migrator is called for every single instance of a FormFolder: self.old is the PloneFormGen FormFolder, self.new is a new instance of EasyForm that already has the title, the description, local roles and all that nonsense, and then you can build the data as JSON from the old form fields — a method that doesn't exist yet — and pass it to something like an "easyform from JSON" helper, which can put in whatever is required to move the Archetypes field definitions to JSON and then turn them into the supermodel XML schema. But there are alternatives. One is the in-place migrator from ftw.upgrade. It is mostly interesting for folders, because migrating folders can be very slow: what the default migration does is create a new folder and move all the content from the old one over, which can take a long time if you have a large tree — this is why I say you need to reduce your database size. In the default migration we patch away a lot of indexing and link-integrity checks, all the things that are really time-consuming, but if you have a very large tree it can still take quite some time, and this would be an interesting alternative to look at. It's well documented; I haven't tried it, I have to say. And I also have to ask: why is there no pull request for plone.app.contenttypes that replaces the default migration of folders — not every single step, but the moving of content from one folder to the other — with this? I'm super open to discussing it, because the default migration for folders uses the code from Products.contentmigration. The second alternative is another beautiful thing I learned from Alessandro Pisa — he's a well of beautiful information. He gave a talk in 2017 and proposes this pattern (there's actually also a helper method in plone.app.contenttypes that does the same thing): it changes the base class of the object. This is again very useful for folders: you simply rip out the old base class, push in the new base class, and the content is still there — you still need to re-instantiate the internal _tree attribute, but you don't have to move the content from one object to another. That is useful. If you're interested in that, talk to Alessandro; he has done a lot of big migrations, maybe using that pattern by now. It would be nice to have some more documentation or blog posts about it, but the talk is really good, and there are nice examples in it.
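The base-class swap boils down to something like this (a deliberately simplified sketch: plone.app.contenttypes ships a proper helper for it, and a real migration also has to fix the folder's internal _tree and reindex the object, so prefer the helper or Alessandro's examples over this toy version):

```python
from Acquisition import aq_base
from plone.dexterity.content import Container


def swap_base_class(obj, new_class=Container):
    """Replace an object's class in place instead of copying its content.

    Much faster for big folders, because the children never move.
    """
    unwrapped = aq_base(obj)
    unwrapped.__class__ = new_class
    # Mark the object as changed so the ZODB persists the new class.
    unwrapped._p_changed = True
    return obj
```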
So then you're on 5.2, you're running Dexterity, and you need to — oh god, I skipped this part: you need to remove Archetypes after you migrate to Dexterity. I forgot to write this chapter because I thought my talk was at two o'clock, after lunch. But there is a method in the package I released that removes Archetypes, because there are a couple of tools in there that, again, won't die easily; the pattern in there, and what I pasted on community.plone.org, will deal with that. Then start up Plone without Archetypes — remove the extra, still on Python 2 — and you should be good to go. Okay, the migration to Python 3 is also really well documented; I'm not going to repeat it, I'm pointing you at the documentation. It's all there, please read it: it's well documented how to port your add-ons and also how to migrate your database. I'm not going to switch back to the browser, but there's a lot there. Keep a copy of your database, please — that's the basics, don't do a migration without a copy. This is roughly how it looks: you invoke this one command; this is the ideal output because it's a very small site, usually it's bigger. Before you do that you pack your database, again to zero days, because the old transactions will not be ported, as far as I know. And an important piece of advice: I would ignore most of the things that zodbverify says, as long as they don't stop the migration. If you have problems after startup, again, alias_module is your friend: it will help you patch away any problems with stuff that is still there, and you just keep it — the site is probably not going to live 20 years into the future, so for the 18 years that it will live, it can live with a couple of additional alias_module calls. Another tip: if you use RelStorage, migrate the data to a plain ZODB filestorage first — this is how you do that, with zodbconvert — and then you get a Data.fs, because it's just much easier to move a Data.fs around than a Postgres database, at least for me; your mileage may vary. Also, the Python 3 migration has been tested with a ZODB filestorage; it has not been tested so far with the content staying in RelStorage, Postgres in this case. And then you clean up again. There are a couple of things that are still broken afterwards and not yet taken care of; they need to move into core, but I put them in this package for now because I'm super busy — yeah, I know, like everyone else. One example: the SearchableText index will be broken — it is not automatically converted — and this takes care of that. Again, it's an upgrade step; just copy and paste it or import it and run it. It checks the first item, and if it's bytes, it has to clear the index and then rebuild it. That takes a while, but you only need to do it once. Also, some event indexes in old databases still contain old-style DateTime values instead of the new ones, and navigation portlets will be broken if they have a custom navigation root, because that's a computed value which is not properly migrated — where are my post-Python-3 fixes — fix portlets: it's a computed value, and this is the example of how you get at any portlet, and here you fix the portlet, just set it to something else. Five minutes, excellent, if I can manage, if I find my mouse. Then you finalize, which basically means doing some cleanup: you enable a new theme, or re-enable the one that got disabled. There are examples for all kinds of stuff in that package, including dropping your whole theme; there is code for removing all skin templates, skin customizations and portal view customizations. Please do that before you migrate, because otherwise you will still have them in the database afterwards and you will wonder why you can't access a certain folder tree — because there was an overridden template that no longer works. So yeah, there's a lot of useful stuff. One more important thing: when you migrate a site from Plone 4 to Plone 5.2 and don't create it anew, the dropdown navigation will not be enabled by default — it's set to zero levels, I think; maybe we changed that, I can't remember, things are changing fast. And the other thing is the registry: sometimes in the whole process your registry might be gone and you will get weird messages — I said that before. That's easy to fix by just loading the plone.app.registry settings again, and never say purge_old=True, because that will nuke your site.
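The SearchableText fix mentioned a moment ago is essentially this (a sketch; the real helper first checks whether the index actually contains bytes, and the ZCatalog method names here are from memory, so verify them on your version):

```python
from plone import api


def rebuild_searchabletext(context=None):
    """Clear and rebuild the SearchableText index after the Python 3 port.

    The old index still contains bytes, so text (str) queries no longer
    match until the index is rebuilt.
    """
    catalog = api.portal.get_tool("portal_catalog")
    index = catalog.Indexes["SearchableText"]
    index.clear()
    # Reindex only this one index; a full clearFindAndRebuild() would
    # work too but takes much longer on big sites.
    catalog.manage_reindexIndex(ids=["SearchableText"])
```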
So why do it this way? Because there are lots of talks along the lines of "we have this magic code that does the migration" or "we have this transmogrifier pipeline at our super company and we're doing it this way" — I hear these interesting things, and then I also hear the question, like on Wednesday, "how can Plone gain more market share?", and I get a severe case of cognitive dissonance, because I think Plone needs a very good out-of-the-box experience for migrations, and it doesn't help Plone if every company has its hidden-away migration code and tries to sell migrations. I don't want to sell migrations; I want my clients to be able to do their migration themselves. If it's a bigger project, sure, you need someone professional, and they probably want a new theme and new features, so it's really good as a project — but the important thing there is the new stuff, not the migration. I don't want my clients to pay lots of money just because "we need to upgrade to Plone 5 and that's going to cost you 20,000 euros, and it can't be done any other way." I really hate that, and for this reason I've written the migration code from Archetypes to Dexterity, I've written — or helped write — the migration from Python 2 to Python 3 and all that stuff, and I've written documentation and examples and given talks about it. So please use it, and if there is an issue — speed, for example, or something broken — file a ticket; don't send an email, file a ticket and ask the community to help you, because we should all be using this. If you have custom code that fixes a problem you had, or custom code that migrates, I don't know, PloneFormGen or something else into something new — I would love to see something that migrates Mosaic to Volto, though that's probably not possible — contribute it to the community, that would be so nice. Here's the package again, here again the short summary of how you do it and in which order, and one last thing. I guess everybody thinks this is going to be super hard and tough, so let's just try it. Here is a site I have migrated to Python 3. It's an intranet from a client. It uses a Mosaic front page now, so we're not moving to Volto right now — they're a bit conservative. So this is — oh, you can't see anything — this is it, this is the site. And this other thing is Volto. So while Volto starts up, I'm going to go to the site, localhost:8080. There it is. I log in, I install plone.restapi — that's what I actually already did. Then I go to localhost:3000 for Volto and see what happens. And hey, there's my content, there's my website. The front page is Mosaic, so it doesn't show up, but all the content is there. If I go to — this is a folder with a custom content type called staff member; I guess everyone has a content type called staff member somewhere — thank you, I'm almost done — and you click on it, it contains all the content: you get all the fields from the database, all the data. Migrating to Plone 6 is actually nothing: the only thing you do is start up Volto, install plone.restapi, and that's it. There are, I know, things still to do, and I'm very excited to hear the talk about the migration from Dexterity form schemas to blocks and how the folks at kitconcept approach that — I'm even more excited to see the code released. Again, thank you for your time.
|
In recent years we have migrated a good number of sites from Plone 4 to 5, from Archetypes to Dexterity, from LinguaPlone to plone.app.multilingual and more recently from Python 2 to 3. Some migrations even combined all of the above. In this talk I will try to cover all the technical aspects of such large-scale migrations, will walk through many code-examples and discuss best-practices. All upgrade-steps and documentation will be provided for you to reuse.
|
10.5446/55200 (DOI)
|
So, I'm here to talk about our experience adopting Plone this year. I have been participating in the Plone community for years, but this is the first time we are using Plone at my actual job, so it took a long time. My talk will not show any technical stuff, but the cultural and economic challenges we have in our public organization. I have been a developer since 1991, I have a master's degree in software engineering, and I am a developer at the town council. I founded the Python user group of Paraná and a Coding Dojo group, I helped organize the Python Brazil conference in 2010 and SciPy Latin America 2018. Well, I am very proud of being a member of this community. This is a picture of our city hall and town council. The portal I am talking about was developed by Interlegis. It is linked to the Senate — kind of a department of it, but more than that. They support the Brazilian legislative power, and every city that has a town council can use their products. They give training, free software and infrastructure, and they host around 1,500 portals. There are some links over there — interlegis.leg.br — and they have a GitHub repo. The product we are trying to use is Portal Modelo. It was made for city halls and town councils, it's free software, they give workshops around the country for everybody who wants to use it, and it follows the Brazilian transparency law. The accounts courts in Brazil recommend that people use Portal Modelo because it is very strict about the transparency law; it is recommended by the accounts court of the Rio Grande do Sul state. East Timor, near Australia, is also using Portal Modelo, so it's not just for Brazil — it's for everybody, although the translation is mostly Portuguese, so it's easier to use in Portuguese-speaking countries. There is a derivation of Portal Modelo called Portal Padrão, for the executive power. The Brazilian government uses it, it implements the government's digital identity, and it was inspired by Interlegis and Portal Modelo. They also have GitHub repositories. So, installing the portal — a Plone site — is kind of easy: you can run five or six commands and you have it running: git clone, git checkout, python bootstrap, buildout, instance start. It's not quite that easy, but it's similar to that. But when you have this installed, what do you do? What about the maintenance, the backup, the restore, the availability, and so on? You have to deal with a lot of things, and this technical stuff is handled by people, so the people have to know how to do it. And the content — who will manage it? Who in our organization is responsible for providing the information? Which data should be migrated from the old portal or from older systems, and how will updates and upgrades be managed? Do people have the necessary knowledge to use the portal, to create content, to publish and so on? Are people comfortable with the portal going online — do they feel safe putting the portal online? Then there is another kind of challenge, the cultural one. Attachment feelings: does this new thing have this piece of my old software? People have been using something for years, then they get something new, and that piece of their day-by-day job is not there, so they tend to refuse to use it. New-shiny-tools envy: when I was in Timo's talk — there is all this cool new React stuff going around — I need it, because I just need it, it's beautiful. So people want these new things. And lazy-student feelings: who will teach us to use, manage and upgrade this thing? People expect a teacher, someone to help them learn.
It is hard to ask people, "here is the manual, just read it and do what you have to do." Maybe developers are used to that, but when it gets to the common, regular user, they want someone to teach them. And day-by-day inertia: how can I do my job using this new thing? I am used to doing my job without it; why do I need it? Now, about economics. How does software economics work for proprietary systems? Several clients finance the development and maintenance: the vendor sells a license to thousands of people and that money is used to hire developers and so on. In our organization we have a limit by law: we cannot hire a company for more than eight years. We cannot collaborate and we cannot share knowledge, because it is proprietary. And migration: when we have to change contracts and change companies, we have to migrate data. The old system was proprietary, the new system is also proprietary, and we have to find some way to migrate the data. It is also social: we have fewer friends. And the old culture: it is easier to buy a license than to buy development. It is hard to contract someone to contribute to free software, but it is easy to buy licenses. For the administration, buying licenses is easy: they know how to do it, they know how to write the documents, they know how the process works, so it is easier for them to hire proprietary software. And the supervising institutions, like the courts of accounts, are used to checking that kind of contract, so it is also easier for the courts to verify that everything is okay and there is no fraud. In free software economics, how does it work? The cost can be split across the institutions that use the software. They have the same needs and they are not competing; we are providing services for the people, for the citizens, so we have no industrial secrets. By law we are also limited to these four-to-eight-year development and maintenance contracts, but we can share the code base: we can contract one piece of the software, other institutions contract another piece, and we join it all together. So we can do the same thing proprietary software does. The community is collaborative. In November we have a meeting with town councils from around the country so we can share knowledge; it is easier, and we made a lot of friends. People are asking me on Telegram and Twitter whether I am going to our meeting in November. We miss each other, just like the Plone community. And in free software economics, I like to say that we have a bizarre way of doing free software contracts. If you want to know more, I can tell you what I think about it, but it is a kind of distributed sharing of the product: each one pays just a little bit, for a piece of the software, not for the whole thing, and we own the code; we are not buying licenses. So what are we doing to face these challenges? We researched what users do today: what their day-by-day is, how they can do something similar, how they are using the old portal, how they will use the new one. We are talking to people and asking about their daily work. We are planning to first deploy a minimum viable product: a portal with the minimal things we need to use it. We did not add any of the new, beautiful things, just the useful things for the first deployment. And we are participating in workshops and conferences like this one.
And the one I told you about will happen in November. We are also hiring a new theme that will be released under a free license. The portal is already there; someone already paid for or spent time on that work, and we are just buying a new theme, so everybody who uses the same software will get this theme, and all the cities that do not have the money we have will get that theme for free. We are collaborating with Interlegis and with the legislative technology group named GTEC. One of the things I am trying to propose is a sprint on Saturday and Sunday to translate products to Portuguese. Next year we intend to hire a new version of the product, 4.5 or maybe 6, with Volto; I am really impressed with Volto. We want to improve the transparency and open data of the portal through integration with third-party systems: we have some systems that are contracted and proprietary, but the data is ours, so we intend to publish it through the portal. We intend to provide reports and data to improve public policies in the city economy. Our intention is to improve the interaction with the population, to help them guide the city policies that the councillors will make, the laws that the mayor has to follow. This is where we intend to improve the portal, and we are accepting suggestions. If you have any, just grab me in the hallway and talk to me, or drop me a message. Here are my contacts: you can scan this QR code to get a contact card, or just take a picture; all my social networks are RamiroLuz, so you can find me. Thank you for listening to me and having me here. If you want to talk about it, we have more time; I was afraid of talking too much, so I hurried up. Any questions, suggestions or comments? Or if you want, we can debate something. Does anybody have questions? First an applause. Thank you very much. If nobody has one, I have, well, a remark; I should not be making remarks instead of questions, but maybe it is good that you seek out the people from iMio who are also at this conference and who are very active in building community portals for local communes in French-speaking Belgium. They have very extensive experience building portals for communes, for local governments, for supranational governments, so it could be great to compare and see how you can cooperate. Sure, sure, thank you. If we can help, of course we can cooperate. I think it is valuable to share our experiences and products. We also contribute to the collective with products that are really suitable for e-government. So it was not a question, it was a proposal. Okay, thank you. I am here, but I am representing dozens of people who are working on this; it is not just me, it is a lot of people. Anybody else? Then thank you again. Thank you.
|
After the evaluation of several tools we decided to use Plone as our portal. But this is just the first decision. There are several technical aspects to consider as well. It is just as important to pay attention to the people involved in the project, their background, culture and behaviors. Furthermore, public organizations' investments need to be effective and efficient to avoid wasting citizens' money. This talk will present the situations we faced during the adoption of Plone as our CMS.
|
10.5446/55201 (DOI)
|
We are going to do a technical presentation of Guillotina. Yesterday they explained why we created it, and I kept saying we would explain the asyncio part and the problems we were facing when we started the framework two years ago. This is the talk where we go through all the pain points we had using Plone, the ZODB and RelStorage, and explain all the decisions we needed to make in order to build a framework that fulfills our requirements. First of all, if you don't know me, I am Ramon Navarro, I have two companies, I am a Plone Foundation member and I am Catalan, and this is the demonstration in Catalonia last week. Hi, and I am Nathan Van Gheem, I work at Onna as a principal engineer, I am a Plone Foundation member and I don't really play guitar anymore; I just felt I needed to say that because Ramon always talks about playing music, and I am American. Normally I wouldn't say anything about that either, but you know. So we are just going to go topic by topic through the different areas we used to tweak performance and create a framework that scales to a lot of data in Python. You can read them on the slide as well: asyncio, the design of the database, caching and lifecycle management, indexing, file storage, and queues. The first thing we will go over is asyncio, and I want to stress that asyncio is really important for Python and the web in general going forward. Let's talk about how asyncio differs from a threaded app. In a threaded app, when an HTTP request comes in, it is assigned a thread and that thread is active for the duration of the request; once it finishes, the thread is returned for another request. So the number of simultaneous connections is limited by the number of threads in your application, and in Python that usually means two, because Python does not perform very well with lots of threads; the best way to deploy Python is with two threads per process. You can keep adding processes to scale, which is fine, but then you don't have any memory sharing. Moreover, it is about how you block in your application. In the threaded approach, a request comes in, you communicate with, say, PostgreSQL, maybe it happens to be a slow query, and you block for three seconds waiting for that query to finish. That thread is consumed for the whole three seconds; while the query runs, Python is not doing any CPU work, it is not doing anything at all, but it is still blocking and preventing other requests from being serviced. That is the request-per-thread model, and that is Django right now, that is Flask and Pyramid. Now think about how this applies when you communicate with lots of services: you send things to a message queue, you talk to Redis for caching, you do OAuth or some other authentication or authorization provider, you use S3 for file storage. All of these services are over HTTP or some kind of TCP connection, and now you are blocking on communication.
If there is anything slow in any of those services, you are blocking and you are not doing anything. This is where asyncio is useful: network I/O is non-blocking, you can communicate with all these other services with very little overhead per connection, and you can push the performance of Python where you could not before. Moreover, think about the context of microservices; even if you think microservices are crap, the reality is that you are still using a lot of other services. You are connecting to other APIs in a number of ways, you are at least using multiple database layers; most applications you develop now are not database-only, you are going to be communicating with some API, maybe an authentication service. So this is going to be a problem if you have any substantial load. To illustrate this further, I have an example comparing a CPU-bound application with a network-bound application. But let me back up, because I forgot to explain the downside of asyncio. Asyncio is a single-threaded approach to running your application: one process, one thread, and the event loop running your async code lives on that one thread. So if there is any CPU-bound code, it blocks, because no other code can run while you are executing on the CPU. It is really important that you do not block on CPU. With threads, if you have high CPU load, the interpreter can switch threads every once in a while, so one piece of code never blocks all the others. That is the downside: if you know you have CPU-bound code, you have to be careful about it. In asyncio there is a way to run CPU-bound code in a thread pool, so you just have to know what you are doing. Anyway, this demonstrates it: this is some code that just crunches numbers for half a second, and we can see how it behaves. The regular asyncio approach gets two requests a second, and the threaded approach, without adding threads, also gets two requests a second; it behaves the same. Asyncio with an executor, which is what I was talking about with throwing CPU-bound code into a thread pool, does slightly better than a threaded application with 20 threads, which you do not really want to do anyway, but that is just to show how far you can scale a single process. So the problem is not that different for a CPU-bound app. But for a network-bound app, the same sort of thing except we connect to a service that just delays for half a second before returning (an asyncio service, of course), the difference is this: asyncio was only bound by the number of concurrent requests the client library allows per host, some arbitrary limit on concurrent connections, and it could go further; the threaded app's performance is abysmal. Just to say that asyncio is amazing for the network, but there is a lot of other input/output on our computers where asyncio is also really helpful.
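To make that comparison concrete, here is a minimal sketch of the kind of benchmark being described; the half-second delay, the 200 calls and the pool size are illustrative, not the exact numbers from the talk.

    import asyncio, time
    from concurrent.futures import ThreadPoolExecutor

    async def fake_io():
        await asyncio.sleep(0.5)      # stands in for waiting on a slow service

    def blocking_io():
        time.sleep(0.5)               # same wait, but it pins a thread

    async def main():
        start = time.monotonic()
        # asyncio: all the waits overlap on a single thread
        await asyncio.gather(*[fake_io() for _ in range(200)])
        print("asyncio, 200 waits:", round(time.monotonic() - start, 2), "s")

        start = time.monotonic()
        # threads: throughput is capped by the pool size
        with ThreadPoolExecutor(max_workers=20) as pool:
            list(pool.map(lambda _: blocking_io(), range(200)))
        print("20 threads, 200 waits:", round(time.monotonic() - start, 2), "s")

    asyncio.run(main())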
We are not using that in Guillotina specifically, because most of Guillotina is a REST API and it is mostly network communication, but we also use asyncio a lot for, for example, long operations on the disk. And I am sure there is a lot of input/output communication with devices and with the kernel for which you could build asyncio drivers so they also run on the event loop. Everywhere you need to wait, asyncio is amazing. And subprocesses: you just start a new process. We have processes that start a lot of other processes and wait for them while doing other things, like serving HTTP so we can monitor the processes that are running. All of this input/output handling is so easy with asyncio. One more thing: Python ships with a standard event loop, a basic one that works really well for most cases, but there are other implementations, for example uvloop, which is much faster than the standard one and is built on libuv, the same library Node.js uses, and the Tokio event loop written in Rust, which is also really fast. The system is so pluggable that if you really need that extra 20% of speed, you can switch the event loop for something more powerful. So asyncio was the most important thing we needed; it was the first decision. The second thing we decided is that we needed a database, a place to store things. We started by storing things on the file system, in folders, and it was okay, but we needed a stronger way of storing with transactions, and a lot of databases offer that. We went and studied how RelStorage works; RelStorage has been around for a while and keeps growing, and we wanted a pickle system we could copy. So we came up with a design that is a bit inspired by RelStorage, and also by Newt DB, the database Jim built. Guillotina is traversal-based, so there are main nodes in the tree, and these main nodes are rows in a table, which makes it easy to do a SELECT on the table, do a dump, and get all the objects out of it. We also decided that a single row with a bytea field in Postgres is not enough to store all the information we want for an object, because an object can grow a lot and we need to include extra information. So we copied the concept of annotations from Plone, and we have annotation rows that link to the node object. If you do a SELECT on a Guillotina database, you will see rows that are the node objects, the basic nodes of the tree, and rows that are annotations of those node objects. Then, since this is a tree, we needed to define pointers, either to the children or to the parent. We decided to point to the parent: each row has a pointer to its parent, or to the node it is an annotation of. Why? Because it is much easier to avoid conflict checking when you only ever point to one row. At the beginning of the design we did not use the JSONB indexing from Postgres; we decided that for now we would only store the pickle in a bytea field in Postgres, and indexing would be another system's job.
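To make the parent-pointer layout concrete, here is a rough sketch of listing a folder's children with asyncpg; the table and column names are only illustrative and are not Guillotina's exact schema.

    import asyncio
    import asyncpg

    async def get_children(dsn, parent_zoid):
        conn = await asyncpg.connect(dsn)
        try:
            # children point at their parent, so listing a "folder"
            # is a single indexed query on parent_id
            rows = await conn.fetch(
                "SELECT zoid, id, type FROM objects WHERE parent_id = $1",
                parent_zoid,
            )
            return [dict(r) for r in rows]
        finally:
            await conn.close()

    # asyncio.run(get_children("postgresql://localhost/db", "824f..."))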
We are microservices, so Guillotina only takes care of storing the main objects and the annotation objects, with transactions. These are some of the Postgres configurations we have right now. Mostly it is one big, simple table with all the information: a database is mounted at a point of the tree, that is one table, and you can have multiple tables mounted at different endpoints in Postgres and in Guillotina's URLs. Then there are foreign keys that make sure you cannot delete a parent without the children getting deleted. If, for example, you do a POST through the REST API to create an object and you specify the ID, it is a simple operation even if that ID already exists, because there is a primary key that does not allow duplicate IDs under the same parent; the database itself provides the consistency. But, related to that, you do not know there is a conflict until you try to commit, because, as we will keep coming back to, you need to cut out every possible place where you would do a lookup. When you POST to create a new object with a given ID, checking at that moment whether the ID already exists is always going to be a lookup; you always need a round-trip to the database to do it. So we said no, we will just wait until we actually try, and let the database do the constraint check for us. If the database raises a conflict error, we kick it back out as a conflict and the client can handle the 409. We are a REST API, so clients should know how to handle 409s. Another thing we use a lot, and we are really thankful Postgres gives it to us, is the sequence. Transaction IDs really need to be sequential, and Postgres itself offers a sequential ID that gives us that sequence number. There are two things this architecture makes complex (well, a lot of things, but two major ones). One is that we lost the history: the ZODB is kind of append-only, storing every transaction, so you can go back in time and decide how far to go. We have a snapshot of the current situation. We can implement versioning on top of the objects in the data model itself, but for transactions we support what Postgres supports, and we have a snapshot of the database; of course, as the previous talk explained, you can define a specific snapshot in time on the database and go back to it if you do not remove the history. The other difficult thing is the relation between children and parents. With the children pointing to the parent, ordering is really complex: you cannot store the order on the children, or if you do, you need to index it properly, so you would have to store the ordered list on the parent. If you are familiar with how this is done in Plone, ordering there stores the complete ordered list of the folder's contents on the parent, so every single time you add an object or change the order, you modify the parent object. We do not do ordering. The Guillotina CMS package does ordering, and it just sets an attribute, so it is a fuzzy, pseudo-enforced ordering.
So you are still only writing to the child, and that is the safest way to do it without huge performance implications. There is more to say about writing, but after explaining all of this, an image always helps. This is SQL output from a Guillotina table. You see there are two special rows, DDDDD... and 00000...: 00000 is the root object, the first object, and DDDDD is the trash. When we delete something, we point the parent of that element to DDDDD. Why? Because we discovered that the referential-integrity cascade delete, triggered when deleting folders with millions of objects, would pretty much shut down the database. So we do deletion as a batch operation: we point the object at DDDDD and then a batch job cleans everything up. You will see in several of these slides that we talk about vacuuming, or batch jobs to clean up; we need to do a lot of that. We cannot do everything in real time, because we want to answer fast and we want to provide integrity. If you delete a really large folder in Plone, it might take 30 minutes or so, because it does all of that in one single transaction, making sure that transaction is synchronized with all the other transactions that are happening, and then hopefully it actually works. All we do is set this attribute, and all of our cleanup and re-indexing happens in an async task or in a queue. That is how we manage large operations that affect a large amount of data at a time. Some other differences from RelStorage: of course we have the object ID, the ZOID. I am a romantic guy, I chose the name ZOID for romantic reasons, you may understand. The transaction ID is the sequential number. state_size, also a romantic name, stores the size of the state bytes. part is a field for which you can register an adapter so that each object can define which partition it is stored in; this is used for what was explained in the previous talk, that Postgres 10 and 11 support partitioning, so you can split your table into multiple sub-tables and use this attribute to decide which partition a row goes to. There is a column to say whether the row is a main object or just an annotation, and, if it is an annotation, the transaction ID used for the two-phase commit cycle. parent_id is the pointer to the parent. id is the real ID, the one that appears in the URL, which is different from, I think, RelStorage. Then type is the real Guillotina type, json is the serialization we want to use for indexing in case we use the Postgres catalog, and state is the pickle. One other thing about the ZOID: you can see that from rows 3 to 6, all the ZOID keys start with 824. If you are using something like CockroachDB (we have CockroachDB support as well), it distributes data by key sorting, and you want related data near each other when you query it. Without this kind of key prefixing, which lets Cockroach put related chunks of data on the same node, it would have to run queries across CockroachDB nodes and synchronize them, and that can be really slow.
That is the reason the ZOIDs are structured that way: the first three digits come from the parent, and each child gets the parent's three digits in front of its own ZOID, so everything ends up ordered on CockroachDB. We already explained a bit about deletion; deletion is complex, and keeping the system clean is complex. Right now we have two auto-vacuuming systems: by default Guillotina creates an asyncio job to clean the database, but there is also an option to disable that and run an external job manually. Okay, now caching. We use Redis to facilitate our caching, plus an in-memory cache; in that respect it is a lot like the way the ZODB caches objects. We have an LRU cache that is pretty fast and written in C. We have cache keys for all operations done against the database; we fill the in-memory cache and then use Redis to coordinate and invalidate between the Guillotina application servers. Optionally it can also push cache values out to the other application servers. Depending on your load that can actually hurt performance, because it puts a lot of pressure on Redis to push out all that data if you have a lot of writes going on. So how we usually deploy is: we just use Redis for invalidating cache across the application servers, everything else is plain in-memory cache, and we do not push out new values through Redis. The L2 cache would be Redis actually storing a cache of the pickles; I do not use that, but it is supported, and if you do not have a write-heavy system it can be helpful. In the design you always want to reduce the number of lookups: every operation you can potentially do against the database should be cacheable, and you should never need an operation that requires a lookup. As we mentioned before, an insert can produce a 409 that the client just needs to deal with; we optimize for the normal case instead of the edge cases, because that is what most of our requests are, and the client can carry some of the logic for those edge cases. Okay, one of the cool things asyncio gives you is tasks that run in parallel, meaning they interleave while waiting on input/output, since we cannot use more than one CPU. One of the things we have is request features: you have a request, and you can add tasks, functions, operations, whatever you want, that run after the request is finished. That helps, for example, for indexing, or calling an audit log, or whatever. It is so easy: you just take the execute utility in Guillotina and say "after this request, call this function with these arguments", and it will be called after the response has already been delivered to the client. So you can really have operations that run after the request has finished. We also have pre- and post-commit hooks (we are romantic about that too), so you can run whatever you want before a transaction or before a commit, but those run before the response has been delivered to the client.
So whatever you do there makes the request a bit longer, while whatever you add as an after-request task never affects the request time. Then there is another interesting thing we needed to build to make these asyncio jobs work better: a pool and a queue of tasks. We have something we call async utilities, singletons running in memory that do whatever you want, and Guillotina out of the box ships with a queue where we can put tasks: call this service and say I am alive every five minutes, call Celery, call this queue system, whatever. With the pool, you just get the pool and say "run this"; it can run forever and it will never block. Well, it needs to be asyncio code; if it is CPU-bound it will block the whole system, of course, but you can execute whatever you want there, short or long. And we have a lot of asyncio utilities by now: for caching, for indexing, for worker pools, a lot. The transaction mechanism: as Nathan said before, we do not want to hit the database, because hitting the database is slow, so we try to defer it as much as possible. We have a traversal system that walks the objects; if they are in cache, we do not need to touch the database. We go through security, we validate whatever JSON web token you have, and if we need to access a service for that, we access it; then we look up the view (there are some other steps in traversal, but basically you look up the view). We start the transaction at the moment we are about to execute the view, we run whatever is inside, and we close it as soon as the rendering finishes. We always try to defer as much as possible. We also try to be smart about read-only versus write transactions, and we are really proud of something simple: the HTTP verbs, GET, HEAD, POST, DELETE, PATCH. Why not use these verbs to decide in advance whether the connection is read-only or read-write? So on any GET or HEAD, Guillotina does not even open a writing transaction, just a read-only transaction that we are never going to commit; you cannot write to the database from a GET or HEAD. On POST, PUT, PATCH and DELETE we open real transactions. That helps, because you can then configure, for example, a Postgres replica that is only for read operations and send all the GETs there, so you avoid overloading the primary database. You can do a lot of things when you can distinguish read operations from write operations like this. Another important thing: we have our own concept of a transaction that we reuse in our tasks, and in Python 3.7 we now use context vars to store these transactions. But a Guillotina transaction is not a database transaction: we start a Guillotina transaction at the beginning of the request and end it at the end, but as I said, we keep the database transaction as small as possible, and if we do not need the database, we do not even open one.
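A generic sketch of that verb-based routing idea with aiohttp and asyncpg; this is not Guillotina's actual code, and the "ro_pool"/"rw_pool" names are assumptions for the example.

    from aiohttp import web

    READ_ONLY = {"GET", "HEAD"}

    @web.middleware
    async def txn_middleware(request, handler):
        # reads go to a replica pool and never open a write transaction
        pool = request.app["ro_pool"] if request.method in READ_ONLY \
            else request.app["rw_pool"]
        async with pool.acquire() as conn:
            request["conn"] = conn
            if request.method in READ_ONLY:
                return await handler(request)
            async with conn.transaction():   # write verbs get a real transaction
                return await handler(request)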
Finally, in this block about lifecycle management: we have an option called X-Debug. If you call Guillotina with it enabled, using a parameter called x-debug, you get really interesting debug headers where you can see the timing of each phase of the traversal, the performance of the memory cache, and how many queries you are doing. It is a really useful tool when you finish a call and wonder where you are spending the time. We also have packages to send metrics to Prometheus and to StatsD, so you can ship all this information out and see whether there is a problem. Indexing. Okay, we will go faster because we are running out of time. Indexing is a really complex problem. We store the pickles in the database and we wanted something fast, but indexing is also asynchronous. The first approach we took was to index into Elasticsearch: a process extracts the indexing information and sends it to Elasticsearch completely outside the transaction. That means the index may be out of sync with the main database; you may have objects that are not indexed, or information in the index that is no longer on the object, but it was a really fast first approach. For example, right now if you use Elasticsearch with Guillotina and Volto, you create an object, go to the folder, and you do not see it for about a second, because Elasticsearch needs that second to index the information. So it is asynchronous both from Guillotina's point of view and from Elasticsearch's, and that is a problem: you need jobs that periodically go through all your data making sure everything is indexed, or re-indexing it. There is a good solution to this problem with Kafka, which we will explain later if we have time. Then we also built transaction-tied indexing: since we store JSONB in Postgres, the index lives in the same transaction, and if the transaction fails, the information is not stored in the index. It is really fast and it scales, using JSONB with indexes for all the fields we index. The drawback is that the full-text search is not as powerful as Elasticsearch, so if your needs require Elasticsearch, you need Elasticsearch. There is an API to initialize the index, repopulate it and delete it, and it is not only per container (a container is what would be a site): in Guillotina a container is an index, and you can declare any folder to be a subindex, with multiple subindexes per site if you want. File storage: storing files is a bit easier than indexing. We are lucky that a lot of services provide this kind of backend: we support S3, Google Cloud Storage, and Minio, which is an on-premise S3 you can install in your Kubernetes cluster. You can also use the database, but as was said in an earlier talk, it is not recommended to keep large volumes of blob data in Postgres. And we are building a new driver for Guillotina to store on the file system as well, if that is needed.
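As an illustration of the transaction-tied JSONB catalog just described, here is a possible query with asyncpg; the table and column names mirror the dump shown earlier but are still only an approximation, not Guillotina's exact catalog code.

    import json

    async def search(pool, **filters):
        # the json column is written in the same transaction as the pickle;
        # @> is Postgres JSONB containment and can be served by a GIN index
        async with pool.acquire() as conn:
            rows = await conn.fetch(
                "SELECT zoid, id FROM objects WHERE json @> $1::jsonb",
                json.dumps(filters),
            )
            return [dict(r) for r in rows]

    # e.g. await search(pool, type_name="Document", review_state="published")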
What is important is that, since it is asyncio, any file going into or out of Guillotina, whether it is coming from the storage to the client or from the client to the storage, is a stream. We chunk it, whether you send it all at once or with TUS, with buffers from 256 KB up to four megabytes, and we stream those chunks to the service and back the other way. I remember that in Zope, a long time ago, I needed to download files locally and then return them to the client; with asyncio it is so easy to have this buffering of input or output while other tasks keep running on the same thread. It is also important to know that if you use S3, Google Cloud Storage or Minio, you need vacuuming in case some information gets out of sync between the storage and Guillotina. Queues; I will go really fast. Since this is asyncio, queues are really friendly, so we support AMQP, mostly for when you really need to be sure a task gets done and acknowledged; Kafka, which means you can log a sequence for indexing and have a consumer that indexes it into Elasticsearch, or assign tasks, more of a streaming service; and NATS with STAN, which is the best of AMQP and the best of Kafka together. Some other things we found while tuning Guillotina: be careful with reference cycles, objects that end up referencing themselves through a chain of references, because it is really hard for Python to garbage-collect reference cycles; it takes a lot of collection passes before they actually leave memory. Also, in asyncio, each task is an object that does not get cleaned up immediately, and if there is an exception, even something like a conflict error, the exception object usually wraps the local context of where it happened, so it might hold references to objects, and that might not be cleaned up right away, because the object holding the task information has not been cleaned up yet. I do not know the exact rules for when that happens; there is a weak-ref dictionary of tasks or something that does not always get cleaned up, and that can be a lot of memory sitting around. There is also the built-in profiler in Python, which is really nice for profiling CPU-intensive code, and a really nice library called line_profiler that gives you the time spent on every single line of the code you decide to profile. You can look at a line, see it took so many microseconds, then go into the function it called and see where inside that function the time was spent, and keep going down. And it is wall-clock time, not CPU time, so it includes things like a slow database connection; it can be really helpful for identifying the slow parts of your code.
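A minimal, self-contained illustration of the two profilers mentioned here (the Guillotina-specific command-line switches are covered right after this):

    import cProfile
    import pstats

    def crunch(n=200_000):
        return sum(i * i for i in range(n))

    # cProfile: which functions the CPU time is spent in
    cProfile.run("crunch()", "out.prof")
    pstats.Stats("out.prof").sort_stats("cumulative").print_stats(5)

    # line_profiler: wall-clock time per line (includes waiting on I/O).
    # Decorate the function with @profile and run:
    #   kernprof -l -v script.py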
Guillotina comes with built-in support for both of them: there is a --line-profiler command-line option and also a --profile option. This is the line_profiler output; if you want to profile a function you have to add the profilable decorator, and once you do, it registers that the function needs profiling and the output tells you, line by line, how much time was spent. That's all, thank you. Any questions? Hi, I noticed you turned off the autovacuum on PostgreSQL. Oh, okay, so you do your own vacuuming, not because the built-in one is unsafe. Okay, you answered my question, thank you. Yeah, because when we delete, we just mark an object as deleted, and there is an application vacuuming process that goes through those and deletes them individually, in a shallow way, instead of relying on referential integrity to clean them all up, because referential integrity, when it is a substantial number of objects, can take gigabytes of memory and destroy the database. Hi, maybe this is a crazy question, but how much SQL are you really using in Guillotina? I mean, if it is just a small part of SQL, maybe you could do more, reading and writing directly using the PostgreSQL libraries, for example. Even better would be a database built with asyncio directly, so it can be plugged into your application; that would get the best performance, but I do not know if it exists, so I am asking whether, when you did your research, something like that existed. Yeah, at the time PostgreSQL was by far the best option. There is this really nice asyncpg library, an asyncio driver for PostgreSQL, that is really fast, and when we started this, not all databases even had asyncio drivers. And PostgreSQL is really fast as long as you use it correctly. If you were going to use a database that speaks HTTP, there is overhead with that; it might scale horizontally, but then regular operations are not necessarily fast. And there is no database that is just asyncio. In the end it is still Python; it is not going to be as fast as C or Rust or C++ or Go, but we use Python because we like it, and there are other benefits to using Python. So this is fast in Python terms; you can still do a lot with Python. Instagram uses Python and asyncio, and they are pretty big. It is a matter of choices and trade-offs. Other questions? Comments? By the way, Instagram also uses Postgres. You use buildout, right? No. No? What do you use then? Just pip. I saw /bin in an example. Oh, that is just the virtual environment. Nice try. Other questions? Okay. All right. Sure, I agree with your lookup-prevention scheme, because lookups take time. The actual issue there is race conditions, because first you select, then you update. Yeah, we have a transaction ID that we check to make sure it is a safe operation, so that would work in your case specifically. But there could still be another operation that inserts first, and then you get a 409 because it is invalid, and then maybe another 409. So the client needs to take care of that anyhow. Yeah. Cool. Yeah, retries.
We have built-in retries for database conflicts too. Just like the ZODB has retries, it is kind of the same in that respect, but the client still needs to handle it in some cases. Okay. Thanks. Thank you. Bye.
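To close the loop on the 409 discussion from the Q&A, a hedged sketch of what client-side handling can look like with aiohttp; the endpoint, payload and retry policy are made up for the example.

    import asyncio
    import aiohttp

    async def post_with_retry(session, url, payload, attempts=3):
        # the server lets the database constraint catch duplicate ids and
        # answers 409; the client decides whether to retry, rename or give up
        for attempt in range(attempts):
            async with session.post(url, json=payload) as resp:
                if resp.status != 409:
                    return await resp.json()
            await asyncio.sleep(0.1 * (attempt + 1))
        raise RuntimeError("still conflicting after %d attempts" % attempts)

    # async with aiohttp.ClientSession() as session:
    #     await post_with_retry(session, "http://localhost:8080/db/site/", {"@type": "Item"})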
|
We are going to go through all the tricks, decisions and compromises taken in Guillotina and its add-ons to provide performance while keeping ease of use as a priority. Topics will include object and field storage techniques, cache storage and invalidation, catalog integration strategies, queue integrations, and data streaming.
|
10.5446/55202 (DOI)
|
The starting point is the requirement to have an installation process that is easy to repeat, documented and auditable. The first tool is Vagrant. What is Vagrant? It is a tool to create and manage virtual machines and containers, and you keep its configuration in a repository, so the same machine can be recreated whenever you need it. Then there is Fabric. Fabric builds on Invoke, a library to manage shell command execution, and Paramiko, which manages the SSH functionality. I call the methods of Fabric to install packages, configure servers like Apache, Monit, Postfix, LDAP and Postgres, and so on, or to download source code, compile it or run buildouts. The typical operations are running a command with run or sudo and transferring files. An interesting thing is running the same script on different machines, so you can, for example, update three machines with the same script. This is a simple snippet of code that shows run: you create a connection, where "web1" is the entry in your .ssh configuration, the configuration needed to connect to a machine. You run the command with server.run, in this case uname, and you can use the result to check whether the command ran without failing. The other typical method is sudo, which you use to launch commands as the system superuser. In this example I run useradd, or mkdir in a place where a normal user cannot create directories, and at the end I restart Apache. Then there is the command to transfer a file to the server. With Fabric you can create a script from scratch, launching all the commands needed to install the system. Usually I use some libraries, fabtools and cuisine. Fabtools, like cuisine, provides methods to manage system users and packages, and it uses the "require" terminology, with require methods similar to Chef or Puppet. For example, we have require apache, require group, require python, require user: the system installs the packages and starts the server. Similar methods exist for other operations, like adding a user or creating a database. Cuisine is very much the same, but the syntax is a little different.
For example, to update a file there is file_update: we have to modify a configuration, and in this case we change the Monit configuration to start with a delay of 60 seconds instead of 240. The command to add a user is user_ensure, and to install packages there is package_ensure. Fabtools and cuisine are very similar, but cuisine supports different packaging systems, so you can use apt instead of yum, or another package system that I do not remember. Usually I use Ubuntu or Debian, so for me cuisine and fabtools are very much alike. This is a simple example: I define a task to run on the system, a simple system check that runs uname -a, hostname and a couple of status commands. I run it from the command line, from the shell, with fab, the name of the host and the task I want to run. It is very simple. The real installation script is something like three hundred lines of code. I think you know Amazon Web Services, so what can I say. The best thing about AWS is that you can choose among very different types of machines: different sizes, different numbers of CPUs, different amounts of RAM, different types of network interfaces, and you can add disks at runtime and change their size at runtime, so it is very flexible if you need to change something. What I use most from AWS is EC2, EBS and EIP: the virtual machines, the virtual disks and the Elastic IP, which is the way to have an Internet address. And the snapshots, which I use as a simple backup tool, because you can make a snapshot of a disk, and the security groups, which I use as a firewall. How do I create a machine? Using boto3 or the AWS command-line interface. They are very similar; the only difference is that the AWS CLI can be used in Bash scripts, while boto3 is a Python library. In my case, I use the library to access these services. The reason to use a scripted installation is to have infrastructure as code, so you can launch the script and get the machine exactly the way you want. For example, we create the staging, development and production machines with the same script; in some cases we have a configuration file to change the size of the machine or the disk, something like that, but it is only a question of changing the configuration file, not the script. boto3 is a library that lets you access the services and create machines, change them, add disks and so on. This is a simple example: I create a connection to the EC2 service and launch a command to list the instances I have just created. In this other example, I create new instances using the name of the AMI, which is the equivalent of a Vagrant box, a preinstalled disk image, and the instance type, which is the type of virtual machine I want to use. This is another example: I create a volume in a specific zone with a size of 15 GB and a specific type; gp2 is the SSD type of disk. This is the command to start an instance. At the beginning I wrote the script not as a Python script but as a Bash script, but the commands are much the same; you can use one or the other. Okay, that's all.
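A small sketch of the run/sudo/put and task patterns described above, using the Fabric 2 API; the host alias comes from ~/.ssh/config as in the talk, and the specific commands are illustrative.

    from fabric import Connection, task

    @task
    def check(c):
        # run from the shell as: fab -H web1 check
        c.run("uname -a")
        c.run("hostname")

    def provision():
        server = Connection("web1")          # "web1" resolves via ~/.ssh/config
        result = server.run("uname -a", warn=True)
        if result.failed:
            raise SystemExit("cannot reach the host")
        server.sudo("useradd --create-home deploy")   # superuser operation
        server.put("monit.conf", "/tmp/monit.conf")   # transfer a file
        server.sudo("systemctl restart apache2")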
|
Or: how to build a complete system. It begins with the requirement to have an installation process that is easy to repeat, documented and auditable. We are going to discuss "vagrant" to create virtual machines, "fabric" to automate operations, and the tools to deploy on Amazon Web Services (AWS).
|
10.5446/55205 (DOI)
|
Well, like he said, this is my first conference and my first time speaking English to people, so bear with me. Thank you for your attention. The topic I am going to talk about is what I have worked on in the last year: data migration to Plone 5.2 and Volto. At kitconcept we have some websites that need to be deployed on the latest version, and we have some options to migrate. For Plone 4.3 to Plone 5 and later we normally use collective.transmogrifier, and for Plone 5.1 to Plone 5.2 we have the option to migrate the ZODB, so we could migrate from Plone 4.3 to Plone 5.1 and then migrate the ZODB. What happens in that migration is that there is a script which, if I am right, Philipp wrote (yes), and this script basically changes the structure of the pickles inside the ZODB to work with Python 3. The basic way to do it is: you run the script under Python 2, in the buildout still on Python 2, and you must not start the instance; then you update the buildout to Plone 5.2 and Python 3, run the tests, and start the instance. That is how it basically works. Sometimes there are problems related to interfaces (Philipp talked about that in his talk) and you might need to do something to fix them. Another option is to use collective.transmogrifier for this as well, and to me that looks cleaner, because the final database does not have any problems with persistent objects that do not match. This is where the name transmogrifier comes from: Calvin and Hobbes. Calvin makes a joke with a cardboard box and calls it a transmogrifier: something goes in and comes out different. Why do we use transmogrifier? Because there are many generic pipelines available for the common cases, it is easy to use, we have the flexibility to deal with different use cases (some clients have custom data and so on), and it is a brilliant use of the iterator design pattern. The way I find it easiest to understand transmogrifier is as a chain of pipelines. I made a rough example here where the inputs are some objects, say numbers. The first pipeline is the source, which yields a list of items; each item is modified inside a pipeline and then passed to the next link in the chain. Starting from the source: number one, plus five, less three, divide by two, multiply by six, and the end is 7.5. This happens for every object. That is how it works; there are some details, but that is the general idea. So what we basically do with transmogrifier, from my point of view, is build a production line: the object is the product that goes through many pipelines, through many hands, gets changed a little bit, and is handed to the next one, and at the end we get the final object working the way we need after migration. We have three use cases. I am sorry I cannot show the clients' names. The first is a large university client: a not-so-big database, many custom packages, and a buildout that looks like a Frankenstein. The second is a high-profile client website with a lot of data; the migration usually takes around four hours to run, it has some add-ons, and there is one custom report content type in Archetypes that we decided to split into ten different Dexterity content types. For the third client (this is not the client's public website), the site is an intranet, and there is a third-party system whose data we are going to migrate into Plone.
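The pipeline idea is easy to show with plain Python generators; this is only the underlying iterator pattern, not the real collective.transmogrifier API.

    def source():
        # first blueprint: yields the items to migrate
        yield {"_type": "Document", "title": "about"}
        yield {"_type": "News Item", "title": "launch"}

    def add_path(previous):
        # each later blueprint wraps the previous one, tweaks the item, passes it on
        for item in previous:
            item["_path"] = "/site/" + item["title"]
            yield item

    def drop_news(previous):
        for item in previous:
            if item["_type"] != "News Item":
                yield item

    pipeline = drop_news(add_path(source()))
    for item in pipeline:
        print(item)    # only the Document reaches the end of the chain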
So we will have to make a new pipeline, a source pipeline, to get this data exported from that system and imported into the website somehow. The challenge, then, is to go from Python 2, Plone 3, 5 or 5.1, Archetypes or Dexterity, old products and sometimes other systems, to Python 3, Plone 5.2 and Volto. For clients, the advantage of doing it this way is that they are spared part of the migration from Plone 5 to Plone 6. For Plone solution providers, it is a way to sell to clients: we upgrade and, as Timo said, we have a new front end, Volto, so it is easier to sell both, and it is the right time to do it. That is our advice, because selling Python 3 to a client is costly and the client will not understand the difficulty, so this is the right moment to try it. At kitconcept we have our own way to deal with it: we created three packages. The migration-plone5 package is the principal package; inside it there are the kitconcept migrator and the content creator. The content creator is used to create content in Plone 4.3: we use a JSON structure and create some data for testing. When we run the migrator, at the end we take the result and check it with a smoke test. Inside the kitconcept migrator package we have the custom pipelines we wrote, and inside migration-plone5 we have a commander, a utility, where we say: command, export the data, then import it into Plone 5.1, import it into Plone 5.2, and after that we can run the tests. In the end we have a CI Jenkins node, and the steps are: we build the commander, and there is a conversion tool in Node that we also build; then Plone 4.3, Plone 5.1 and Plone 5.2. Plone 5.1 and 5.2 are empty at this point; Plone 4.3 has some custom data created for test purposes. We export from Plone 4.3, import into Plone 5.1 and 5.2, and run the tests. That is how our Jenkins works. We also have a migration server; that is the one that now takes around four hours (when I took this screenshot it was two hours, almost three), and that Jenkins node runs when I push the button, not automatically on every commit to master. We are still working on it to cover everything we need, and that is how we test. At the end we have a website that we can show to the client. Some details of how we did things. To migrate Archetypes topics to Dexterity collections, we preferred to make the necessary changes inside the collective.jsonify package; we have a fork. What we basically do there is: if there is an Archetypes topic, we run the default Plone migration on the query to get the new query structure, and we prepare the exported data so it already looks like Dexterity, so during the import the data is ready to become a collection. Next topic: to migrate the rich text HTML to Volto blocks, we created a package in Node; we call that utility, pass it the HTML, it converts it to draft.js, and then we take that JSON and put it inside the Volto block. For the portlets, we wrote some code to migrate them, although in the end we do not use portlets any more. But one client has a download portlet where people link a PDF or something like that, and we need to keep that link to use elsewhere; the portlet itself is not used, we just take the link, convert it to a UID or something, and keep it for the new website. And post-migration:
it works something like this: take collective.cover, for instance. If a client had collective.cover, we mark that content type so it is not imported, and after the migration we have an empty document. We have a structure in our code where we say: at this path, we want to overwrite the content with this JSON. So we make the changes online in the website, in Volto, use the REST API to export the JSON data, and save it in the right structure, something like what jbot does, to be imported after the migration. I already said something about Volto; here is what we needed in our use case and what is in our code. We use collective.folderishtypes because Volto does not have the idea of default pages, and we have some code that deals with this so that the final URLs stay almost the same; you still need to handle default pages, sometimes fixing how they are shown. Rich text goes from HTML into draft.js. Then we cheated a bit: some content we did not import, so we create a default HTML page that just points to the old website, so the client can check it; we deliberately made those cover pages different, just to be able to review them with the client easily. URLs: inside the draft.js content we sometimes have images or links that use resolveuid. Today Volto does not deal with that, so right now it just points at the image without resolveuid, and we plan to fix that in Volto as well, probably with the same resolveuid idea but through the REST API and Volto. Folders: for simple folders we plan to show just a list of the items in the folder, so it becomes a simple document with a listing block (it is a listing block now, not a collection, sorry) showing what is inside the folder; a collection becomes a document with a listing block carrying the query of the collection. So those are the steps we did. Well, I have some time left, so I am going to show you how we work today. We have three Plones here: 4.3, 5.1 and 5.2. We run the buildout on Plone 4.3, where some content has already been created automatically, and we wipe the data in Plone 5.1 and 5.2. If we run the smoke tests now, we get errors: make test on Plone 5.1 fails because the content type is not there yet, it has not been migrated. After migrating, the tests pass. We plan to open-source this code for the community, but it will take some time to polish it first. So I run make import, just the import, and it fires up a Plone site and runs the import pipelines, creating the data that was exported from Plone 4.3. I forgot to show the export command: the export command just creates the data as JSON, with collective.jsonify, the default way of doing it. Okay, I already ran the import, and if I run the tests again they pass; it takes some time again. And we do the same for Plone 5.1 and 5.2. That is how it works; it is the environment I have been working in for the last year. Do you have questions? Thank you. Thank you so much. Questions? Thank you very much for your presentation. We now have four or five different ways, used by many organizations worldwide, to migrate Plone builds, and one of your modules here that will be very interesting for everybody is the rich text to Volto conversion.
Thank you very much for your presentation. We now have something like four or five different ways, used by many organizations worldwide, to migrate Plone, and one of your modules here that will be very interesting for everybody is the rich text to Volto conversion. Do you think it would be feasible to extract the things you have done in your migration stream, this conversion for example, as a separate module that other people can start to reuse in their own stream? The code is really simple; this is it, just 38 lines. I used the same structure that is inside the Volto code. I only extracted this part instead of importing and using it directly because I had some problem, but it is still a really simple piece. When I run it, I just pass in the HTML and get the draft.js code back. Super simple. But maybe this code should be part of the next upgrade steps that are planned, so when we migrate from Plone 5.2 to Plone 6 we would use something like this. Okay, I think I can show this to people. So, is there a Python equivalent to that? Because for the migration from Plone 5.2 to 6 we need to iterate over all the Dexterity schemata and put that into separate blocks, into a draft.js thingy, and those migrations usually run in Python. I tried to do it just with Python, but today it's not possible. It's difficult because the code we have in Python for draft.js only does the other direction, the reverse: draft.js to HTML, not HTML to draft.js. That's why we created this little tool; we fire it with subprocess and get the result back. Because it runs just once, I think it's not too much overhead. Okay. Another question? Another one? Let me put this slide up again, in case someone didn't take the picture; I can show the first part. Right? Okay. Another one? No, just saying: we have a sprint at the end of the week, right? So if somebody is interested in that, it could also be a sprint topic, to start working on upgrade steps and migrations. It's a bit early, but why not, if people are interested? Nice. Okay. Thank you. Thank you.
|
Plone evolved! Now it's time to talk about how to evolve our customers' websites together! After working last year on Plone database migrations, during this talk we will share the knowledge acquired with the community! What are the concepts involved? How do you get ready for Plone 6? At kitconcept we already have some use cases and our own way to deal with this!
|
10.5446/55206 (DOI)
|
So, yeah, I'm Serena and I'm very glad to be here with you today. I want to talk about provisioning a Plone server using Ansible and Molecule. I will start with a bit of an introduction to Ansible and to Molecule, and then show how to apply those theoretical concepts to a practical task like provisioning a Plone server. For those of you who don't know what Ansible is: it is a tool that I now use almost daily and it helps with system administration tasks. It allows you to write recipes that help automate repetitive tasks like server provisioning and software deployment. It has gained a lot of popularity in the last few years because it is very simple to install and very simple to use. It is an agentless tool: everything is done over SSH, and you don't have to install anything on your target hosts; I will refer to the hosts that have to be configured as targets. Its recipes are written in YAML, a language that is very dear to Python programmers because we all love indentation, and they are human readable. Its documentation is huge and the community is very active. In fact, there is a place called Ansible Galaxy that contains all the roles developed by the community, and these roles are also rated, so you find a place where you can download things that are, in theory, well written, if they have a lot of stars, a lot of downloads and so on. Being so easy to use, it encourages you to use it as often as possible, and so you become more disciplined about never logging into a server to do configuration without Ansible, never logging into a server manually, let's say. So why Ansible today at the PloneConf? Well, there is an Ansible role out there for provisioning a Plone instance. Plone in production is actually quite hard to install, because it has a lot of moving parts which I honestly don't know anything about; I had never installed Plone before this week. It has server and client components, things that control the network connections, and so on. As I said, I don't know this project, but I like to jump into things I don't know using Ansible when possible, because there are people out there who have written recipes for the community. So let's pretend I had to install Plone on some server: what should I do, and what would be my test strategy? Now a bit of theory. Ansible is a Python package, so it has to be managed with the Python tools: virtualenv and the Python package manager, pip. Ansible is also available in many system package managers like APT, YUM and so on, but those versions are very outdated, so please never use your operating system package manager to install Ansible; always use virtualenv and pip. Now let's create a virtualenv using a reasonable Python version, install the latest Ansible version available, and check that Ansible works. To check that Ansible works, we call an Ansible module with this syntax: -m stands for module. I tell Ansible which are my target hosts, localhost in this case, and I run the ansible command that is now available in this virtualenv against the inventory file. And success: I have pinged my localhost, and the answer is pong, as usual. So yes, installing Ansible is that simple; the sketch below repeats exactly those steps.
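Those installation and ping steps, driven from a small Python script for reproducibility; the virtualenv path and the inventory contents are just illustrative choices, not part of the talk.

```python
"""Sketch of the 'install Ansible the right way' steps: create a virtualenv,
pip-install Ansible into it, write a one-host inventory and ping localhost."""
import pathlib
import subprocess
import sys

VENV = pathlib.Path("ansible-venv")


def run(*command):
    print("$", " ".join(str(part) for part in command))
    subprocess.run([str(part) for part in command], check=True)


def main():
    # 1. Create a virtualenv with a reasonable Python version.
    run(sys.executable, "-m", "venv", VENV)
    pip = VENV / "bin" / "pip"
    ansible = VENV / "bin" / "ansible"
    # 2. Install a recent Ansible release with pip, never with apt/yum.
    run(pip, "install", "--upgrade", "pip", "ansible")
    # 3. Write a minimal inventory and call the ping module against it.
    pathlib.Path("inventory").write_text("localhost ansible_connection=local\n")
    run(ansible, "-i", "inventory", "-m", "ping", "localhost")


if __name__ == "__main__":
    main()
```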
Now let's take a look at more complex structures like roles and playbooks. To explain myself a little better, I wrote the command I just gave to Ansible as a playbook, a YAML recipe that does exactly the same thing we have just done. So I wrote in this YAML file the instruction to ping my instance, and the answer looks something like this, the typical Ansible output: we have pinged localhost once again, this time using the ansible-playbook command. The difference is that the plain ansible command calls one module at a time, while ansible-playbook runs a recipe that contains many calls to modules. Now let's do things the right way from the beginning. It is very tempting, once you have installed Ansible, to jump straight into writing one huge playbook, because that works; why bother with more complex structures if one big YAML file contains everything I want to do? Well, the effort is worth it, because roles are the way the community shares its code. People expect to find files in certain places, and the ansible-playbook command expects to find things in certain places when using roles. Roles are really just a directory structure that corresponds to include directives: you put things where the ansible-playbook command expects to find them. This helps not only with documenting and doing things the right way, but also with writing more complex recipes, because maintaining a bunch of small files is easier than maintaining one very large playbook. And it makes life easier for others, and for us as well. Roles can be created with the ansible-galaxy init command; once you install Ansible you actually get a lot of Ansible-related commands, and ansible, ansible-playbook and ansible-galaxy are the three we are seeing today. But instead of using ansible-galaxy, I want to use something else: Molecule. Molecule is a project designed to help with the development and testing of Ansible roles. Molecule was adopted by the "Ansible by Red Hat" project at the end of last year, so it has become a standard for producing Ansible roles of high quality. It is basically a wrapper around Ansible calls, so it supports every provider that Ansible supports: if Ansible supports Docker, Molecule supports Docker; if Ansible supports EC2, Molecule supports EC2, and so on. If Ansible can use it, Molecule can test it. So we will create a role using Molecule, which wraps the ansible-galaxy command for us and also does other very interesting things. For installing Molecule we do things the usual way, because it is again a Python package: in a proper virtualenv you install a certain version of Molecule, plus the driver dependencies, with pip. And now we create a role using the molecule init role command: the name of the role is given with the -r flag and the type of driver we want to use is identified here. I was in the test directory, so now I have an example directory filled with all the directories that Ansible roles have to have, plus other interesting things. Inside this role we find a molecule directory, which you never get when you use the plain ansible-galaxy command. This molecule directory contains what is called a test scenario. For those of you who are not familiar with the Ansible role scaffold, just remember that the important things must be in the tasks directory: that is where the journey of your role starts, your first YAML recipe.
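A sketch of the Molecule installation and scaffolding step just described. The flag syntax follows the Molecule 2.x series that was current at the time of the talk, and the extra docker package is an assumption about the driver dependencies needed.

```python
"""Sketch of scaffolding the example role with Molecule and the Docker driver.
Newer Molecule releases changed the init syntax slightly."""
import subprocess


def run(*command):
    print("$", " ".join(command))
    subprocess.run(command, check=True)


def main():
    # Molecule and the Docker driver dependencies go into the same virtualenv
    # that already holds Ansible.
    run("pip", "install", "molecule", "docker")
    # molecule init role wraps `ansible-galaxy init` and adds a molecule/
    # directory with a default, Docker-based test scenario to the scaffold.
    run("molecule", "init", "role", "-r", "example", "-d", "docker")


if __name__ == "__main__":
    main()
```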
Now, in Ansible everything starts from the tasks/main.yml file; you have to call this file exactly like that, and Molecule has created an empty one, of course. I fill it with a very simple command to run some code: I want to say hello to myself. Don't mind the changed_when syntax; it is there for idempotence purposes, and I don't know if I have to dive into that. Molecule has also created a playbook for us that lives in the default scenario, and this playbook calls the example role. In the example role there is a task, the main.yml file in the tasks directory. Everything works, so I can use the ansible-playbook command against the playbook created by Molecule, it runs my role, and it is very nice and says hello. So we have seen how to install Ansible the right way, Molecule the right way, and how to create a simple, extensible role following best practices: a role structure plus a molecule directory. Now what about those Molecule tests? The default scenario contains everything needed for running this role in a Docker container, because I decided to run it in Docker. What is needed for running tests in Docker? A Dockerfile that creates my container, a molecule.yml file, and the playbook.yml that calls the role we have just seen. The molecule file tells Molecule that we have a platform to run this test on, and that platform is a CentOS 7 image; this means the role will run in a container based on the CentOS image. The file is very simple and it describes the components Molecule uses to spin up the instance. You can actually have a lot of platforms, or instances, under the platforms list, so you can test parts of your role on some instances and other parts on others, which is very neat. The test matrix that Molecule provides is very long, and the molecule test command with --help prints out all the steps Molecule is going to run: linting phases, creation of instances, idempotence tests (run your playbook twice and check that nothing changed), converge, which is the phase where the tasks are actually executed, and cleanup, destroy and so on. Unfortunately, the linting phases are quite difficult to pass, so honestly, when I am in the development phase I disable them, and for you I included here how you disable the linting of the YAML code and of the Ansible code. The part of the test matrix that is most interesting to me is composed of these phases, and we can forget about the others: lint, for those who like linting; create the instances; converge, which runs the role; idempotence, which runs the role twice and checks for changes that shouldn't happen; and destroy the instances. The default scenario is the Docker instance, and the converge phase runs the echo hello in that instance. The idempotence check completed successfully because, if you remember, I set changed_when: false; it is a trick you use with the command module in Ansible, but just bear with me. The important thing is that the test was okay. And then you destroy your instance, so you clean up after yourself. Now, there is an Ansible playbook for provisioning, or deploying, Plone onto a server. I cloned it and followed the documentation; the instructions say to run make and download the other dependencies. And now I have a very large directory which I couldn't show you here.
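Before moving on to the Plone playbook, here is roughly what the two files of the example role described above can look like; a sketch only, not the exact content Molecule generates.

```python
"""Sketch of the example role's tasks/main.yml and molecule.yml, written out
from Python so the whole thing stays in one runnable snippet."""
import pathlib
import textwrap

TASKS_MAIN = textwrap.dedent("""\
    ---
    - name: Say hello to myself
      command: echo "hello"
      # The command module always reports "changed"; forcing changed_when to
      # false keeps the molecule idempotence step green.
      changed_when: false
    """)

MOLECULE_YML = textwrap.dedent("""\
    ---
    driver:
      name: docker
    platforms:
      # One or more instances; parts of a role can be tested on different ones.
      - name: instance
        image: centos:7
    provisioner:
      name: ansible
    """)


def main():
    role = pathlib.Path("example")
    (role / "tasks").mkdir(parents=True, exist_ok=True)
    (role / "molecule" / "default").mkdir(parents=True, exist_ok=True)
    (role / "tasks" / "main.yml").write_text(TASKS_MAIN)
    (role / "molecule" / "default" / "molecule.yml").write_text(MOLECULE_YML)
    print("run `molecule converge` while developing, `molecule test` for the full matrix")


if __name__ == "__main__":
    main()
```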
But just keep in mind that there are a lot of roles and a lot of playbooks in there, which are different flavors of your provisioning options: you want a large instance, you want a small instance, and so on. Personally I want a very small instance, because I only want to do things for testing purposes and fast, so I copy the sample very-small YAML file into the expected local-configure.yml; follow the instructions on the Plone website for this. Then, again following the README, I disabled all the components I don't need and configured the things that are required. So now I am ready to use Molecule. Well, I'm not initializing a role here, because I already have a role, actually a lot of roles in the roles directory. I only want to inject the molecule directory here, in order to be able to run my Plone roles in these Molecule scenarios. So instead of the init role command I use the init scenario command, which does exactly that: it injects a molecule directory into the Plone folder of roles. I'm using a different driver here. I tried Docker first, because it is always my first choice when I run tests; I like how fast spinning containers up and down is with Docker. But it didn't really work, because services bother Docker, and I didn't want to use strange Docker images or strange configurations, so I switched to the EC2 option. Now, what's inside the molecule directory? There is no Dockerfile, because I didn't use Docker, I used EC2; instead I have some playbooks that create the EC2 instances for me. The molecule.yml file looks a bit different, but not very much: you tell Molecule that its driver is ec2 instead of docker, you use a different value for the image keyword, and you also use another variable, the instance type, which applies to EC2 and doesn't apply to Docker. I wanted a very small instance, because I don't want to be charged much for my tests. And the create.yml file, which can be compared to the Dockerfile we had before (those files are for creating the instance), is a bit more complex and does a lot of things. It creates the whole EC2 instance environment: the networking part with the VPC, subnets, firewall rules and so on, and it creates an SSH key pair for talking to this instance. It expects, unfortunately, to find an environment variable called EC2_REGION; that is actually a bug in Molecule. It expects to find the AWS credentials somewhere, and it starts the AWS instance. What I showed is just a small part of what this create.yml file looks like. Now, I wanted to change the default playbook that I'm going to run with Molecule, because I don't want to include the Plone role directly; it doesn't have the tasks directory at level zero of the folder, let's say. I wanted to include the playbook that includes all the roles. So I use the include keyword instead of the role keyword, and I include the playbook.yml prepared following the Plone playbook's instructions. That playbook is a collection of around 13 roles, more or less, if I'm not wrong. And well, using Ansible, Molecule and EC2, I dare say this has become test-driven cloud deployment. That's a lot of words, but it is also true. I run molecule converge instead of molecule test, because I don't want this instance to be destroyed right after the Plone server is created; I want to be able to check whether I see a Plone website there. I wait through around 15 minutes of Ansible output, and ta-da!
I see the Plone administration interface, I think, there on the standard HTTP port of my EC2 instance. Maybe we can discuss later whether this is enough for testing the deployment of the Plone playbook; I honestly don't know, because I had never done it before, but it was good enough for someone like me who didn't know what to look for. Okay. Then remember to destroy this instance, because you only ran the molecule converge command, so there is a live EC2 instance somewhere in AWS that is costing you money. I run molecule destroy, and it destroys the instance: it remembers what the instance is called, it has the SSH key pair correctly cached somewhere, and so on. As I said, the molecule test command would do the full create, converge, idempotence (which is very interesting) and destroy workflow. And now, wrapping up, because I think I'm running out of time: everything is very easy when you know how to do it. Ansible is easy, Molecule is easy, and it allows you to have many pre-configured test scenarios, which is very neat. Think about that as a conclusion. I could have done the exact same thing without Molecule: spin up an EC2 instance, write the name of the instance down somewhere, and run the Plone playbook against that instance. But how much better is it that I have a directory of pre-configured scenarios, one for EC2, another for Kubernetes, another for Docker and so on, and I can think about testing a complex architecture in a pre-configured way? I don't have to bother writing Ansible code for spinning up and tearing down a complex network infrastructure and many machines. Doing things this way saves a lot of time and encourages you to do things the right way. Thank you for your attention.
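To recap the cycle used for the Plone playbook in one place, a small sketch of driving converge, a manual check, and destroy from Python; the pause for a manual browser check simply stands in for "I want to see a Plone website there", and molecule is assumed to be on the PATH.

```python
"""Sketch of the converge / manual check / destroy cycle described in the talk.
Run it from the directory that holds the molecule/ scenario."""
import subprocess


def run(*command):
    print("$", " ".join(command))
    subprocess.run(command, check=True)


def main():
    # Converge only: build the EC2 instance and run the Plone playbook,
    # but do not destroy it afterwards so the site can be inspected.
    run("molecule", "converge")
    input("Check the Plone site in the browser, then press Enter to clean up...")
    # Never forget this step, otherwise the instance keeps costing money.
    run("molecule", "destroy")


if __name__ == "__main__":
    main()
```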
|
Ansible's simplicity is one of the key aspects that determined its success as a provisioning tool. Nonetheless, its ease of use often leads to rapid development of code that is hard to maintain. Molecule, an official "Ansible by Red Hat" tool, encourages an approach that results in consistently developed roles that are well written, easily understood and maintained. In this talk we will look at the development of an Ansible role using Molecule.
|
10.5446/55208 (DOI)
|
Hello everyone, how are we doing? Okay, let's get started. Wait, how do I get started? How do I switch these slides? Oh, look at this. Okay, first of all, a shout-out to RedTurtle: thank you for the huge tray of tiramisu. It's awesome, and we will try our best to enjoy every bite of it while thinking about RedTurtle and Plone. We will, we will. So welcome, everyone. First of all, thank you for attending my talk, thank you to the community for accepting this talk proposal, and also thank you to the community and to RedTurtle for making it possible for me to be here, because that was not that easy. So let's move on. First, a disclaimer: this talk is only about the core docs, Volto and Guillotina. There's nothing about the training documentation or add-on documentation, because those are completely different topics and therefore not part of this talk. Also, everything we tell you in the name of the docs team is not written in stone yet. It is basically what we discussed and what we think is the way to go, and a lot of this, the techniques, is already used in production, even by some huge companies like certain banks or insurance companies, and it works. But then again, this is what we came up with; it doesn't mean it has to be exactly like this. First, let's get to know each other. Who is a developer? Wow. Who is a technical writer? Who is a developer who is also forced to write documentation? And who of you loves writing documentation? That's fewer. And who is already familiar with docs as code and uses that approach for intensive testing of the quality of their documentation? Wow, some already, that's cool. So here is the agenda for today. Can I make that bigger? Anyway, I'll look over there. First, we take a short look at the current status of our documentation. Second, we will briefly mention Google Season of Docs. In the next part we will talk about some mistakes and why it's time to lower the barrier for our documentation, because we have a few issues there. Then we move on to the really cool part, the future, what I call Plone X at the moment, because according to Érico I'm not allowed to say Plone 6 anymore. And that also covers the last part, part five, which is basically the whole docs-as-code approach, why we should use it and why it's helpful. So let's start with the current status of our documentation and how we are doing. The next quote on the slide, just to make it clear, is not from me, it's from another team member: our docs are far from perfect, but considering that there are only a handful of people working constantly on the documentation, and all of that is basically done as volunteers in their free time, with no company behind it and nobody paying for it, we are not doing that bad. And I think that is a really important thing to say, because it's like this: people in the audience who have known me a bit longer also know that I love complaining about lots of stuff, including documentation and how bad our documentation is. That's completely true and I still have that opinion, but then again I have also grown a bit, and I mean: we have no company behind it, we do it in our free time, and for that we are not doing that bad.
So that is what we currently have, as you know it and either love it or hate it, or have a love-hate relationship with it. And there are some things that are not really good, or where we have room for improvement. When we came up with the structure and all that, it was partly because we didn't know any better, because when we started the documentation we had no clue about documentation at all; all of us were basically developers. We are facing some issues with the setup: the structure is not really clear and is confusing, we have a weird mix of user documentation and developer documentation, and the documentation in general is written far too much for developers, which is not surprising because 90% of it is written by developers. We never thought about a content strategy or a messaging strategy; the only thing we did in the beginning was think about a more or less okay structure for the documentation. That was years ago, and since then Plone changed and how we use it changed. So it's not perfect. Personally I'd say it's bad, but it really isn't bad at all for a volunteer project. Let's talk shortly about the main pain points, besides the ones I already mentioned. We have no content architecture, strategy or message. It's confusing and too developer focused. We have lots of content that is redundant, outdated, unavailable or simply doesn't match up; for example, the Plone 5 docs still contain lots of references from Plone 3 that never got updated. As for editorial and markup style and quality: it's inconsistent, completely confusing, way too wordy, and there are lots of other things that could be improved. And then there is the whole build and deploy process: it's super unstable, slow, really complicated at the moment, and also not well documented. The people who do it, usually Paul or me, can manage, but even I get annoyed, because I never bothered to write documentation about it, or at least not proper documentation. So we sort of have to fix that. For all of that I'm really, really sorry, because in some of those parts I was at least involved in writing it or in making decisions about it, like the current state of Papyrus, our deployment tool, and other things. But it's also good, because now it's time to move on and change. You maybe noticed that in the last year there was a lull in documentation work and less frequent releases of new docs. That's basically because we more or less split the documentation team. One part handles the current documentation, for Plone 5.x: maintaining the build infrastructure, maintaining and improving the docs. That is getting slow, because there we also depend, as always, on other people and developers; on the current core docs, Paul is basically doing the maintaining and building. And I switched over to Plone X, or Plone 6.
That means the whole preparation: thinking about structure and building the new setup, which includes editorial style guides, markup style guides, working together with the Volto team and even with the Guillotina people, thinking about a new content architecture, what kind of message we want to send, user stories, and a new build and deployment infrastructure to make it flawless and faster. And then, as you see at the bottom, there is also Google Season of Docs: we applied and got accepted into Google Season of Docs, so we are working together with our technical writer Chris, and with Kim, Victor and lots of other Plone people, on improving the experience for developers getting started with the Volto docs. That also included a lot of talking about content strategy, message strategy, architecture, which kinds of docs make sense, and so on. Google Season of Docs is still going on; we are now about halfway through the timeline, so soon there will be the first commits with the new blueprint for the structure of the Volto developer docs. And this is really important: with the future docs we should lower the barrier for contributing to documentation, because nothing is more demotivating and annoying than this: you are browsing the docs, you spot a typo, a grammar issue, or some plainly wrong or outdated documentation, and you only want to add two or three lines. You want to create a pull request that just says "I changed line five because it was outdated, here is the corrected version", and then you want to see it end up somewhere online in the documentation. Currently that is close to impossible; the whole thing is complicated and really annoying, because for the average user there is no easy way to rebuild the docs and get the result published quickly. That makes contributing less fun. The really important takeaway here is: we should make it as easy and as effortless as possible to create pull requests with improvements to the documentation, and to see them online as fast as possible, because that creates good feelings, makes people happy, and motivates them to contribute more. And this is also really important, and I'm coming to that: always remember that you are usually not writing documentation only for yourself; there is also an audience. That means: if you write documentation, never assume that the people reading it have your knowledge about the things you are writing about. Maybe you have been working on something for ten years and the reader is looking at your documentation for the very first time. Also, always try to provide background information. Take WSGI, for example: give some information about WSGI and maybe a link to WSGI or to the WSGI documentation for Plone; don't just write "use WSGI", because people will stop reading and give up, and it's demotivating and also really unfriendly and hostile. Always be clear: try to be as short, straight to the point, friendly and clear as possible about what you are writing. We don't need sentences of twenty-something words; keep it short and simple, also because of translation and accessibility. And then: be consistent.
We still have a huge problem in the Plone documentation: we use different terms for the same functionality, and this is super confusing for people. It's the same if, talking more in developer language, we have the standard default Docker container for Plone and the Volto Docker container for Plone done by kitconcept, but they are not identical: they are built differently, so people get confused, because if you look at the source, it's just different. So be consistent. And also, we as a community, all of us writing documentation, should try to be more friendly and welcoming. We still have this habit of writing that everything is "just easy", "just do this", and things like that. Not all of it is easy. Even if you're a developer and you try to write proper documentation, you will find out that writing good documentation is not that easy either. And now, on to the fun part, the future. That was just ranting about the old setup; now let's have some fun with the new stuff. Are you asleep already, or still awake and following me? Okay, cool. In the title and description of the talk I mentioned "stunning", so maybe I should first explain what I mean by that. Stunning, for me, is about all aspects of API, or in this case API-related, documentation. First of all, it should be appealing to the eye: if I look at it, I should be motivated to read it, because it looks nice. I don't want to look at an interface from the 80s, or one that is completely confusing because there is way too much information. I also would like some pictures and graphs, because they make things easier to understand. I want a clear content structure, so it's completely clear where I can find things, not mixing developer documentation, user documentation and other stuff. I want clarity, like I mentioned before: be clear about your wording, be clear about your grammar. Also really important is good SEO, so think about how you write and what you write. SEO is not only good because you rank higher in Google and people find you more easily; it also helps you get a clearer picture of your documentation overall. And the other thing: be accessible. We still have some issues with accessibility, and we should care about that: our themes have to work with screen readers, and they have to work for colorblind people. Another famous thing: people tend to have lovely sentences in their documentation like "click here for more information". I'm really sorry, but this is just wrong. Stop using "click here for more information"; use a description that lets people understand where they will end up when they click that link, because that is how screen readers work, for example. Also use that as the link description in the source code of your HTML link. I mean, come on, it's almost 2020, so we should do that by now. So one part of this is moving to docs as code, and this is a lot of text now. What docs as code basically means is that you use the same tooling for writing documentation that you also use for working on your code.
In the same manner that you work on your code, for example using plugins and linters that report style violations or errors, you should do the same for documentation: check it, with linters, against your editorial style guides, your Markdown or RST style guides, and other style guides. And then, depending on the documentation, you should add more tests for the consistency of your quality. That means, for example, if you decide to write your docs in US English, check that you are really using US English and not mixing it with UK English, check that your links are working, and things like that. And why should we use it? Because consistent documentation helps not only the users, who are happy, but also the developers, who are happy because all of us find things faster, which could also lead to faster releases. Also really important: consistency and good documentation create a kind of trust with the audience, with the users, with the people using it, because people recognize things. Think about what we did with the README: when the docs team introduced the default example README a couple of years ago, there was some resistance, and people complained, "I still want to use my Hitchhiker's Guide to the Galaxy README" and things like that. Those are nice for your private projects, or maybe even for collective packages, but think about the bigger picture. If you are, for example, a CTO and you browse the plone organization on GitHub and you notice that the READMEs are always written in the same style, and the documentation always uses the same wording and the same terms, it actually creates a warm feeling: I start to trust this organization and these people more, because I can see they put effort into it and try to make it good. And take that thought further: this is also really good for you as a Plone provider or company, because people look up which companies are involved in contributing to and selling Plone, and then they also see that you take care of documentation, and that is good for marketing. Even if you don't like it, it is marketing. And at the bottom, for example, there is a picture of docs as code in action: a linter testing that I don't use "here" as a link description, and a second test checking that I don't start paragraphs with the word "but". This is just a small example; there are endless tests we can do. And to make it easier, because it has been quite technical and wordy so far, here is the scheme as a graph. The idea is: you start writing in your text editor and review it yourself, then you commit to your version control system, where some checks already run, then you may create a pull request, then CI/CD runs the quality checks, and if that is green and everything gets merged, the deploy is triggered and the new version ends up somewhere online.
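As an illustration of the two checks named above (no "here" as link text, no paragraph starting with "But"), here is a minimal stand-alone sketch. A real setup would rather wire existing linters into CI; the docs/ folder name is just an assumption.

```python
"""Minimal sketch of two prose checks as a CI-friendly script: flag 'here'
style link text and paragraphs that start with 'But' in all Markdown files."""
import pathlib
import re
import sys

HERE_LINK = re.compile(r"\[(?:click\s+)?here\]\(", re.IGNORECASE)
BUT_START = re.compile(r"^\s*But\b")


def lint(path):
    problems = []
    previous_blank = True  # a paragraph starts after a blank line
    for number, line in enumerate(path.read_text().splitlines(), start=1):
        if HERE_LINK.search(line):
            problems.append(f"{path}:{number}: do not use 'here' as link text")
        if previous_blank and BUT_START.match(line):
            problems.append(f"{path}:{number}: do not start a paragraph with 'But'")
        previous_blank = not line.strip()
    return problems


def main():
    problems = []
    for path in pathlib.Path("docs").rglob("*.md"):
        problems.extend(lint(path))
    for problem in problems:
        print(problem)
    return 1 if problems else 0  # a non-zero exit code fails the CI job


if __name__ == "__main__":
    sys.exit(main())
```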
And it goes further: we are not only building cool automatic deployment for the websites, we are also working on tools to improve the whole experience of working on documentation, for example an add-on we wrote for VS Code. Basically, it is an extension pack for VS Code which installs, I think, ten different VS Code extensions that are pre-configured for writing documentation according to best practices. And if you are already using VS Code and don't want extra extensions installed automatically, you get a link to the detailed documentation and you can decide and do it manually, copy and paste. This is really important, because this is where docs as code starts, at least for me: it is not just about the shiny bit at the end with CI/CD and deployment, it starts locally, in your editor, with linting and tooling in your local editor and on your laptop. And this is how it looks, for example; this example is taken from real life. It is the API documentation of a well-known Dutch bank, and the technical writers of that bank use exactly this VS Code setup. On the left you see the Markdown source they are writing, on the right the rendered HTML output, and at the bottom, just like you know it from your Python linters or other linters, you get the full output of what is wrong according to the defined checks and what you should improve. And these are basically the same tests as BB8. BB8 is my personal helper droid: when I do a git commit, it automatically runs all these tests in containers on my laptop. It checks the Markdown, then accessibility, then my OpenAPI spec, then broken links, then my other style guides, and then it runs Lighthouse for accessibility tooling, which takes some time to finish. If everything is green, BB8 automatically stages my commit with the commit message, and then I still have to push it to my repository. Now let's jump a bit more into the details of what will change, because everything will change, all of it. The biggest change: the core docs will move from reStructuredText to Markdown. Who loves it, who hates it? There are a couple of reasons why we made this decision, and we took a long time to think about it, with lots of pros and cons. Sphinx and reStructuredText are not bad; they work, they are nice. But for the core docs, and I'm really talking about the core docs, Volto and Guillotina, not about add-ons or other things, we use less Python than we did years ago. We are moving to different things: Volto, React, other JavaScript things, API things, so we should adjust our documentation tooling accordingly. We are also doing it because we want to attract more people. It doesn't make sense for us, as a small and sometimes even shrinking community, to force React developers, for example, to write reStructuredText. They know Markdown, they are used to Markdown, it's way easier for them to write Markdown, and the same basically goes for OpenAPI specifications, Swagger and all the other things. So we should adapt and be more friendly there. Also, 99% of the features built into Sphinx we never used. They are awesome and they are nice, but we don't use them, so why make our lives complicated and stick with that?
We will use CommonMark as the spec for our Markdown. Lots of people always said that the big advantage of reStructuredText compared to Markdown, besides being able to do more things with RST, is that there is an official spec for RST. That was kind of true, but not really anymore, because CommonMark is becoming the official spec for Markdown. The CommonMark board is quite active, and the specs they came up with are really, really nice. For us that matters, and they are more than enough: they cover all our use cases, so that's perfect. The other thing: I mentioned that we stop using Sphinx; we will move to Gatsby for building and deploying the documentation, and there are several reasons for that decision. First of all, it is what Volto uses and what lots of other React projects use. We currently suffer quite hard from the fact that we cannot do proper Sphinx changes and builds, because there are not enough people with the time or energy to do it. With Gatsby we can get more people to jump in, and it's a technology they know and love. There are also lots of add-ons and libraries out there already, so it's easier to get changes in and we are more flexible. And, really important, more people are able to maintain it: the bottleneck is no longer Paul or me or two other people who know Sphinx; it becomes the responsibility of the whole community. We also get a new build CLI. Some of you may know that at the moment we use Papyrus, which we set up a couple of years ago and which then turned into a Frankenstein setup with lots of different Python and bash scripts and weird configuration files. It's complicated; let's leave it at that. And this is where it gets interesting, because no decision has been made yet. I have tried out, and am still playing with, two options: one is JavaScript based, where I use oclif, which comes from Salesforce and Heroku, and the other one is Go based. There are pros and cons to both. If you go with the JavaScript one, there are more people from the Volto and JavaScript side who can help, so it's easier to contribute because more people know JavaScript, or Node in this case; the less cool thing is that there are fewer native integrations with, for example, the container APIs for Docker, Kubernetes, Firecracker and things like that. Personally I would go with Go, because I get all these native API integrations and for me it's easier, but then again it should not only be about me and what I like or prefer, sadly. And this is also why I'm basically asking the community to maybe make the decision. If people say "we will help, we know JavaScript and we're willing to help", that's fine, I'm completely cool with that and then we go for it. But if, as sadly happens quite often, people only complain and don't jump in and help, then I will just go with Go and you will have to live with it. I'm really sorry. And the reason we did not choose Python is quite basic: Python is nice, but to this day I cannot get native binaries out of Python; to run a Python program you still need Python installed on your machine. And the idea is to make it as easy and effortless as possible for other people.
You shouldn't have to have Python, virtualenv or other things installed on your machine; just download the binary of the CLI and you can run it, deploy, and do whatever you like with it, or whatever it can do for you. The other thing: we will start hosting on Netlify. That's nice because it's fast, we get a CDN automatically, and we have one less server to maintain ourselves. And the really, really cool feature is that we get previews of branches. That means if you create a branch of the documentation, write your stuff, and push the branch to GitHub, you automatically get a URL in GitHub pointing to Netlify, where you can see the compiled HTML of the documentation of your own branch. With the new CLI and the editor setup, VS Code in this case, you can still build everything locally as well. But this also makes it easier if you are not using VS Code; maybe you use Nano or Joe or some other weird editor that isn't configured yet. It doesn't matter: write in it, push the branch, go to Netlify, and preview your branch there. If the branch gets merged, the preview branch is deleted automatically, which is also nice. And it makes it easier to ask other people to review it, so it's less work and faster. This is how it looks in GitHub: it says the preview deploy is ready, you click on it, and you end up on your branch. We will also move to GitHub Actions for continuous integration. The nice thing is that all the checks we use are container based, so you can also use them with your Jenkins, Drone CI, Buildkite, AWS pipelines, or whatever continuous integration you prefer. But for the docs themselves we use GitHub Actions, because they are fast and they work for us. This, for example, is the debug output of a GitHub Action running a link check. At the bottom you can see it is red: it fails because I wrote "Plone" wrong, with two e's. And to be clear about what we also do with CI/CD: it will not only be a link check. We currently run something like 20 different checks for Markdown quality and the quality of the content. We also use it to run Lighthouse checks, at the moment once a week, on our site. There is no decision yet on how we configure that, for example whether the docs team gets a mail with a link to download the latest results or something else, but it is really nice because it gives us more insight into the performance of the site, accessibility, SEO and so on. If we do that over a long time, we can see whether we are improving or declining, which gives a much better overview. Also, one reason there were fewer contributions to the documentation lately is that we are writing a completely new Plone documentation style guide. The idea is one guide to rule them all: a complete Markdown style guide plus an editorial style guide about wording, terms, length of sentences and lines, why you shouldn't use certain words, and so on. It is almost finished; I think we are now at 98 or 99%, with two or three checks for the style guide itself still missing, but it's good to go. What's also nice: this style guide was reviewed by technical writers from other companies. For example, we got a review from Sarah Maddox, from the tech writing team at Google, and from another person from the Microsoft Azure docs team. And they loved it.
I mean, it's still not perfect, of course, but the feedback was really, really positive, and they even took some parts of it over into their own style guide; that was Kubernetes, to be clear. Another thing we will do is content splitting. At the moment we just have docs.plone.org, with developer documentation and user documentation mixed together, and like I already mentioned, that is just confusing and unclear from a content-structure point of view. So we will split it: docs.plone.org will be the end-user documentation for Volto and for our current front end. If you want to create a site: click here, create a site, and so on, how you do the configuration. All the developer documentation, meaning REST API, developing add-ons, theming, Volto, the back end, Guillotina, will go to developer.plone.org. This is nice because it's a common setup and standard for lots of other projects, open source and commercial; people know it and know what to expect. And this is also why we decided on it: it makes it easier and faster to add changes and deploy things. Another fun picture at the bottom is the same link check, but this time passing for a Plone document, and running on a Drone CI. Shortly about content structure: these are only mockups that I created. I'm really bad at creating mockups, and it doesn't mean it will look like this in the end; it's just to give an idea. Basically, if you go to the developer documentation you clearly get the topics: I want Volto, REST API, Guillotina, maybe developing Plone core, or whatever we add there. Then you click, for example, on Volto and you get a really clear page, without too much stuff around it and without too many colors. I mean, we should add some color, but make it clear that the content is what matters. It's not about colors and fancy pictures all the time, or GIFs or whatever; the content is the main message we want to deliver. We can still have pretty things in the docs and lots of glitter, but we have to find a way to do that properly. And because of that, we also get a flatter structure, which is nice because we get shorter URLs. For example, docs.plone.org/external/plone.app.dexterity/docs/advanced/validators, which is really a mouthful, changes to something like developer.plone.org/dexterity/validators. That is easier for people to remember, it's better for SEO, and it makes me a lot happier. We will also start running, as you can see at the bottom, a standard check using Spectral against my OpenAPI spec, and we will start using that for plone.restapi and for Guillotina as well, though not as part of the docs team. And in case people don't know me: that's me, speaking of fancy job descriptions. My official job title at the moment is content architect slash DocOps engineer, and what I basically do is work on API and API-related documentation for high-availability (I hate this word) API gateways. Think about Apigee or MuleSoft, where you're talking to banks or insurance companies and you have millions of requests per second. And that was all. So, who is completely scared? Who hates it? Who doesn't agree with everything? Scared is okay. Yes, we have time for questions.
As ever, I will ask you to indeed ask questions and not give a monologue; if you want to hold a monologue, there are lightning talks. And wait for the microphone, because it needs to be on tape, or however you call that in the digital age. Hi, thanks. I know you just said that images and glitter are not everything, but in good documentation we do tend to add screenshots and pictures. Oh, sure, sorry, I didn't mean it like that, I know. But how easy is it to add those in the new documentation setup? Way easier than currently, because, I forgot the slide about that, we will use Puppeteer for it. And when writing things for Puppeteer you can choose whether to do it from the command line or via an editor, for example a web-based editor, if I'm right. Yes, we're using Puppeteer, which is well maintained, but there's also a very nice editor that does clickety-click: you basically click your test together and it spits that out as a command line file, so even I can create a test just by clicking. Then I download it as an XML file and it repeats itself for the automated screenshots. These things also run as a GitHub Action, once a week at the moment, in Chrome, and then we embed the pictures in the right places. The basic idea is still the same as what we do with the robot tests at the moment; it's just easier, because you can choose to use an editor or, like me, do it from the command line, and it's more stable and faster than the robot tests, at least for this case. Sven, one question. You're switching from RST to CommonMark, from Sphinx to other systems, and do you have something like a preview for us of what it looks like to write the normal documentation: how to link between pages, all the stuff we have with Sphinx at the moment where you can include code, highlight the parts you need, and everything? Well, this is partly already covered in the style guide, and the rest we will publish when we really go online with it. The idea is that it basically works like normal Markdown, which means, as you write, there's a bit less functionality than what you get with Sphinx out of the box. But after comparing all the pros and cons we decided it's worth it, because adding links in Markdown is usually easy and fast, at least if you use a pre-configured editor, and even by hand, and we can still reference documents and links and such. Basically, the only thing that changes for you is the syntax, because you change from RST to Markdown, but we keep the same functionality. One of the problems with what we currently have in RST is that our documentation comes from a lot of sources, which quite often means you get hard errors because the same link names are used in several documents; that's one of the side effects of an organically grown documentation, and it's a case where the RST functionality is actually more of a hindrance than a help at the moment. What about doctests? That would not be possible anymore in the documentation. We were never in favor of that. I did it anyway. No. The doctests for plone.api? We know about the challenge with plone.api, and we have that problem already now, because when building the docs the main pain point is still plone.api, and we still have a date with Nejc to talk about how we could improve that.
Because, again, from a developer point of view I completely understand it, but from the general point of view of good quality documentation it is just bad, and that's not to blame Nejc or anyone else who was ever involved in doing it. It was awesome, and at the time we knew no better, but now it's time to modernize, improve, and go with the times. It's also more or less the opinion of the technical writer community that doctests are neither: they're not docs and they're not tests, and the two shouldn't be mixed. One of the main issues with getting contributors to write good documentation is when they stumble over syntax and are unable to build their own local version of the documentation before committing and pushing it to a repo. Will there be, or is there now, tooling available so people can build the docs locally? For the future setup there really is; it's called the Doza CLI, and you can even hook it up, for example: I hooked the preview up to my VS Code, so I have a VS Code shortcut and then it builds. For the current version there is also a possibility, but it never got published: we tried to publish it, then we got some unfriendly comments, and so I decided not to publish it. But it exists, it works, it's even on Docker Hub and regularly updated. I know this is a pain point, and it's one of the reasons we try to make it easier and move to a new setup. And this is the nice thing, and I know it is really hard to explain, I'm sorry to say, more often to developers than to tech writers or to people who only want to contribute a typo fix: even when you talk about docs as code, this is exactly why you get, for example, Markdown previews in the editor. If you say, for example, I want the Markdown preview with the CSS of the documentation site, we can do that too. The main difference is only that the Markdown preview shows just your own page; you don't get, for example, the whole sidebar on the right or the menu on the left. But the question is also: do you really need that if you're just editing one file? Why do you need the other files? I can understand that you want a general impression of the file you're editing; we can do that, no problem. And if you do not want to run Doza locally on your setup, then push your branch and wait ten seconds until it's online on Netlify. Currently there is, well, a half-finished but not really published Docker way, and there is the old way, but that depends on having a functioning Python environment and quite a few other dependencies, which basically means it does not work on Windows, and we cannot get it to work on Windows, probably because we're stupid and I only know Linux. I'm sure there's a way, but we can't. It would be great if developers, or not even developers, but writers, volunteers, and other people who have a Windows machine could also contribute. That needs to be done; we cannot assume that everybody who wants to contribute to documentation has a developer setup. That's insane. Exactly. And there's one really, really important point: if people want to contribute to documentation, they should be able to do it in an easy way. I don't want to force them to install Python versions and mess with buildout, getting frustrated because something is not working, when literally the only thing they want to do is change one letter.
But being busy for four hours trying to figure out a setup only to change one letter is so demotivating that we will lose those people; they move away. That is also the reason we decided to stop all that. It's fine for developers, but not for non-developers or less technical people, so make it easy: people want to commit to documentation, not to debug buildout errors or something. Quick question: is Doza going to be similar to, for instance, what MkDocs does when you run its server in a Docker container? Basically it's more than that; that is just one function. You get a CLI where you can say: make docs, make local builds, run tests, run certain tests. There will be a lot of functions, not from the beginning, but the idea is also that we can use Doza for what Papyrus does now, fetching the different repositories. You say: this is my branch, this is the repository, this is the part of the docs where it should end up, and then it can fetch them too. So there will be lots of functionality, and step by step we will add more. There's room for one last quick question, because otherwise you'll miss your coffee, and you're going to need it. Make sure I'm not taking this away from someone else. This is going to start out sounding like a statement, and I promise I'll end with a question. I've been to several conferences with people from a lot of other open source communities, and more than a few times they've brought up how amazing our documentation is. And I know it will never be good enough for you. But not just the quality of the documentation; also the way we do the automated screenshots, the whole infrastructure of the thing. So thank you, first of all. And the question is: are you aware of how awesome you are? Oh, no, thank you. I didn't know. Thank you. Oh my God, I'm blushing.
|
Building, maintaining and continually improving documentation by doing Docs As Code the DocOps way. In this talk we will share our journey from our current (Plone 5) documentation to modern and astonishing docs for Plone 6 and related (JavaScript) friends. We'll talk about the reasons, how and why we made certain decisions. Besides that we'll discover how Googles Summer of Docs helped us, and we'll see where we are with the quality and setup of our documentation compared to other open source communities.
|
10.5446/55209 (DOI)
|
Everything in life starts with a question. We gain understanding of the world we live in by asking questions. If you look at young kids and how they discover the world, you will realize that curiosity is a vital part of the human nature. It's actually what defines us as human beings. I have a three-year-old son that asks questions over and over again. I think if you're kids, you know that. I made the mistake to actually, when he asked me questions about something, one of my computers, I took him and we took the computer apart. Now he wants to take apart everything. I'm not sure, but maybe he's the only three-year-old and his favorite website is like, I Fix It. The difference between a kid and a scientist is that even though they share the same curiosity and they're asking questions, the difference is that the scientist answers questions in a different way than maybe a parent or a kid. A scientist needs to provide evidence and make his or her results reproducible for others. Others can revisit that and evaluate that. It's a bit like open source. Show me your source code and I will check it. What you're saying is valid. It's easy to get on the stage and tell people whatever they want to hear. Everybody at some point checks the source code and the system really works. I could fake everything that I'm doing here, but you can go and check it. This is what sets scientists apart from kids or maybe other people. How would you reflect that idea that I just outlined in a website? I will give you a little bit of context why I started to talk about this. The Humboldt University in Berlin, one of the big universities in Germany, asked us to help them with a new website that should run outside of their main cluster that they run currently with Plan 4. Their idea was to have a website to present themselves in the German Excellence Initiative, which is an initiative that was started by the German government 30 years ago. It involves a longer selection process to find the best universities in the country. They planned to spend 2.7 billion euros in total and give them to excellent universities that aim to promote cutting edge research in Germany. This is what Wikipedia says. Last month, eight universities in Germany after this long selection process were selected. You can imagine that this was quite a big thing. We sat at our office and were watching the live stream and them announcing that. There were a few surprises about the universities, but now this is settled for the next year. The Humboldt University was one of those eight universities that won that initiative. Now that you have context, let's come back to my original idea, questions and exploration. The idea of this website was to show the scientific process that researchers do in universities and make that idea reflect on the website. The first step in the scientific process is to ask questions and then explore possible solutions in either way. This is the website that loads like this and starts from zero or more or less, except a bit of branding. Then it starts sorry for all the German in there, but I will translate it. There is this typewriter animation and it says everything begins with a question. This is where it starts, if you are like a scientist or maybe a kid. Then when that animation ended, this question cloud appears. It's different questions that you can ask about the excellence initiative. What does excellence mean? Do we have to control algorithms and those kind of things that people ask? You can start to explore that site. 
You can hover over it. You will see that there are pictures, there are videos. You can scroll down. At the end, you see that you can actually start to enter your own question. There's an input field and you can enter your own question. The input field says: please ask your own question. So you can type in a question and then send it. It will ask you beforehand — do you really want to send that? — and explains a bit about GDPR issues and everything. Then you can actually send it; it goes into a review process, and they will check those questions and see if they can process them and put them on the website. It's meant to be a bit of an interactive element on the website. We have to see how it goes, how many people actually ask questions, and if they can keep up with the pace and answer those questions. So we have all the questions there. Some of them the university came up with. We are waiting — we got a few questions from users and that will continue. But that's only the first part. Everything starts with a question; it does not end with it. We also have to answer the question. I won't go into detail about scientific processes and such — it's a lot more complex than just questions and answers. If you have to put something into a design, you have to simplify things. So you have questions and answers. You have a hypothesis, of course, and then you go through a review process. But in the end, that's what differentiates a scientist from a three-year-old kid: you have a specific process and you have requirements. So let's look at what happens if you actually click on one of those questions. You click on one of those questions and you get the answers. The question is actually there, and on the left side it says: this is the answer. Then you have different forms in which you can present those answers. You have the text form, the image form, and a few other forms like audio and video — but I will come to that in a minute. And then on the right side, you see the sidebar. In the sidebar, you can explore things further. You can check the images, you can zoom them in. And then — if you have ever read a scientific paper, you have to cite everything. I guess not only in Germany there were big discussions about PhD theses where people did not cite properly. So no matter what you study at a university, you need to cite properly. That's important. So the idea was that you have this sidebar and you actually start to cite things. You add references, which are links actually, and then — that's too fast for me — so you add links, you add references, you cite something, you quote somebody, and you can link. All the links in that design are actually not in the page itself, but on the right side, to reflect that scientific process. And the part where I was too slow at the end: you have the ability to tag content, and then it will show this question cloud at the end. And in the question cloud you can also enter your question again. It's that same element, and since it's React, it was super simple: we put it on the front page, we put it there — same thing. Super easy. So since this is about content management systems and about Volto, let me show you the editing process for this core content type. It was a standard content type that we created. Answers and questions are a one-to-one relation, so we just created one content type. So let me show you how the editing looks. We are logged in in Volto 3, as you can see.
And you have the left toolbar and you can click on edit, and then you see the edit view. It's pretty much the same as the view. You saw that small hiccup in the middle — that's already gone in the new version. You have Draft.js in there, so you can do bold and italic, you can add links. You can add http, google.com, whatever, or you can just type in two characters and then choose from a list — so both internal and external links. Then you can go further, and of course you have the standard functionality. We stripped that down to the things that the client asked for. So you can choose an image, then you can pick it from the image chooser, and you can choose image left or image right if you want. And we have the second element, which is the sidebar. That's a bit more complex, because you have the editing part while the other element actually shows up in the sidebar. So we created a block that in edit mode shows up like a regular image, but moves to the right when you actually save. You will see that when we scroll down a bit. It was kind of a compromise, because in every project you have a budget, right? We would have been able to do that directly in the sidebar, but it's quite complex. So this element now shows up there, and the client basically told us they don't care that much. They're used to all kinds of systems that are really crappy when it comes to user interface, so they liked this a lot already by default, and we didn't have to super-polish it. Since this is a standard content type, you also have the metadata fields, which moved to the right in Volto 4. So you have the default metadata, you can upload videos and audio — which I come to later — you have the standard settings, you have the categorization that is used for the question cloud, you have the publishing date. And we have kitconcept.seo there as an add-on product, so you can override the very basic SEO fields. So that's the basic editing of the main content type. You already saw in the editing that we offer rich media, and before as well: you can switch between different forms of showing content. One of the things that the client wanted to have is an image gallery. That's pretty much standard when you look at news or newspaper websites: you have an article, and in this article you have a few images, and then you want to switch to the image gallery and see all the images together. I'm not sure if international news sites do that as well, but in Germany basically all the sites do that, and they wanted to have something like that. And those are the different kinds of presentation modes: you have answers that are text, and you have answers where you can switch to the image gallery or to audio and video. This is the image gallery; you can switch back to audio, video or the image gallery. You can share the information, and at the bottom there's lots of legal information, like who shot the photo and all those kinds of things. Then you see here that we have a Netflix-like animation: if you don't move, it fades out, and if you move again, it fades in again. And then you can just go right and left and then back to the text view. So this is something that's completely configurable by the editors.
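For context on what the editor shown above actually saves: a Volto page stores its blocks as JSON on the content object — roughly a mapping of block ids to block data, plus a layout list with the ordering. The sketch below only illustrates that shape; the concrete field values are invented for illustration, and the rich text block is simplified to a plain string (in reality it stores Draft.js state).

```python
# Rough sketch of how a Volto page stores its blocks (illustrative only;
# block ids are UUIDs in practice, and the field contents are assumptions).
page = {
    "title": "What does excellence mean?",
    "blocks": {
        "block-1": {"@type": "title"},
        "block-2": {"@type": "text", "text": "Everything begins with a question ..."},
        "block-3": {"@type": "image", "url": "/images/campus.jpg", "align": "left"},
    },
    # blocks_layout defines the order in which the blocks are rendered.
    "blocks_layout": {"items": ["block-1", "block-2", "block-3"]},
}

def ordered_blocks(content: dict) -> list:
    """Return the block data in rendering order."""
    return [content["blocks"][uid] for uid in content["blocks_layout"]["items"]]
```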
They can upload videos, audio, whatever. So videos are the next thing. Here you see a video example. Universities mainly — or at least partially — target younger folks, and I can tell from first-hand experience: when I ask our interns how they want to learn — do you want the big book or do you want to watch a video course — they all go for the video course. Sure, you can sit there and watch it. Good. So yeah, YouTube is incredibly important. Of course they have a YouTube channel, and all we do is use the YouTube block — or in that case actually just a field where you enter the YouTube video — and it shows up. So that's super easy, but you can still switch between the two. The next thing we have is audio. One of the ideas that we came up with together with the agency and the client was to combine audio with a slideshow. Listening to audio, if it's done very well, might be interesting, but it could also be a bit boring — especially if you're visually oriented, just listening to audio might not be enough. So we came up with the idea of having this audio track and then allowing the editors to decide when to show the next slideshow element. You can see that here: this is a splash screen animation, and then you hit play. Then you see on the left side — I'm not sure if you can see it well — the audio track, and it will switch soon to another image; you see a small line there, and then it switches to the next one. And this is something that's completely configured by the editors: they can go to Plone and say, okay, after 50 seconds this image shows up, and then this one. That was the idea. So those were all the multimedia elements that we have: text, audio, video, and image galleries. Then they also wanted to have a blog. Their idea was that scientists, wherever they are — in Alaska or wherever — can write blog posts about it. That would be pretty cool. And they wanted a blog with a simple publication workflow, so we created a blog section. The blog is called "HU unterwegs" — H.U. on the road, so Humboldt University on the road. You have an overview page, this is the navigation, you can go to the blog section and click there. And it's super easy, it's nothing special: you have an image — a lead image actually — then you have the legal information, and you have the standard text stuff. Nothing special. In the blog you actually don't even have a sidebar; that was something the designers decided — it shouldn't have a sidebar, because only the answer or the question should have this scientific thingy on the right side, to differentiate it. So that's standard Volto. It's almost boring — I already showed you that quite a few times during the conference: image left, right. Nothing really special here, so I could as well just skip that.
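The editor-configured audio slideshow described above essentially comes down to a list of (timestamp, image) pairs attached to the audio track. The following is a minimal sketch of such a data structure and the lookup it enables; the key names and values are assumptions for illustration, not the schema used on the Humboldt site.

```python
# Sketch of an editor-defined audio/slideshow timeline (field and key
# names are illustrative assumptions, not the real project schema).
slideshow_timeline = {
    "audio": "exzellenz-interview.mp3",
    "slides": [
        {"start": 0,  "image": "splash.png"},    # splash screen shown at play
        {"start": 50, "image": "campus.jpg"},    # switch after 50 seconds
        {"start": 95, "image": "laboratory.jpg"},
    ],
}

def current_slide(timeline: dict, position: float) -> str:
    """Return the image that should be visible at `position` seconds."""
    visible = timeline["slides"][0]["image"]
    for slide in timeline["slides"]:
        if position >= slide["start"]:
            visible = slide["image"]
    return visible

# Example: current_slide(slideshow_timeline, 60.0) -> "campus.jpg"
```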
Another thing they wanted was profiles for the top scientists — to show them and what they're doing. I mean, you know how that works at universities: you get one of the top-notch scientists and then you want to show them and have profiles of them. So this is also something that we did. You go to the navigation; the first three parts — questions, answers and people — are actually fixed, and the rest of the navigation is plain Plone and flexible. You have this masonry view here that we did — we just grabbed a random React library — and you can go to the individual page. This is of course dummy content; Chomsky isn't at the Humboldt University. There you see the edit view, and that's an example of a standard content type. You can still do standard content types in Volto: you just create them like you're used to and they will show up like that. So with zero React knowledge you can do a content type like that. You just create the schema — XML, Python, through the web, whatever you want — and it will show up like that, and you can use it. And that's good enough if you have content that's highly structured, like these profiles: you have a name, you have the number of publications and the biography, and that's it. You don't need that flexibility. We could as well have done that with the fancy Volto editor, but there was no reason to, so we didn't. And that's almost too boring to show, because it's standard content editing. We wanted to have standard content management capabilities, and this is how it looks. As I said, the first three sections are kind of fixed, because those are the questions and the answers with their overview pages — I will skip that here — and the rest is something that the editors can fill freely. So we have this overview page for the excellence cluster. It has listings, as I said — nothing special. You can add images there if you want, left or right, the same as in the blog. So we basically have three content types that are pretty much alike: the question/answer content type, the profile content type and the blog content type. I know that on stage yesterday I said that I want that single content type, but there's a good use case for having multiple content types if that makes sense to your users. Plone's strength was always to be super flexible, and I don't want to take that away from Plone. So it's totally fine to create a bazillion content types if you have that use case and if it makes sense — like Rodrigo said in his talk, we actually migrated a client from using a single content type to multiple content types, because that just made sense to us. So it's not that I dislike content types — just so you don't get the wrong impression. Those were the basics, but I would like to share a few pretty common requirements we see when we do Volto pages. When we at kitconcept do Volto pages, we have recurring requirements, which is something where Plone really shines, because it comes with a lot out of the box. Usually you can just go to a client meeting and they ask you: can you do multilingual, and whatever else — and you just say: yeah, check, check, check — and then you have the deal, more or less. It's not that simple with Volto yet. We're getting there.
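Coming back to the profile content type for a moment: as a concrete illustration of the "you just create the schema" point, here is a minimal sketch of a Dexterity schema using plone.supermodel and zope.schema. The interface name and the three fields (name, number of publications, biography) are taken from the talk, but everything else is an assumption rather than the actual project code.

```python
# Minimal Dexterity schema sketch for a "scientist profile" content type.
# The interface name and field ids are illustrative assumptions.
from plone.supermodel import model
from zope import schema


class IScientistProfile(model.Schema):
    """A profile page for a researcher."""

    name = schema.TextLine(
        title="Name",
        required=True,
    )

    number_of_publications = schema.Int(
        title="Number of publications",
        required=False,
    )

    biography = schema.Text(
        title="Biography",
        required=False,
    )
```

Registered as a content type, a schema like this is rendered by Volto's standard forms with no frontend work — which is exactly the point made above.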
But I wanted to like share a few of the standard requirements that we have. So the first requirement is multilingual, right? Like actually all the sites except like the first two Volto sites that we did had that requirement multilingual, right? And I used to work at UPC in Barcelona for two years and I worked with Ramon and Victor. This is where I met them. And I know how complex like multilingual is, right? It's an incredibly complex topic and it's something that not many systems in the world get right, right? And it's still like, it's hard. So what's the status of multilingual in Volto? So the basics in place. The basics means we have the language negotiation, that mechanism, right? That when you like go to the site at first, that it like does those like three to four checks and tries to figure out what kind of language you would like to have and then it shows it, right? And then if you switch, it stores the cookie with your language preferences, right? Which lately like lots of clients tried to try us to like remove because of GDPR issues. They came to us and said, like, can you remove the sticky bit and the session cookie and the multilingual cookie maybe, right? And we were like, we could, but you don't want to do that, right? So yeah, but back to multilingual. So the language switcher was also trivial to implement, right? It's just react like setting a cookie. So like every intern like with two weeks of experience can do that, like with react, right? So that's super simple. So that was like more or less enough for us to like get our clients started and get our clients happy. We were quite open with that, right? If a client asked us for a requirement, I won't lie to them when it comes to Volto, right? I say like, okay, this is like something new that we're building and this is the stage, right? And like if we have enough budget, then we can do that, but it could also be something that's like hard to implement, right? And just tell them, okay, we have the option to like go back with old-plone and so far none of the client wanted to go that way. So we implemented multilingual. The good thing is that Victor told me that it's like that's that everything like on the rest API level and stuff is in place. So we don't have to migrate anything. It's just something that's missing on Volto now, right? And what's missing is like actually the direct link between like the like the content object that are like deeply nested, right? If you go like to the down to the content tree to a page and then you want to switch to the language, currently is switched back to the root, right? Which is something that quite a lot of systems actually do that don't do multilingual right. And it's something that we'll definitely like want to implement in the next like I'd say like three or four months because like we have that requirement a lot and if we have budget and a project over, we will like most likely implement it. And the other thing is then like a bit harder, maybe because it includes like UX is the the the the side by side view of like multiple languages, right? So you have an English version you want to translate it to maybe Italian and then you have them like side by side so you can translate it, right? Because we have blocks now and yeah, that might be a bit harder. But anyways, I'm pretty sure that we will like that we'll get there, right? So multilingual is kind of like a check with some like additional loads, but we'll get there. 
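The language negotiation described above — a few ordered checks on first visit, then a cookie that remembers the visitor's choice — can be summarised in a few lines. This is a deliberately simplified sketch, not the real plone.app.multilingual or Volto implementation; the cookie name and the supported languages are assumptions.

```python
# Simplified sketch of language negotiation (not the real Plone/Volto code).
# Order: explicit language cookie, then browser Accept-Language, then default.
SUPPORTED = ["de", "en"]
DEFAULT = "de"

def negotiate_language(cookies: dict, accept_language: str) -> str:
    # 1. A cookie set by the language switcher wins (cookie name is assumed).
    cookie_lang = cookies.get("I18N_LANGUAGE")
    if cookie_lang in SUPPORTED:
        return cookie_lang
    # 2. Otherwise walk the browser's Accept-Language preferences in order.
    for part in accept_language.split(","):
        lang = part.split(";")[0].strip().lower()[:2]
        if lang in SUPPORTED:
            return lang
    # 3. Fall back to the site default.
    return DEFAULT

# Example: negotiate_language({}, "en-US,en;q=0.9") -> "en"
```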
Another really basic requirement — for actually all of our sites, right from the start — is accessibility. In Germany, if you have a public website, a government client or anything like that, you have to sign a contract that references WCAG and so on. You have to make sure it's accessible, no way around that. And at kitconcept we have lots of clients that are public institutions, universities and government bodies — that's a wonderful, big part of our client base. So it's clear that we need Volto to be accessible. And the state of accessibility is not that bad. There are still people who think that if you do a JavaScript site, there's no way it can be accessible. That's just far from the truth; that's not the case. Actually, the tools that are around for accessibility are quite sophisticated. You have static code analysis, you have Cypress tests, so you can automate a lot of things. That's not everything, but it's a pretty good base. At the Beethoven Sprint we put the basics in place: we have static code analysis, we have acceptance tests that do the accessibility checks, and we run them both on Volto core, where they pass, and on our client projects right from the start. And then it's not that much of an issue, because that's the good thing about CI and automated testing: if you introduce something, the CI shows you right away — you did something wrong, you forgot something, so please fix it. We also have clients that started to do audits, and for the Humboldt University we will actually get an external audit. The Humboldt University hired an external agency to do an accessibility audit of our site, and we will hopefully hear from them soon and see what they think about the accessibility. For another project we also have that, and there's also the possibility of a European-Union-funded project where we can get another accessibility audit. So I think we have a pretty good base already and it will improve even further. Paul is doing a fantastic job there as well and helping. So yeah, the status is quite good. Then we come to kind of a delicate topic, which is loading time. This was actually one of the reasons why I moved away from Plone and to the JavaScript world in the first place: Plone was just too slow. That was the thing. I mean, I like Plone, I like Python and everything, and I didn't like JavaScript. The only reason I tried out Angular and React and so on at some point was that Plone was too slow — the approach was too slow for our use case. So it went on from there. In every project, best practice is to have a kind of performance budget, which is basically the bundle size, because that's the main limiting factor. In modern JavaScript — or even in Plone 5, let's face it — you have a huge bundle of JavaScript. If you don't watch that, you're screwed anyway. So we have a huge bundle, and if you don't do server-side rendering, then you have to transfer everything to the client, load the huge bundle, load your CSS, then execute that in your browser, and only then can you show the website.
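One practical way to enforce such a performance budget is a small CI check that fails the build when the gzipped JavaScript exceeds an agreed limit. The script below is only a sketch of that idea; the build directory and the budget figure are assumptions, not the actual kitconcept or Volto setup.

```python
# Sketch of a CI performance-budget check: fail the build if the gzipped
# JavaScript bundles exceed a size budget. Paths and the budget value are
# illustrative assumptions.
import gzip
import sys
from pathlib import Path

BUDGET_KB = 250                               # assumed gzipped-JS budget, in kB
BUILD_DIR = Path("build/public/static/js")    # assumed bundler output directory

def gzipped_size(path: Path) -> int:
    """Return the gzipped size of a file in bytes."""
    return len(gzip.compress(path.read_bytes()))

def main() -> int:
    total_kb = sum(gzipped_size(p) for p in BUILD_DIR.glob("*.js")) / 1024
    print(f"gzipped JS total: {total_kb:.1f} kB (budget {BUDGET_KB} kB)")
    return 0 if total_kb <= BUDGET_KB else 1   # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```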
So this is something where Volto is a lot, lot better than standard Plone, but loading times are still an issue, and you don't get fast loading times for free. The more you add, the more you have to pay for it: for every single thing you add to the bundle, you pay in terms of the performance budget, because your bundle grows bigger — you can do code splitting and such, but in the end my experience is that if you don't have a performance budget and you're not careful with what you add, you will screw it up. You can screw up any site. Actually, when I did my first Gatsby site — and Gatsby is blazing fast out of the box, you get 99 points on PageSpeed Insights from Google and so on — what I did was add a custom font, because we use that in our corporate design. And immediately everything dropped; I got 60 or something. So I broke my Gatsby site, my first one, with basically my first commit. You can screw up the best systems on Earth if you do things wrong; no system on Earth will prevent you from that. So we worked together with an agency for that design, and the agency came to us and said: okay, we want that image gallery, we want the audio player. The Humboldt University came to us and said: we want an HTML block so we can just enter HTML. And every single time they came to us I said: yeah, sure, we can do that, but that will hurt performance — at some point we have a certain budget, and if we push that too hard, your site will be slow. And the agency always came back and said: hey, you know what, that's super easy — for the image gallery, just do what Netflix does. Super easy. No — the web is not like an app. So let's have a look at what we're competing with here, because that's the perception of clients and also of agencies: they don't have an in-depth understanding of the technology behind it and they just expect it to happen. This is something that Albert also said: we're not competing with WordPress and Drupal, we are competing with Google and Facebook on the UX level. And let's have a look at what we're up against. This is what we're up against: the Netflix app is about 60 megabytes on your phone — that's on an iPhone — and Facebook is about 170 megabytes. And people just think that whatever they have on their phone is the same as a website. So those apps have this huge budget they can use to offer a rich user experience and do all the fancy things an app does, and for the user it's the same — they just look at it and say, it runs on my phone, so why not do it? So this is what we're up against. And that's not the only thing — those are only the frontend requirements; the image gallery and such, that's easy. We also have the editing UI where we add lots of stuff. I mentioned the HTML block: we added a library that does syntax highlighting for the HTML in the blocks, and that's about 20 kilobytes gzipped. It's all Paul's fault, by the way — he wanted to have that. And we ship that in our bundle right now, so we either have to do proper code splitting and lazy-load that thing, or we have to kick it out.
But yeah, anyway, just as a side note: we won't kick it out, no worries. But this is what we're up against. On the one hand we have all those requirements, and on the other hand people expect it to be fast — because the first thing the Humboldt University did when we shipped it to them was send us a PageSpeed Insights report with the numbers. The thing is — Anton gave that great talk about perceived performance versus actual performance — all the screencasts that you saw here I did on 3G. Not by choice, but because the Wi-Fi in my hotel room really, really sucks. I did all of them on slow 3G, seriously. Please try that with standard Plone. And it was really quick — I don't know if you noticed; to me it felt quick when I checked it — the editing experience and the switching between the pages. That's something you would never, ever be able to do with Plone on a slow 3G connection. Okay, so let's revisit: what can you do with Volto, to sum things up? I would say quite a bit. Of course I'm biased, so don't believe me — look at the facts, look at the source code and everything, try it out yourself. But I showed you a website that's based on agency requirements, and they pushed us quite a bit in terms of what the web can do, because they wanted an exceptional design that deviates quite a bit from what lots of other websites do. And then you realize: oh, that's not that easy — I never tried that and it's not that easy — and it's something you don't see if you just look at the site, because these are subtle differences. But we could make that work, because we have React. Look at the masonry thing: I first looked at the design and I was like, I don't know. But then you just choose a random library from the thousands of libraries that you have, and it just does the trick. So that's easy — other things are really, really easy. So I think we could do a lot. Let me show you one other use case we have, from the opposite end of the spectrum of websites that we do. Dylan said in Tokyo — and I hope I can quote him on that — when we showed him Volto: I don't know about Volto; I mean, there is a use case for ugly websites. And I was like: what? So I strongly disagree with Dylan. At least at kitconcept we don't do ugly websites, no matter what client that is. Though I think I kind of understand what he means, because I think he's also doing lots of government client websites and such. We're not allowed to show you the actual website of our government client, but you can take any high-level government client in Germany and the website will look something like this. So, disclaimer: this is not our client, this is a different site. But it doesn't matter, because they all look the same. There are, I think, 12 or 18 highest-level public institutions in Germany and they all look the same: they have the same design manual and the same typography, so there's not much flexibility here. And this is how the actual website looks: you have the header, you have the navigation, then you have this slider here that usually shows the head of the ministry or whatever that is — because that seems to be really important — and then you have a grid-like element with two elements.
So that shows the news, actually. And then you have three elements; then, if they're really hip and up to date, they have a Twitter feed, they have a YouTube video, and then a few other things. So that's one of the other projects that we did with Volto, which I think is a bit at the opposite end, and that also worked quite well for us with Volto. So I think, at least within the spectrum that Volto usually covers, Volto can actually cover those use cases. To summarize and sum things up, I think Volto is a good choice for large government sites and for university projects. You have to be careful, of course, with every project, to check what Volto can do — and you shouldn't just throw Volto at 500 Plone instances and expect everything to work, of course. There's always a good reason to choose Volto and a good reason not to choose it, like with every other system. I think it's also a good choice for large corporate intranets, which is one of the use cases we have right now. And it works for design-heavy websites that push the boundaries of the web platform, as well as for maybe a bit more boring or traditional websites that just need to scale. And that was about it. Thanks for listening. I'm Timo — tisto on GitHub, Timo Stollenwerk on Twitter — and that's my plone.org email. Feel free to reach out to me if you have any questions. Thank you. Thank you, Timo. Is there any question? Come on. Okay, come on again. Hi, Timo. Thanks. Then I'll just make up a random question. Yeah, thanks. You have to continue talking, right? Yeah. So, you can still do content types. That's really — wow — you don't have to do anything except the schema. What do you think about the next months? Because that's the thing we've also been talking about at the Beethoven Sprint: the properties page with the normal content type, and the stream with the blocks, which are really cool for the multimedia and the freakish editors that want to control everything — and some of the metadata, for example with news items and events, which is on the properties page, but you also want it in the stream. Can you tell a bit more about that? Because you want to put them in the stream, but you have to edit them on the properties page, and I'm afraid that our entry-level editors will get confused switching between the two and wondering where they can find the start and end dates of a news item or an event and these kinds of things. As always with Volto, that's not a technical problem but a UX problem. And actually, to tell the truth, we did all the variations you can imagine. We did the standard Plone thing first: description and the standard lead image. So we have a block that has a lead image plus a title, and that's fixed at the top — you can do that easily, that's not a problem. We also have quite a few clients that ask us to separate that specifically: to have a teaser, and sometimes even a different teaser image with a different subline and rights and everything. So sometimes we have four fields, and that's duplicated, so we have eight fields just for the preview. There are clients that ask us for that, and we also have clients that ask us to generate it. So it's something we have to decide as a community: what should be the default? That's a process we have to go through. But Plone is so flexible that you can cover all the use cases depending on your client's needs.
And this is what we always did. And this continues to work. So did I answer your question? Okay. So thank you, Timo, again. Thank you. Thank you.
|
The Humboldt University Berlin was chosen as one of eleven universities in Germany to hold the title "University of Excellence" in July 2019. The German excellence initiative aims to promote cutting-edge research and to create outstanding conditions for young scholars at universities, to deepen cooperation between disciplines and institutions, to strengthen international cooperation of research, and to enhance the international appeal of excellent German universities. The Excellence website of the Humboldt University Berlin was developed by kitconcept in mid-2019 with Volto, the new React-based frontend for Plone. Volto will become the new default frontend for Plone 6. This talk will outline the challenges we faced and the solutions we found when developing a cutting edge website for one of the largest and most renowned universities in Germany.
|
10.5446/55138 (DOI)
|
Music My colleagues have addressed deep intellectual questions. My topic, human population growth, is intellectually simple and straightforward. Nevertheless, it's probably of more concern for mankind than any of the other presentations you've heard here during this past week. I'm not a demographer and I'm not a specialist in the subject, but I want to show you that by reading a few comprehensive books on the subject, you can all become experts and it's the purpose of this talk to generate as many experts as possible on this extremely important topic. The newspapers are full of articles about pollution, the increase in the atmosphere of methane and carbon dioxide, of the overfishing of the oceans. A little less publicized is what is even a more serious problem and that is the decline of biodiversity. It is estimated that there exist 10 million different biological species, animals and plants. But 50,000 species are disappearing each year in the present time. That means an extinction rate of one half of one percent per year and this extinction rate has never been as high since the time of the dinosaurs. And this disappearance is due to the action of human beings who cut down tropical forests and wetlands are disappearing and so on. And it has led in a few ways on a very small scale to a curious alliance between ultra-right, deep religious groups and ultra-left, green groups. Because they both deplore that mankind is destroying the creation of the supreme being. Well, here are two highly recommended texts, very recent. The first book is really labeled as agnostic. That means that he has absolutely very careful to insert any politically charged statements. Just presents facts. Second book is maybe a little more readable, but it has a few social, somewhat of a social agenda. Not very pronounced, but just enough to make it scientifically maybe not completely objective. Now, this is the basic curve in how the human population has grown over the past six thousand years. And it looks a little bit like as the specific heat of helium near the lambda point. And really mankind is approaching a kind of phase transition. And I think all human beings and especially politicians should be very worried about a phase transition in human society. Nobody disagrees with the numbers because you can't. And everybody agrees that at the present time the population increases by one hundred million people per year. There's no question about that and there's no question that we will move from five point seven billion to six billion people. That is six million by the year 2000. Just imagine every month the birth minus death adds a number of people equal to the population of Sweden, which is about eight and a half million. This month we had eight and a half million people next month and so on. That is the problem. So let me reinterpret this curve and see how it really historically developed because on this scale you cannot read what really happened. Now we all descended according to modern DNA studies from about 10 to 20 thousand individuals who lived about 100 to 100 thousand years ago in a part in Africa. And all human beings descend from those. And it took about 100 thousand years for the world population to reach one million. And that means that the population doubling time in those early periods, prehistoric periods, was 15 thousand years. Human beings barely survived. Clearly some tribes completely disappeared. Others were a little more lucky. 
And clearly sexual reproduction was barely adequate to keep the human species alive. Then came the year 8000 BC. It's the beginning of the first agricultural revolution. For the first time, people stopped being gatherers and hunters started to grow wheat or rice. Both about the same time in China as in Mesopotamia. And they began to live in villages, small cities and human civilizations as we know it now started. And then it took about 10 thousand years from 8000 BC to 1750 AD for the population to increase by a factor of 1000. So instead of one million people, we had here we had one billion, about three quarters of a billion to be exact in 1750. That means a doubling period of a thousand years because you had 10 thousand years to grow. So you get a factor two to the power 10 that is 1024. So in that period, the human population increased by a factor 1000. And you see the doubling time dropped precipitously from 15000 to 1000 years. What happened in the last 250 years? In 1750, the second agricultural revolution comes into play. That is that due to technological progress, due to use of fertilizers, due to improved transportation, which introduced potatoes from America to Europe and so on, the people were able to produce food at a higher rate. And from 1750 to the 1950, the population increased by a factor of 3. That's a factor 3 over 200 years, that means a doubling time of 127 years. So it dropped from 1000 years to 127 years. What happened during our lifetime? In the last 50 years, the population doubled in 50 years from 3 billion to 6 billion. Why this most recent increase? That is the revolution of public hygiene. We all think that public hygiene started more than a century ago in Europe and Northern America. That is certainly true. But only in about 1950, the well-known results of public hygiene and medical knowledge against infectious diseases and so on were introduced into the most popular part of our globe, Africa, Asia and Latin America. And that led to this enormous decrease in the doubling time of the last half century. And because of the same medical and public hygiene improvements, of course, the life expectancy of humans increased. It was for Western Europe about 45 years at the beginning of this century. Now we are all happy that we may be expected to reach the age of 80 or beyond. But this means that for the first time in history, the human life expectancy is larger than the doubling time of the population. So a very significant fraction of human beings are alive today, of those who were ever born. And this will be even a larger percentage 50 years from now. Then maybe about 10% of the people who ever lived would be alive. So this is the problem, which was partially due to very beneficial effects, like improved public health and medicine. And also at the same time you had to feed these people, and fortunately there was the green revolution that caused the food production per acre to increase by a factor 3 during the past half century. Now is the big question. These are the facts. It's all history. What about the future? And here we have to look. This is still facts because we are beyond 1990. We have to look a little more carefully at that slope of the population increase on that linear scale. And you see the annual growth rate in percent is of course related inversely proportional to the doubling time. 1% growth rate corresponds to a doubling time of 70 years. And you see here we are at about 50 years doubling time. 
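The doubling-time arithmetic used throughout this part of the lecture — ten doublings giving roughly a factor of 1000, and the rule that a 1% annual growth rate corresponds to about a 70-year doubling time — follows from the basic exponential-growth relation, restated compactly here:

```latex
% Exponential growth with constant doubling time T_d:
N(t) = N_0\, 2^{\,t/T_d},
\qquad\text{so ten doublings give } 2^{10} = 1024 \approx 10^{3}.
% For a constant annual growth rate r, the doubling time is
T_d = \frac{\ln 2}{\ln(1+r)} \approx \frac{70}{r\,[\%]}\ \text{years}
\qquad\bigl(r = 1\%/\mathrm{yr} \Rightarrow T_d \approx 70\ \mathrm{yr};\quad
r \approx 1.4\%/\mathrm{yr} \Rightarrow T_d \approx 50\ \mathrm{yr}\bigr).
```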
But what is the most interesting thing is that for the first time in human history, the annual growth rate dropped, reached a maximum. That means that the second derivative of that growth curve, which you couldn't recognize, has gone through zero. And all our hope is that the percentage growth rate will continue to decrease. For the first time it decreased not because of terrible things like war, hunger, and pestilence, but due to population planning. And I think that is the only one great hope for the future. Now of course I've only talked about global averages so far. There are clearly important distinctions between the early developed world in which we are fortunate to have been brought up in most of us. And you see there the population growth rate is clearly quite significant lower than in the countries that make later demographic and economical transitions. Let me first discuss what happens here. The growth rate is still not zero, which you would need in the steady state. As long as we keep growing it, it would never end. So eventually we have to reach zero growth. But Europe as a whole is essentially there. It has essentially a steady state population. There is negative growth in Germany and in Eastern Europe, in Italy, is about zero. The only exception in Europe, I'm sorry to say, is my native country of the Netherlands, which still grows in the most densely populated part of this globe at an annual growth rate of six tenths of one percent. That means a doubling time of about 110 years. Which they shouldn't afford. And they can only live that way because they use the food production from an area that is ten times as large as the size of their country. The other big exception in the industrial world is the United States, of which I'm now a citizen. That country population grows at one point two percent a year. But that is due by a factor two, fifty percent of that rate is due to immigration. It's the only country in the world where immigration causes a significant percentage of the growth. All right, let's now look at one of these countries in the late transition or pre-transition. I'll just give you one example. That example is Iran. In Iran in the seventies, the Ayatollah Khomeini decided that women should be marriageable at the age of eight or nine. And the more Muslims were produced, the better to get the devil of America in check. This population policy backfired. Now Iran has a terrible problem. More than fifty percent of the population is under seventeen years old. And the Ayatollah Rafsanjani, who just retired, has re-read the Koran and has promoted family planning. Clearly, the situation in each individual country is different. It's very interesting because there are such good data to look at the population of Egypt through the centuries. And you see, five hundred, six hundred BC, it had the population of about twenty five million. And then it dropped at one point as low as two million in the eighteen hundred. And you see the drops occur due to the scourges of the third horsemen of the apocalypse. War or pestilence, the plague, and the plague again. As you know, in Europe, the population dropped by thirty percent between thirteen forty and thirteen sixty due to the plague epidemic. Clearly, that type of pestilence we are trying to avoid. But it's by no means clear that there couldn't be a virus much more serious than the AIDS virus to threaten us in the next quarter century. 
So our only hope is that, rather than relying on population balance due to these biblical catastrophes — war, famine, pestilence, and the fourth thing, which we don't have to worry about: the fourth thing mentioned in the Bible, in the Revelation of St. John, is the wild beasts, and we have killed most of them, so we don't have to worry about those. Our real hope is to keep the population in check by family planning. Now these are the United Nations Population Council projections, and you see they vary by orders of magnitude. Instant replacement means that from now on every woman would, on average, produce no more than two point one children. I think two children per woman is emotionally reasonable — but two point one, because some people die before they reach reproductive age. And if that happened instantaneously now, the population would still grow due to the inertial effect that the age distribution is skewed towards the young at the moment, but eventually we would reach a steady state under ten billion. In the high projection we would approach 30 billion or more, which would not be so pleasant. And maybe we can have a reduction in the population if we really plan carefully. Now, you see, demographic forecasting is like long-range weather prediction: there are too many factors, and no mathematical model puts all the factors in. Nobody knows yet what will eventually determine the slowing down of the population growth. And here you see predictions that have been made; they range from one billion people to 100 billion, depending on different types of limiting factors. Now you might say these people are clearly wrong, because we are already at six billion. But nobody knows whether six billion is really sustainable indefinitely. Maybe we should go back to one billion so that the human race can live for another millennium — that is the issue. So even these low predictions may be correct. We don't know; most are clearly in the range between 10 and 20 billion, which would be a factor of three more than we have now. Yeah, that brings me to the end. How many minutes do we have? Five? Thank you, that should be enough. How about solutions to this big problem? At the very end I'll mention three that should be politically acceptable to almost every nation, and the United Nations could put pressure on those few nations who wouldn't agree. But before I do that, I'd like to ask the philosophical question: why should we try to attain the maximum possible number? Would a total world population of one billion, as existed 200 years ago, not represent a reasonable compromise between quantity and quality of human life? These are clearly value judgments, and clearly this problem will require a revolution in religious and societal thinking. Clearly the command to go forth and multiply, which was very pertinent and apropos when it was first divulged a few thousand years before Christ, is not applicable today. Why? Because the human population grows, but the size of the earth remains finite. I think it's completely wishful thinking that we could colonize outer space, as, I must say, some of my physics colleagues have suggested. But these same questions have been raised before. Fifty years ago, Julian Huxley wrote an article in Harper's Magazine in which he says: why in heaven's name should anyone suppose that mere quantity of human organisms is a good thing, irrespective either of their own inherent quality or the quality of their life and their experiences?
And even 100 years before that, in 1848, John Stuart Mill wrote, there is obviously room in this world, even in old countries of Europe, for a great increase in population. But I confess, I see very little reason for desiring it. So what are the three items that could help? And which I think the majority of the people living in the world could agree on. You see, because even in the developing countries like Kenya and Bangladesh, for the first time people are becoming aware that more children makes them poorer rather than richer. Life gets so bad and life gets worse if they have more children. And that is the main reason why the population growth rate in the past 10 years has somewhat slowed down. And the people, both men and women in Kenya, don't desire anymore to have a family with eight children. They now would rather prefer to have 4.5 children. But we should make the means available for them to reduce that growth rate. And rather than impede family planning methods, we should promote them everywhere we can. So these are in conclusion my three solutions. They are not my personal. They are, you can read them in Cohen and many other books. And they should be politically acceptable except to the Taliban and Afghanistan and a few other places like that. One is educate and empower women. Two, educate men. And three, promote the distribution of contraceptives. Thank you very much.
|
The 1997 Nobel Laureate Meeting in Lindau was dedicated to physics, but the lecture given by Nicolaas Bloembergen, a prominent Dutch-American physicist, who won the Nobel Prize in Physics in 1981 for the “contribution to the development of laser spectroscopy”, was on a more universal topic, namely human population growth. Bloembergen stated he’s not a demographer or a specialist, but the audio recording of this lecture brims with his interest and concern on the subject. Bloembergen briefly described how the first agricultural revolution propelled human population towards the 1 billion mark, kept steadily increasing over the centuries, and swelled from 3 billion to 6 billion in just the second half of the 20th century, mainly due to vast improvements in public hygiene, particularly in developing countries. Since the time of this lecture, the world’s population has kept increasing by about 100 million people per year and has grown in total by an enormous 1.4 billion. How much is too much? Then as now, there are no definite answers to this question. Long range data planning is very difficult to conduct, and there are too many variables to provide reliable numbers. Bloembergen stressed the need for education and family planning, and while this is just as relevant today, demographers note that even very restrictive family planning policies, or a catastrophic mass mortality event would not be able to greatly reduce human population in the next century. The policies put in place by particular governments reflect current approaches to family planning. Most European countries are experiencing negative growth, and facing large population decreases in the coming decades. Governments are introducing incentives for young families to counter an eventual population decline. An opposite scenario can be observed in African countries, such as Kenya and Tanzania, also mentioned in Bloembergen’s lecture. The current total fertility rates in Kenya and Tanzania are 3.14 and 4.83 children per woman, respectively, and these numbers have also decreased in recent years. Bloembergen also mentioned the case of Iran, where more than 50% of the population was under 17 years old at the time of his lecture, a result of the promotion of large families after the 1979 Islamic revolution. However, the total fertility rate has plummeted from approximately 7 children per woman in 1984 to less than 2 children per woman today, and the government is again restricting population control policies in order to offset a future demographic crisis. There are small fluctuations, but in general there is a worldwide downward trend in fertility, even where total fertility rates are highest. As Bloembergen explained, for the first time in history the decrease in human population is due to population planning, instead of war or famine. That in itself is an extraordinary accomplishment for civilisation. By Hanna Kurlanda-Witek Further Reading: Bradshaw, C.J.A., and Brook, B.W. (2014). Human population reduction is not a quick fix for environmental problems. Proceedings of the National Academy of Sciences of the United States of America, vol. 111, no. 46, pp. 16610-16615. Karmouzian, M., Sharifi, H., and Haghdoost, A.A. (2014). Iran’s shift in family planning policies: concerns and challenges. International Journal of Health Policy and Management 3(5), pp. 231-233.
|
10.5446/55140 (DOI)
|
Andrea is heading back to handle the slides. Hello, fellow scientists. What I want to talk about today fits rather closely with what Paul has talked about. It's a situation I think that Paul and Mario and I have been in for much of the last 20 years, that our work keeps interacting in ways that have been profitable for all of us. If we're ready, the first slide, and now we need some lights down. This is a slide taken in Southern California looking out towards Santa Catalina Island, which is back behind there. What one sees across in front of that is a brown line. That brown line is nitrogen dioxide coming from Los Angeles. I live south of Los Angeles. We produce our own gases as well, but this particular case, this pollution is from Los Angeles. It's only in polluted areas that you really see colored gases because the existence of color means that the molecule can be absorbing visible radiation, and that usually breaks it up. In the particular case of nitrogen dioxide, it also breaks it up, but it reacts to form nitrogen oxide, and that reacts with ozone to reform it. It's constantly being made there, and that's why you see this as a characteristic of smog. I will now be looking at some of the same things that Paul did, and that is the composition of the atmosphere in which what one sees is that down to the level of one part and 10 to the ninth, there aren't many things that you can see. All of these gases are relatively stable. The amounts of carbon dioxide and methane, as you've heard, are changing, but other than that, this is the way the atmosphere with small changes has been for a long period of time. However, when you can get down to looking at concentrations that are very much lower down in the range of 10 to the minus 11 for nitrogen oxide in the brown NO2, and even down below that for hydroxyl, then you see the reactive species that control much of what's going on in the atmosphere. The ability to understand what's going on down in this range has depended upon the development of instrumentation that allows you to be able to make measurements there. We did hear from Paul that we had been making measurements around the world, especially in the Pacific. This is right at the beginning in 1978, and I'm holding here a canister in which it's evacuated in the laboratory. You take it to a remote location, and you proceed to fill it and bring it back to the laboratory and see what was there. This was on a trip, the picture was taken by my wife, it was before we were funded for any of this kind of work. The canister itself is shown here, it's a stainless steel canister evacuated in the laboratory and returned for analysis of what's in it. One of the molecules that we started measuring immediately that we had those samples was methane, as you heard from Paul. The concentration which we measured in 1978 was about 1.6 parts per million in the northern hemisphere, a little bit less than 1.5 parts per million in the southern hemisphere. The sources of this are mostly biological. We know that from the radioactivity, the carbon-14 radioactivity in the methane in the atmosphere. They're a source, the average cow produces half a pound of methane per day, and there are 1.3 billion cows in the world, and that makes it an important source. Rice paddies are another source, and so on. 
But the important aspect of this is that if you continue to make measurements of methane, as shown here, 1978 to 1986, we move ahead eight years or so, and instead of 1.6, one finds 1.75, and in the southern hemisphere, instead of being down around 1.5, it was up over 1.6. So there's been a steady growth in the amount of methane in the atmosphere in that time period, starting in 1978 and running up to here. We're now up at around 1,750 parts in 10 to the ninth, where we started at around 1,520, and where in the pre-industrial age the number was around 700. So there's been a very substantial increase in the amount of methane in the atmosphere, especially during the latter part of the 20th century. But that's not the only gas that's been changing in concentration, and the initial measurements of tropospheric ozone were made at a place called Montsouris, which was outside Paris, and this is a 20-year average from 1870 to 1890. And the characteristic that one can see in the amount of ozone present at that time, more than a century ago, was that it was about the same throughout the year; from January to December it had essentially the same amount of ozone, and the amount was on the order of 10 parts per billion. Several late 20th century measurements are shown here, one of them being Hohenpeissenberg, which you already heard about from Paul. And what you see is two things that have changed. One is that the total amount of ozone is larger in every season, but it's much larger in the summer. And this is because of the photochemical production of ozone, which is favored by the sunlight of the summer and the greater humidity. You get more ozone production at that time. So this has been a major change in the composition of the atmosphere over the period of one century, from low ozone here to being seasonally dependent, with a lot of photochemical ozone being formed. One can see this in a dual satellite measurement, in which you subtract one from the other and get the tropospheric ozone. And what you see, during May and June, across the entire northern hemisphere between 25 and 50 degrees north, the red indicating higher amounts of ozone, is a band of high ozone in which most of the developed world lives. And so this is a characteristic that we think is only something of the late 20th century and that wasn't there in the 19th. And one of the major causes is shown by looking at this. You have major traffic problems, and traffic is a source of tropospheric ozone. And if I summarize again what Paul had talked about, these are the three characteristics that one needs for making tropospheric ozone. You need hydrocarbons or carbon monoxide. You need the nitrogen oxides and you need sunlight. And you get the nitrogen oxides by heating ordinary air, which is nitrogen and oxygen, in the presence of some catalyst like your carburetor that will convert N2 and O2 into two molecules of NO. That gives you the nitrogen oxides. The hydrocarbons come from unburned or partially burned gasoline, and, as I've indicated in parentheses, they don't have to come from traffic. If you burn wood and get partial decomposition, you can also get hydrocarbons. Hot fires can produce nitrogen oxides, and of course the tropics are an abundant source of sunlight. And what we have found is that tropospheric ozone in Los Angeles was first found around 1950. 
And at that time it was suggested that maybe it was some special characteristic of the area around Los Angeles that made it susceptible to having ozone formation. What we found out in the last four decades is that it doesn't really make much difference where you are; as long as you have a lot of traffic, then you will get tropospheric ozone, and that's now a characteristic of cities all over the world. The equilibrium between nitrogen and oxygen and nitrogen oxide is one which is always in favor of the nitrogen and oxygen. But if you can set up the equilibrium, the amount of nitrogen oxide that's in equilibrium with the N2 and O2 is very different depending on the temperature. And you can see that it doesn't change the amount of N2 and O2 in the first two significant figures, but the amount of NO has gone up by a factor of 10 to the 10th in this range. And the function of the catalytic converter on your automobile is to take the gases that have already been burned and have given you the power and have produced a certain amount of nitrogen oxide, and then to convert them back to an equilibrium at a lower temperature that has less nitrogen oxide in it. And that, of course, is one of the major ways of trying to keep down the nitrogen oxide concentration of the atmosphere. We're looking here at smog at the Eiffel Tower in Paris. We were measuring the concentrations of hydrocarbons in Paris a few years back. And this is a standard technique that we use called gas chromatography, in which time goes on this axis and the time identifies the individual molecule. And the concentration goes on this axis, and several of them here are off scale. But you'll see molecules in here like ethane and propane and a number of hydrocarbons. And this sample was taken on the top of the Eiffel Tower at midnight. And here was a sample taken a couple of days later down in the traffic. And you can see that in the midst of traffic you'll get very intense concentrations of hydrocarbons, all of which contribute then to the chemistry of the atmosphere there. This is a picture of Santiago, Chile. It's actually a picture taken from a postcard. This is the way it looks on occasion. I want to call your attention to the existence of a major traffic artery that runs from the lower left to the upper right in this picture, because I'm going to come back to that in a minute. This is the look of Santiago much of the time, because they now have very, very bad smog problems. And so with the aid of about 40 students from the Catholic University of Santiago, we had them all over Santiago with empty canisters at five o'clock in the morning and then again at nine o'clock in the morning. So we have a before and after the major traffic pattern for Santiago. And this is what we saw. The major artery runs again, as I indicated, across here. The red indicates higher concentrations of carbon monoxide. And so what we see at five o'clock is that there was a little bit higher carbon monoxide here, which had drained downslope during the night. And then in the four hours in between, during the rush hour, there's a tremendous amount of carbon monoxide that's put into the atmosphere in Santiago. And that's surely then a contribution from traffic. So we measured a very large number of compounds at the same time, and two different kinds are shown here. This is acetylene, or ethyne, and it looks very much like carbon monoxide. 
The concentration was a little bit higher at five o'clock, and then the traffic pattern gets superimposed on it. And that's because acetylene is one of the major products coming out of the traffic in the morning. On the other hand, we have a gas like propane, which was there at five o'clock and was there at nine o'clock, and it doesn't look as though there's much difference. And the reason for that is that propane is not associated with traffic. Propane and the two butanes are associated with liquefied petroleum gas, which is what's used for heating and cooking in many cities, certainly in Santiago and also in Mexico City, where we saw similar effects. And this is simply leakage of liquefied petroleum gas, and that's taking place all the time, and that becomes a major contributor to the ozone problems in Santiago and Mexico City and elsewhere. And then you have molecules like perchloroethylene here, which is a cleaning solvent not associated with traffic, and ethylene dibromide, which is a molecule put in with leaded gas and which was still in use in Santiago. And again, you see the traffic pattern of this molecule leaking out there. So we have a lot of information out of experiments like this that tells us which things are coming from traffic and which things are coming from other aspects of the environment. What you're looking at here are the Petronas Towers in Kuala Lumpur in Malaysia. They are the two tallest buildings in the world, and this is what they look like occasionally, and this is what they looked like last August during the fires in Indonesia. And it was sufficiently bad in Malaysia that the government of Indonesia apologized to the government of Malaysia for the fact that the air pollution there was so bad, and it was coming primarily from the burning of agricultural wastes in Indonesia. Now we're looking at a space shuttle photograph of Western Europe, and what you're really looking at is the nighttime emission of radiation in the visible region. This is what it looks like, and you can pick out all of the major cities of Western Europe here, because that's the major source of that radiation being picked up from outside. But if you take it on a global basis, then you can ask the same question: what is the source of all this radiation? Well, something like 90% of this radiation comes from the cities, just as you saw with Western Europe. But here in Africa and over here in South America and in occasional places in Australia, you see a substantial amount of light being emitted from areas in which there are no concentrations of large cities, and that's the biomass burning that Paul was talking about before. This is a photograph taken in Botswana. This is a fire that was 60 miles, 100 kilometers, long. It burned for a week, and so you can see that a huge amount of material would have been burned in that time, producing smoke, producing hydrocarbons, producing nitrogen oxides. This is taken from the space shuttle looking northward from Namibia over Angola. So you're on the west coast of Africa, in the southern hemisphere, in the tropics, and you're looking north, and what you can see is that fires are just everywhere. It is a very common practice, for clearing off the agricultural wastes, to burn them off in preparation for the next year. So you see huge amounts there. This is taken from the space shuttle in September over Brazil, and you can see from the curvature that this is a very large area. It's in fact all of Brazil. 
It's the equivalent of the entire eastern half of the United States being covered by a smoke cloud, coming again from the burning of agricultural wastes and from forests. This was around World Cup time, and this is a Brazilian child, a young Brazilian, kicking a soccer ball despite the fact that the September smog was there; if that's the way it is, you've still got to practice. Now if I go back to this dual satellite picture here, we're again looking at tropospheric ozone, and now it's September and October, and this indicated there was a substantial amount of ozone out here over the South Atlantic, and the obvious logical source for this was biomass burning. And so in 1992 there were two programs, one of them called Safari, and Paul showed some of the results from that. Safari was largely a European investigation, but at the same time there was a U.S. investigation based on the NASA DC-8 to see what the chemistry was that was going on in the formation of this tropospheric ozone. Here is a fire map for September of 1992, when the airplane was flying, and these are the fires that were seen in Africa at that time, and so there are a lot of fires in Namibia and Angola and Zambia there. The aircraft that we were using is the NASA DC-8; it is equipped as a flying laboratory, and maybe 10 or 12 different research groups will be on it. This is Don Blake, my colleague, and an undergraduate working with us, and this is our air intake; we bring the air into the airplane, and inside the airplane we have a large number of the canisters which we have here, and this is Senior Research Associate Nicola Blake filling these canisters, and again we send them back for analysis to see what the hydrocarbon composition was. And what we found, by comparing our hydrocarbon measurements with other people's nitrogen oxide measurements and the ozone measurements from still another group, was that yes, it was biomass burning that was producing that extra tropospheric ozone; about two-thirds of it was coming from the fires in Africa and blowing out at low level, and some of it was coming back from Brazil, coming in at high level. So it was a substantial increase in tropospheric ozone that could be attributed directly to this burning. Now in 1996 we were flying in the Pacific with two airplanes, and this was just on a flight from Guayaquil in Ecuador to the Galapagos Islands, and this is the entire flight along here, and these are just carbon monoxide measurements. The carbon monoxide in the background should have been somewhere down around like that, but you can see on that flight, which lasted for a number of hours, that about half the time they were running into high carbon monoxide concentrations, instead of being down around 50 or 60, up as high as 300, and that's again biomass burning coming across the Andes, out over Ecuador, and into the tropical Pacific. On the DC-8 aircraft, this represents LIDAR measurements, and LIDAR is simply a technique in which you shoot a beam of light down that is scattered by a particular molecule or material. The upper one would show particles; the bottom one shows ozone. When the light is scattered and comes back up, all you have to do is use the round-trip time and the velocity of light to find out at what altitude the scattering took place, and so the airplane was flying along like this, and down below it for that whole time; this shows half an hour, but it was there the whole flight. 
There is a very intense plume, and then later on the airplane flew through it, and the plume was 130 parts per billion of ozone. This was about 500 miles north of Fiji at an altitude of 5 miles; 130 parts per billion of ozone violates EPA standards in any US city, but this was out over the tropical Pacific, and the question was where did all that come from. And one can trace it back by doing backward trajectories; knowing what the wind directions have been for some time, you can trace it back, and when these were traced back from Fiji, they came back across Australia and then clear over here into Africa in about 10 days, and from the composition we believe that most of the ozone-forming materials actually started in Africa and had still held together as a plume all the way over to Fiji. So it says that pollution episodes a long distance away from where they start are becoming very common. Now if I return to this slide that I started with, I want to point out that we're looking at something which started out as being urban ozone, that is, things formed in cities like Los Angeles, but the general impression one gets is not that there are a city or two producing something here, but that the tropospheric ozone is present everywhere. And that means we need to think about exactly what it is that is going on, and this is now a summary of what's been happening during the 20th century. At the beginning of the 20th century, these were the 13 largest cities in the world. It took one million people in order to be one of the 13 most populous cities in the world. The population of the world at that time was about 1.6 billion. Now in 1995 the population of the world was about 5.8 or 5.9 billion. We expect it to reach 6 billion, so an increase of a factor of 4 during this century, and 6 billion will be reached sometime next year, and then you get places like Tokyo-Yokohama with almost 30 million people living in one metropolitan area. This is a major change in the way that the world's population operates. There are far more people, and more and more of them are living in cities. This map here shows 37 different cities that now have populations of 2.5 million people or more. And of course all of the people living in all of these cities want the same kind of lifestyle that those of us in this room want. And there's no reason that we should have it and that they should not. So the question that we have is how do we accommodate this growth in cities, the growth in total population, and at the same time take care of the environment. This is the expectation for the global population: that we will be somewhere between around 8 and around 9.5 or 10 billion people by the year 2050. And there's some error bar on that. There's not much error bar on the fact that there will be 8 billion people by the year 2025. And so most of those 2 billion people that are added in there are going to live in cities. So the problems of the pollution of cities are going to be increasingly important. But so far I've emphasized population, and that gives a misleading impression, because it's really population times affluence. And affluence is probably more important than population. And just to illustrate, I've shown here a map of the United States with the forested areas in the year 1620, when the Europeans had just begun to come over in any appreciable number. And essentially the eastern half of the United States was virgin forest. By 1850 a lot of the eastern part of it had been broken up. 
The western part was still more or less the same. But now, as you get to the present, we see that virgin forest is essentially gone in the United States. So when we look at all of the forest burning that's going on in the developing world now, what we're looking at is the same thing that was done in the United States in the 18th and 19th centuries, and what was done in Europe in centuries before that. So it's a process that has been part of mankind for a long time, and now we've just found out that there are some severe penalties on a global basis from that. One of those penalties is the amount of carbon dioxide in the atmosphere, shown here in the measurements of Dave Keeling, where you see the seasonal photosynthesis and decomposition, but you're seeing it on this background of growing amounts of carbon dioxide in the atmosphere, and the major contributory factor to that growing carbon dioxide is the burning of fossil fuels. Here are the energy uses in the world in 1995. Traditional energy means fuel wood, charcoal, crop waste, and so on. And in these units that's 1.88 terawatts. And then in the industrial energy, what we see is that coal, gas and oil dominate: 85% of the industrial energy and 75% of the total energy is wrapped up in the fossil fuels. Nuclear and hydro play significant but not major roles, and all of the others, wind power and solar photovoltaics and so on, add up to less than 1% of this. So at the present time, and for the foreseeable future, the fossil fuels are going to be our main source of energy in the globe, and they carry with them the penalty of carbon dioxide. If you put the carbon dioxide emissions in terms of the per capita use of energy, then the United States is way out in front, and what you see for India, Nigeria and Indonesia is that there's very little emitted per person. So in the present circumstance it isn't really the growing population; it's the growing population becoming more affluent and wealthier, moving up the scale of production of energy and the use of that for their standard of living, as is true in the United States and Germany and so on. In other words, if we think about the pressures on the environment, it is partly population, but it is very much more the fact that we have over a long period of time been using energy in a way that assumes that there are no environmental problems associated with it. One thinks about the 19th century: at the beginning of the 19th century the world was just starting to substitute steam power for animal and horse power, and then in the late part of the 19th century they discovered fossil fuels, and in the first half of the 20th century one found that the fossil fuels could give you enough energy to do many things you couldn't do before and thereby raise your standard of living, and that has led to the existence of the developed countries as we know them. And in the second part of the 20th century we found out there was a penalty associated with that, and that penalty is the pressure that we have placed on the environment, which we see in the tremendous amount of carbon dioxide that is going into the atmosphere, its accumulation in the atmosphere, and the problems that that will raise. Global temperature in the last century has gone up about 7 tenths of a degree centigrade, a little over 1 degree Fahrenheit. 
The measurements for 1997 showed it as the warmest year ever, and for the first five months of 1998, each month was the warmest such month: the warmest January of all time was in January of 1998, the warmest February was in February of 1998, and so on. And the Intergovernmental Panel on Climate Change said in 1995 that the balance of evidence suggests a discernible human influence on global climate. That is, the climate is changing and we are at least partially responsible for its changing. As we go into the new millennium, one of the major problems that we have in the world is how we take care of the burgeoning population and their legitimate desires for greater affluence without paying a tremendous environmental penalty, and that means we're going to have to do an enormous amount of work on a variety of pollution problems, not only with the atmosphere but also with water, and in general just how we handle things when there isn't any place to throw things away. Thank you very much.
|
Migrating smog has begun to pollute the skies over oceans in the southern hemisphere, resulting in tropospheric ozone levels near remote islands that would "trigger a first-stage smog alert" in Los Angeles. Tropospheric ozone may thus be regarded as a major atmospheric problem for the 21st century. Long-lasting plumes from biomass burning -- the practice of burning to clear woodland or brush from the land -- travel across Africa and Australia to bring higher smog levels within range of remote locations in the southern oceans, such as Fiji. Tropospheric ozone is a key, harmful part of the photochemical smog found in major cities throughout the world, often as the result of congested vehicular traffic. However, in some cities such as Mexico City and Santiago, Chile, use of liquefied petroleum gas for heating and cooking also can contribute significantly to ozone formation. At elevated levels, it can cause breathing difficulties, increase the risk of asthma attacks, and adversely affect the growth of trees, shrubs, and cash crops ranging from vegetables to orchids. Whether you're in a congested city such as Los Angeles or the seemingly pristine environment of the south seas, the chemistry behind tropospheric ozone remains the same: you need hydrocarbons, nitrogen oxides and sunlight. In the tropics, burning forests give off hydrocarbons, the high temperatures create nitrogen oxides, and there is plenty of sunlight. The data reported stem from a variety of studies, many of which have not yet been published. Some surprising findings have originated from comprehensive NASA aircraft experiments involving a dozen different research groups. In locations more famous for their isolation than their air pollution -- such as Easter Island, the Galapagos Islands, and Ascension Island -- the NASA researchers detected significant ozone concentrations that can be traced back to biomass burning on distant continents, indicating that the smog created by the burning is long-lasting and migrates great distances. In 1996, for example, two research planes flying in the South Pacific encountered ozone from biomass burning on 50 percent of their flights. One airplane flew through a plume of smog about 500 miles north of Fiji in which ozone readings reached 131 parts per billion (ppb). The pollution had traveled over Australia, with the major contributors of ozone likely coming from as far away as Africa. Yet, by the time it reached the south seas, its ozone concentration was high enough that you would say it was a violation of the EPA regulations if it occurred in the continental U.S. Harmful ozone levels remain higher in the northern hemisphere around the world, compared to the southern hemisphere.
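As a rough guide to the chemistry summarized above (a simplified sketch added for orientation, not part of the lecture or abstract), the reactions the speaker describes verbally can be written as

\[
\mathrm{NO_2} + h\nu \rightarrow \mathrm{NO} + \mathrm{O}, \qquad
\mathrm{O} + \mathrm{O_2} + M \rightarrow \mathrm{O_3} + M, \qquad
\mathrm{NO} + \mathrm{O_3} \rightarrow \mathrm{NO_2} + \mathrm{O_2},
\]

a cycle that by itself produces no net ozone; in polluted air, hydrocarbons supply peroxy radicals that convert NO back to NO2 without consuming O3 (schematically \(\mathrm{RO_2} + \mathrm{NO} \rightarrow \mathrm{RO} + \mathrm{NO_2}\)), so ozone accumulates in sunlight. The thermal source of the nitrogen oxides mentioned in the lecture is the equilibrium

\[
\mathrm{N_2} + \mathrm{O_2} \rightleftharpoons 2\,\mathrm{NO},
\]

which lies far to the left at ambient temperature and shifts strongly toward NO in heated air; that is why engines produce NO and why a catalytic converter re-equilibrates the exhaust at a lower temperature.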
|
10.5446/55142 (DOI)
|
Ladies and gentlemen, I find it very hard to believe, but I believe the record shows that this is the ninth time that I have been to Lindau, starting in 1959. I was lucky in my career in that I did something useful when I was fairly young and therefore got a Nobel Prize while I was a lot younger. In fact, it was 30 years ago. I wasn't eligible to attend the first Lindau Conference of the Physicists in 1953, and so I wasn't asked. I might have been eligible to attend it in 1956, but I was on my way from California to England at that time, and either didn't get invited or was overlooked. In 1959, I did get to go, and I have only missed one conference of the Physicists in the intervening years, and that was because something came up that kept me from coming. I'm sorry to have to read this to you, but my eyesight is not very good right now. In a month, I will have had an operation that should make it much better. 1985 is the 60th anniversary of the birth of quantum mechanics. 60 years is a long time. Quantum mechanics was founded in 1925 by Werner Heisenberg, Erwin Schrodinger, and Paul Dirac, and a year later, Max Born added the essential probability interpretation for the wave function. With Dirac's death a few months ago, all four of the founders are gone, but their work will be remembered as long as science is a part of our culture. Heisenberg attended eight of the physics conferences, and I met him at several of those. Heisenberg also went to some of the other conferences in between, so he was here at times that I could not have been here. Max Born attended five. He also was here more often, but to talk to chemists and medical people. Dirac attended every single physics conference starting in 1953, a total of 11 all together. I wish that he were here today. Now, when I have told you about the founders of quantum mechanics, I feel I should mention something about other people who made enormous contributions to the subject. But first of all, let me say a little bit about Schrodinger. As far as I can find in the record, he did not come to any of the conferences in Lindau. However, some years ago, his wife was here, and I had always assumed that Schrodinger had been here, but I can't find it in the record, so perhaps he wasn't. Now, in the course of my lecture about Schrodinger's cat, you may get the impression that I am critical of Schrodinger. And so, I would like to put the record straight by saying something about the kinds of things he did do for which I have no criticism whatsoever. First of all, he found the Schrodinger wave equation. He got that from a variational principle. He didn't use the variational principle very much except to get the wave equation. But of course, starting from variational principles is a good thing to be doing. We heard that as recently as yesterday. Among the things that Schrodinger did was to find a relativistic form of quantum mechanics, and he made great contributions to it. It wasn't as good as that which was made by Dirac, but it was still a very important thing to be doing. Schrodinger found solutions of the wave equation that were very instructive. He found the wave packet for a free particle. He didn't like the result of the calculation because the wave packet spread, and he would have preferred that a wave packet should represent something more classically visualizable as a moving particle, and it didn't come out that way. Consequently, he didn't like the probabilistic interpretation of quantum mechanics very much, but he was not alone in that. 
Schrodinger wrote, I believe it was in 1955, a book of essays called What is Life, and he made very insightful comments in that book. I'm sure I would criticize some of it if I had the time, but it's a remarkable book. Schrodinger gave the first quantum mechanical derivation of Heisenberg's Uncertainty Principle. It may come as a shock to some of you that in Heisenberg's paper in 1927 on the Uncertainty Principle, he did not use one single feature of quantum mechanics. It was done entirely with non-quantum mechanical physics. Good physics, but not quantum mechanical. And it was Schrodinger who gave the derivation using quantum mechanics. Now, here is a remarkable thing in my view about Schrodinger that hardly anybody would know about. In about 1915, he wrote a paper on the vibrations of a chain of particles, a nonlinear chain. In other words, a problem in nonlinear mechanics, which is a popular subject nowadays. Had he pursued the solution a little further, he would have discovered solitons, solutions of nonlinear wave equations that have disturbances that move along and stay together. In the wave mechanics that he worked on later, the wave packet that he had came apart. It came apart because of what we would call dispersive effects. If you combine the dispersive effects with nonlinear effects, then you can make a wave packet stay together. And it always seemed to me a very fortunate thing that Schrodinger did not come to that realization, because if he had, it would have set the course of physics back by perhaps 10 or 20 years. But fortunately, he didn't see what he could have done to make a particle stay together in quantum mechanics. Now, besides the work of the four founders of the theory, who after all have to be chosen a little arbitrarily, there were a number of other people who contributed mightily to the theory. Three sets of contributions involve the density matrix. It turns out that not all of quantum mechanics is to be described by wave functions. In some cases, you need statistical aggregates of wave functions that can be described by some mathematics given the name density matrix. These were introduced into physics, I think first by Lev Landau in 1927, and in the same year by Johann von Neumann, as he would have been called then, in 1927. And then Dirac in 1930 independently discovered the same kind of things. And the density matrix plays a very great role in quantum mechanics, but unfortunately, it didn't play a great role in the thinking that people did about the subject until much later. In 1927, Dirac gave a quantum theory of radiation and developed the relativistic Dirac wave equation. And the quantum theory of fields was perhaps spelled out in full glory by Heisenberg and Pauli around '29 or '30. When you consider such a problem, you have an infinite number of degrees of freedom. And that leads to difficulties which have to be superimposed on the difficulties that quantum mechanics provides free of charge. Now, perhaps I could jump all the way up into the late 40s. Richard Feynman introduced a formulation of quantum mechanics which involves things called path integrals. And the path integrals, as we heard yesterday, are currently very much in fashion. Perhaps I can give you a little prehistory there that will amuse you. There was a Hungarian physicist in 1925 named Cornelius Lanczos. And he invented, by what methods I don't know, a formulation of quantum mechanics involving integral equations. 
There were the Heisenberg-Born-Jordan matrix mechanics and the Schrodinger wave mechanics, and Dirac saw how to put that all together and make quantum mechanics. But Lanczos made use of integral equations. And he wrote an account of this and sent it to Pauli, and Pauli discouraged him from publishing it. Therefore, this is a hardly known part of the history. But that isn't the end of it. In 1927, an American physicist named Kennard, who taught for many years at Cornell University, wrote a paper in the Annalen der Physik in which he solved all of the soluble, simple problems in quantum mechanics by using Green's functions. And Green's functions are very, very closely related to Feynman path integrals. And essentially, except for one little detail, Kennard discovered the Feynman path integral. The one little detail was that he divided the motion up from time t1 to time t2 into two parts. Whereas Feynman took little slices, and if you just redo the Kennard theory using infinitesimal time slices instead of big leaps, you get, with some rather trivial mathematics, the Feynman path integral. And incidentally, Dirac had come to very similar things in the later editions of his book. All right, well, now I think I have done enough in the way of historical remarks. I have to move on in the direction of Schrodinger's cat. In 1925, I was 12 years old and in the eighth grade of school, about ready to begin the study of algebra. I delivered newspapers in those days, and I always read the headlines very carefully, but somehow I missed the announcement of the discovery of quantum mechanics. This was in California. By 1935, I had learned enough quantum mechanics to start doing research in that subject. My thesis advisor was Robert Oppenheimer, who took his doctor's degree with Max Born in Goettingen in 1927. Well, in a very real sense, my father was Robert Oppenheimer, and his father was Max Born. So during one of the Lindau conferences, I went up to Born and tried to explain this to him, that I was his grandson. And he didn't take it too well, because I don't think he liked Oppenheimer very much. And so why should he like one of the lesser offspring? Well, anyway, I learned a lot about quantum mechanics from Oppenheimer and a great deal of quantum mechanics from other theoretical physicists, too numerous to name here, who worked on quantum mechanics after 1925. And with all of their help, I did not learn as much as I wanted to about the physical significance of quantum mechanics. I have had to try to fill in some gaps for myself and hope that some of the younger and older students in today's audience may find my remarks helpful. And if they don't, I hope they will tell me why not. My title today, Schrodinger's Cat, is a timely reference to a book published last year. Unfortunately, I'm not the artist. Now, I don't know whether you can read the fine print, but I'm going to read this to you so you don't need to read that. I have to have the lights on so I can see. Now, this book of John Gribbin was called In Search of Schrodinger's Cat. Gribbin took a PhD in astrophysics from Cambridge University. I don't know whether he ever heard any of the lectures on quantum mechanics that Professor Dirac from time to time gave. And unfortunately, I have no easy way to find out except to ask Gribbin. I haven't done that. Gribbin is now a consultant editor for the New Scientist magazine, published in England, and a writer of semi-popular books on science. 
The subtitle of his book is, and I'm reading some of the fine print, Quantum Physics and Reality, and the cover claims that it is a fascinating and delightful introduction to the strange world of the quantum, absolutely essential for understanding today's world. I can agree with some of that rather immodest claim. Gribbin's book is well written and it is quite entertaining. However, it has a number of non-trivial errors. My opinion that there are flaws in his physics seems to be supported by two reviews of the book that I have read, one by Sir Rudolf Peierls in the New Scientist magazine, and another review by Russell McCormmach, who has written a remarkable book about the history of an imaginary German classical physicist who grew up in the time between Maxwell and Heisenberg. Well, in 1957 a student of John Wheeler's, Hugh Everett, introduced the many universes interpretation of quantum mechanics. I have no time to deal with such an inconvenient fantasy, which as far as I know is not shared by many physicists, but I have to warn you that Gribbin rather favors this approach to quantum mechanics. It may be more appropriate for the astrophysics of the Big Bang than for problems dealing with a smaller part of the universe, but it is quite possible to read Gribbin's book and simply skip over any references that he makes to the many universes, so it's still an interesting book to look at. To me, it's a strange fact that many famous physicists did not like the dominant role of probability theory in quantum mechanics. Among them were Albert Einstein, Louis de Broglie, and Schrodinger, and in later times David Bohm. It is well known that Niels Bohr and Einstein had many arguments on the subject of quantum mechanics, and they considered many Gedanken or thought experiments, including the two-slit diffraction experiment, which has many forms. Fifty years ago, in other words, ten years after quantum mechanics got started, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous EPR paper, which is named after the initials of the authors. This paper deals with a system of two particles, which are somehow joined together initially, and then for some reason come apart and are separated. That leads to a paradox that appeals to some people. My feeling is that it's easily dealt with and one doesn't need to lose any faith in quantum mechanics at all. In fact, one should have his faith strengthened, if anything, but I can't talk about that today. However, in that same year, 1935, Schrodinger published his famous cat paper in Naturwissenschaften. Gribbin's book claims that Schrodinger was a German; perhaps he was, but I think he lived in Austria and was there at that time. Later he became Irish. I have a firm belief that none of these conundrums, none of the paradoxes, were properly treated according to conventional quantum mechanics in the discussions of these distinguished people. And to save time, I'm only going to deal with the cat problem, but the arguments perhaps will give an indication of the approach that I would take to any of those imaginary experimental situations. Let me first remind you of some features of quantum mechanics. Quantum mechanics is a generalization of classical, non-relativistic mechanics. The first thing to do in this subject is to define the system of interest. That means to introduce the coordinates and velocities, or momenta, if you're being more sophisticated, of the various parts of the system. 
There is usually a certain degree of choice here. If the system is taken too large, the mathematics will be too difficult. If the system is taken too small, some important features of the problem will be neglected. In quantum mechanics, a physical system can, I underline the can, have states which are described by wave functions. These may be complex quantities; in other words, they contain the square root of minus one, the imaginary quantity. Wave functions can be used to calculate probability distributions for various functions of the dynamical variables of the system. Wave functions are functions of the coordinates which describe the configuration of the system, and they depend on time in a way described by the Schrodinger time-dependent wave equation for the system. For a reasonably isolated system, there are some so-called stationary states, but a much larger number of non-stationary states are possible. If the system is not isolated, or pretended to be isolated, some extra terms will have to be included in the Schrodinger wave equation to allow for external disturbances, or the notion of the system will have to be enlarged to take more of the universe, but hopefully not all of it, into account. Any observation or measurement made on the system represents a disturbance which should be taken into account in the analysis of the problem. The four founders of quantum mechanics did not tell us much about how measurements are to be made, and what influence, if any, the result of a measurement might have on the subsequent history of a system. In his 1930 book, Principles of Quantum Mechanics, Dirac postulated that a measurement of an observable would lead to a result equal to one of the eigenvalues or characteristic values of that observable, and would leave the system miraculously in a new state, whose wave function was the eigenfunction of the observable corresponding to the measured eigenvalue. Von Neumann's 1932 book, which was based on some articles he wrote in 1927 in the Göttinger Nachrichten, was a mathematical and philosophical classic. In that book, he expanded on Dirac's idea, or perhaps preceded Dirac; I've never been able to tell who came first. He distinguished between two ways in which wave functions could change with time: A, causally, in accordance with the time-dependent Schrodinger equation, and B, acausally, as a result of a measurement. Both of these great men left many questions unanswered. I lectured in Lindau in 1968. I lectured every time I came, but this was one particular lecture. I lectured on the preparation of states and measurement in quantum mechanics. I had both Born and Heisenberg in the audience. I won't tell you what they said. And I lectured again in 1982 about my unhappiness with von Neumann's wave function reduction hypothesis. If I return today to the measurement problem, it is because I learn a lot from thinking about the problem and from talking to a perceptive and critical audience. A very important principle of quantum mechanics was recognized by Wolfgang Pauli in his 1933 Handbuch der Physik article. This is the superposition principle. For reasons which will be apparent shortly, we denote by L and D two possible, suitably normalized wave functions for the system. I hope they will show up a little bit on the slide; yes, they probably will. And L and D will be taken to be orthogonal to each other, which means in a non-technical sense that they are very different functions. 
The superposition principle states that an unlimited number of other possible states of the system will exist, which are of the form called superposition states. The wave function there is taken to be a linear combination of L and D. In other words, you multiply the L wave function by a constant factor, an amplitude A, and the D part of the wave function by a factor B, an amplitude B. The A and B can be any two complex constants whose sum of squared absolute values adds up to unity. If the wave functions L and D happen to be wave functions for stationary states, then S will in general represent a non-stationary state, which is neither L nor D, but something "in between", with quotation marks around the "in between". Consider a system which is in state S. According to the conventional discussion in quantum mechanics, if an experiment can be devised which asks the question, is the system in state S, that experiment is supposed to give the answer yes. For a system in that same state S, if you ask a different question, you get a different answer. And the answer that you will get is that if the experiment that you use is devised to ask the question whether the system is in state L, then the result for any one measurement on the system, whose wave function is really S, may be either yes or no. In a series of repeated measurements for an ensemble of systems, each being in the same state S, the probability for getting the answer yes to the L question is the absolute value of A squared. The result, the absolute value of B squared, would be obtained for the experiments designed to find out if the system is in state D. At this point, we should not worry about how a system is to be put into a given state S. Furthermore, we should not worry about how experiments are to be designed to get information about the state of the system. I gave some examples of these things in my 1968 lecture, which incidentally got published in Physics Today a year later. And I will later discuss some simple examples of such measurements which are relevant to the cat problem. Now we finally come to the cat problem. Do I have any time left? A little. Imagine a box that contains a radioactive source, a detector that records the presence of radioactive particles, a Geiger counter perhaps, a glass bottle that contains a poisonous gas such as hydrogen cyanide, and a hinged hammer, maybe that's not shown well enough in the picture, held above the bottle by a thread which will be cut by a knife if and only if the counter clicks. A living cat in good health is placed in the box. The apparatus in the box is arranged so that the detector is switched on just long enough for there to be a 50-50 chance of a click of the Geiger counter. There's a radioactive nucleus that decays, and the lifetime of that radioactive nucleus is such that for the time of the experiment there's a 50% chance that there'll be a radioactive decay. If the detector records the decay, the thread is cut, the hammer falls, the bottle breaks, and the hydrogen cyanide gas kills the cat. Otherwise, the cat lives. We have no way of knowing the outcome of this experiment until we open the box and look inside. We somehow couldn't hear the click of the counter. What is the problem? We're told that Pandora opened the box. Did her curiosity kill a cat? We are not told. There's no problem for a classical physicist. 
He might say that what happens is the will of God, or perhaps, in the language of an American comedian, he might say it is just the way the cookie crumbles. If the classical physicist is a gambler, he has experienced excitement without calling on quantum mechanics when a card is turned over or a ball falls down after the spin of a roulette wheel. The problem exists only for a person who knows a little quantum mechanics, a little, not too much, just a little, and who believes that quantum mechanics is universally applicable, and applies it to the cat experiment. It's not unlike the situation of somebody who believed that classical mechanics was universally applicable in the years after Newton. A lot of people did believe that, and it changed their whole philosophical and theological outlook in various ways, which were not fully justified as it turned out. Well, such a person who believes those things that I mentioned believes that when the box is opened, the box-cat system is described by a wave function like S, which is the sum of two other wave functions like L and D. One of these, namely L, represents a living cat, along with an undecayed nucleus, an untripped Geiger counter, an uncut thread, a hammer still held up, and so on. And the other, the D part, represents a dead cat with a decayed nucleus and a broken bottle, etc. The constants A and B of the superposition are each taken to have absolute values of one over the square root of two, so that the probability of finding a live cat is 50%. Until the observer looks to see whether the cat is alive or dead, the wave function is a superposition or linear combination of the two wave functions. The cat may be alive or the cat may be dead. If the observer determines that the cat lives, he may be happy to accept von Neumann's sudden change of the wave function, which says that the wave function becomes L, corresponding to a living cat. Well, I've told you what the paradox is. If you think it's a paradox, then maybe I can help you. I now return to the application of quantum mechanics to the cat problem. First of all, I have to tell you some things about quantum mechanics that I omitted before. I told you that dynamical systems can exist in states. However, in general, this means that some process has prepared the system to be in that state. That's a non-trivial matter, and you don't find it happening easily. It is very unlikely that a given and complicated system will have a definite wave function. The best that we can do is try to describe the system by a statistical distribution or ensemble of states. The phrases pure case and statistical mixture are used here. The density matrix ought to be used in the discussion. Not one word about density matrices was used in the Schrodinger article, while von Neumann's book is full of discussions of the density matrix. Unfortunately, he didn't apply it to the measurement problem adequately. A pure case is all too easily converted into a mixture by any very small erratic disturbance, while a mixture can never be put back into a pure case, except by some selection process of a desired member of the ensemble, and that completely wipes out the memory of the past. One is reminded of Lewis Carroll's Humpty Dumpty, who once broken could never be put together again. In the case of the cat problem, the system should certainly include the atoms, nuclei, molecules, and macromolecules contained in the box and all of its contents. 
The model should make provision for the opening of the lid and the disturbance of the system brought about by observations on the opened box and its contents. We will have at least a million, million, million, million degrees of freedom. No such problem can be solved analytically or numerically except in the most trivial cases. A living cat is not an isolated system, hence the cat-box system is not isolated. And even if it were, it could not be assigned a wave function, but only a mixture of wave functions. Even if it could have a wave function, the observation made when the box was opened would disturb the system enormously. At the very least, it would randomize the complex phase angle of the ratio of B over A that characterizes the linear combination wave function over there, the combination of D for dead cat and L for living cat. Such a mixed state is an incoherent mixture of the two states of living and dead cat. And if I were to try to talk about a problem of coin tossing using quantum mechanics, I might be tempted to talk about wave functions that were superpositions of wave functions for heads up and tails up. The same kind of information about phase angles would be important too. I will make a few more remarks about the cat problem. It is a messy, rather pointless problem. Furthermore, it's in bad taste for at least four distinct reasons. First reason, the contemplated treatment of the cat is inhumane. Second, the possible death of the cat serves no useful scientific purpose. Third, there's no mention at all of the well-known fact that cats have nine lives. But in my view, the overwhelming sin, and it's a different kind of bad taste, if it is bad taste, and obviously it's a matter of taste, is to propose a complicated problem when a simple one would do: there's a simple problem that has all of the essential physical features of the Schrodinger paradox and doesn't involve any of the complications. Now, let me tell you one way to simplify the problem. Simply take out all of the, pardon the expression, garbage in the box. You don't need the radioactive nucleus, you don't need the string, you don't need the hammer, you don't need the knife, you don't need the hydrogen cyanide, you don't need the glass bottle; you do need a cat, perhaps. And there's nothing important about having the 50% probability. A probability of one part in a million would be enough. So suppose you go to your actuarial life insurance tables and determine that a night in a box is going to lead, for a typical cat, to a probability of one part in 100,000 that the cat will die. That is, for instance, how many of you would bet with certainty that I will appear for the tenth time in Lindau? It's possible, but by no means certain; I'd like to. Well, anyway, all you have to do is put the cat in the box and not have any cyanide at all. If the cat dies, it's because the cat died. Well, there's some kind of a random process going on there. And it isn't quite the same as the random process that led to the radioactive decay. But as far as the present state of theory goes, it might as well be the same. However, that's still a very complicated problem, because I don't like living cats as a problem to deal with in quantum mechanics, because the system isn't well defined. It isn't isolated. At the very best, you'd have to use density matrices and all sorts of techniques that would be very hard to carry out for 10 to the 24th degrees of freedom. 
We can easily get into a simpler situation, which would still leave Einstein and Schrodinger unhappy. The above expression for the wave function, S equals AL plus BD, has the appearance of the kind of wave function considered in the so-called two-state or two-level problem. In the cat problem, one is foolish enough to think that L and D represent pure case states of the cat. In the simpler case, the wave functions L and D are not wave functions of a complicated system at all, but of a system with one degree of freedom. The theory of a number of very important problems can be reduced to the theory of two-level systems. A particle with a spin of one half provides one such example. The one-particle system may in fact have other degrees of freedom, such as translational motion, but for some applications only the spin degree of freedom is important, so the other degrees of freedom don't matter. There is a very great deal of experimental work on two-level systems, and it is clear that the probabilistic interpretation of quantum mechanics works very well indeed. The whole theory of nuclear magnetic resonance is an example. That's an application of a two-level problem. The recent development of nuclear magnetic resonance tomography could easily win a Nobel Prize or two. The theory of masers and lasers makes important use of two-level problems. The Nobel Prizes of Otto Stern, I. I. Rabi, Felix Bloch, Ed Purcell, Polykarp Kusch, C. H. Townes, Nikolai Basov, Alexander Prokhorov, Arthur Schawlow, and Nicolaas Bloembergen are all related to the two-level problem. I would not be here today without it. We were told on Monday how the two-level problem plays a very useful role in elementary particle physics. Let's go back to an early atomic beam experiment of Stern and Gerlach. They heated silver in an oven, and they made a beam of silver atoms that they passed into a vacuum chamber. I will add a few experimental facilities that they did not have in the 20s, but they could have them now. The atomic beam is directed along the x-axis into a magnetic field. The magnetic field is mostly in the z direction, as much as Maxwell will let it be, but it increases in magnitude as z goes from negative to positive. The beam of atoms is split by this inhomogeneous magnetic field into two beams, one moving slightly up in z and the other moving slightly down. Quantum mechanics provides a simple explanation of the experimental findings. The atoms in one beam have a spin orientation quantum number for the z direction of plus one-half, and in the other beam of minus one-half. Individual atoms can be detected, and it is certain that whether an atom goes into the plus one-half beam or the minus one-half beam is determined by just the same kind of random process that Schrodinger invoked for his radioactive decay. No experiment can be devised to tell beforehand what an individual atom will do. So this, in my view, is the problem that Schrodinger should have been considering; the cat problem just dirties the water. It's bad enough as it is. Furthermore, double or compound Stern-Gerlach experiments can be made. The atoms of the upper beam might be passed into another inhomogeneous magnetic field that will split that beam into two new beams. The numbers of particles in the new beams will depend on the various angles involved in just the very peculiar way that quantum mechanics predicts. In 1949, shortly before he died, Einstein explained quite clearly why he did not like probability in quantum mechanics. 
He wanted a theory of radioactive nuclei which could predict ahead of time when a given nucleus would decay. Perhaps he had the hope that a hidden variable theory could be found. He would have had at least as hard a problem with the silver atoms of the Stern-Gerlach experiment. Now, through the work of John Bell and the experiments of Alain Aspect, that's in English, but I could pronounce it in French if I took a little more time, we know that it can't be done. Quantum mechanics really does work in simple cases, and the probability interpretation is essential. I think it is high time that we recognize that it's inevitable that that should be the case and that we learn to enjoy it. Of course, we have had a genetic code developing for who knows how many million years, which has made us able, and maybe it's a good thing, to be unaware of the microscopic phenomena that are underlying our existence. Well, just a few more sentences. All of those paradoxes that I mentioned, and more, can be discussed in a very good way with quantum mechanics, but it is very important to have a certain degree of good taste. The problem should be made as simple as possible. Everything that's in the problem should be taken into account. If in midstream you want to change the situation a little bit, you've got a new problem to solve, and you'd better be very, very careful about doing that properly according to quantum mechanics. Obviously, there are many unsolved problems. It's amazing how little of the fundamental structure of quantum mechanics has changed in going from 1925 until today. But one of the problems that interests me most is the problem of how we deal with continuous measurements of a quantum mechanical system that is pretty quantum mechanical, but on the verge of being classical. Now, technology isn't quite up to this point yet, but we're getting close. I mean, people are getting close. For instance, the gravity wave detectors that are contemplated will, to some degree, have to be treated with quantum mechanics. And yet they are also highly classical systems. Electrons can be trapped in a kind of macroscopic atom made of electric and magnetic fields, and it's possible to follow the career of one electron for 10 months in a small apparatus like this. And it would have been longer had somebody not turned the wrong switch. That's in the work of Dehmelt and Gabrielse in Seattle, Washington. And the problem of continuous measurements on a macroscopic system is one that I am working on currently, and I think with enough success to please me, but I haven't anything like the amount of time needed to tell you about it today. So thank you very much for listening.
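For reference (a notational sketch added here, not part of the spoken lecture), the superposition Lamb describes verbally can be written as

\[
S = aL + bD, \qquad |a|^2 + |b|^2 = 1, \qquad P(L) = |a|^2, \quad P(D) = |b|^2,
\]

with \(|a| = |b| = 1/\sqrt{2}\) for the 50-50 cat experiment. His central point is that any small uncontrolled disturbance randomizes the relative phase of \(b/a\), so the coherent superposition degrades into an incoherent mixture, which is properly described by a density matrix rather than by a single wave function.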
|
This is a general comment to the long list of Lindau lectures on quantum mechanics given by Willis Lamb. He was one of the many Nobel Laureates who really fell in love with the concept of the Lindau meetings and participated in no less than 19 meetings. Beginning his long series of lectures in 1959, he continued lecturing almost until the very end. I remember acting as chairman for his last lecture, which was given in 2001 and it was quite clear that he regarded himself as at home on the stage in the Lindau lecture hall. If I had not had a meeting program to follow and therefore had to stop him, he would easily have spent another hour giving his lecture ”Quantum Mechanics Revisited”. Because he was really interested in quantum mechanics and wanted to explain it in detail to the young audience, just as a teacher wants to explain something to his students. Of his many lectures, no less than 8 are about quantum mechanics, including lectures on Schrödinger’s cat, quantum mechanics for philosophers, and super-classical quantum mechanics. But his range of interests and topics was even wider, including experimental atomic and molecular physics and several other areas of physics. The text he read for his 1982 lecture, e.g., was first entitled “On the Use and Misuse of Quantum Mechanics”, but was changed to “Quantum Mechanics: Interpretation on Micro Level and Application on Macro Level”. This is a topic, which has historic relevance, starting with the discussions of Albert Einstein and Niels Bohr at the Solvay conferences around 1930, continuing with Erwin Schrödinger’s cat paradox and continuing with the renaissance of quantum measurement theory during the 1960’s and 70’s. Actually, it is still a hot topic today, mainly due to the enormous progress in experimental technique. In 1982, the direct detection of gravitational waves was discussed. According to Einstein’s theory, two heavy stars rotating around each other will give rise to gravitational radiation that will carry away energy from the system and make the rotation slow down. Such an indirect effect was discovered by Russel Hulse and Joseph Taylor in 1974 (Nobel Prize in Physics 1993). In his lecture, Lamb was critical of the theory behind one of the detectors planned to see a direct effect of gravitational waves. Since this effect would be a microscopically small change in length of a macroscopic beam pipe, the plans involved using a technique named quantum non-demolition measurement. Lamb argued that this technique would not work and that the detector would not reach the quantum limit, as proposed. As of today (2014), no gravitational waves have been detected. In 1985 Lamb lectured on “Schrödinger’s Cat”, another topic of historical interest. As is well known, Schrödinger invented his cat paradox to show that the probabilistic interpretation of quantum mechanics led to very strange results that he didn’t believe in. Lamb was critical of this particular aspect of Schrödinger’s work, but since he admired other aspects, he also gave a long list of positive things that Schrödinger had done. One can maybe understand Lamb’s interest in quantum mechanics and appreciation of the probabilistic interpretation better by noting that he described himself as grandson to the inventor himself, Max Born. The reasoning goes as follows: Robert Oppenheimer was a student of Born and a teacher of Lamb. Apparently Lamb had approached Born at the Lindau Meeting in 1959, introducing himself as being Born’s grandson. 
One can maybe understand that Born, then around 75 years old, was not so amused by suddenly finding a new grandson around 45 years old! Anders Bárány
|
10.5446/55143 (DOI)
|
Ladies and gentlemen, one of the exciting things about studying astrophysics, as I hope you're becoming aware from the lectures we've heard today, is that you need to consider physics under conditions which are very extreme, conditions that you simply cannot achieve in a terrestrial laboratory. And this of course makes the physics exciting. And when my research led to the discovery of pulsars around 25 years ago, I had little idea of the wide range of really wonderful physics that would be needed to understand them. In my short talk now, I would like to describe as simply as I can some of the main interesting ideas. Of course astrophysics still has many problems. The basic phenomenon of a pulsar is that you receive with a radio telescope a regular succession of pulses. Here is a rather extreme example. This is a pulsar that was discovered around 10 years ago and has the remarkable property that the periodic time between one pulse and the next is around 1.5 milliseconds. It is in fact an extremely regular pulsar and it's possible to measure that periodic time rather accurately. You've seen this morning some very accurate physical measurements and I think this one ranks alongside those others. One can measure the periodicity to about one part in 10 to the 14, with an uncertainty here of about plus or minus 3 at the end. Now, it's not quite right, as you were told in the introduction, that we decided when the discovery was published that we were dealing with rotating neutron stars. I have to correct that slightly. We considered seriously the possibility of neutron stars. That seemed to be the most likely solution to this problem, but we hadn't tumbled to the idea that they would be rotating. I at that time thought that they would be vibrating, like the crystals that Professor Ramsey spoke about. However, that turned out to be wrong. Now, in understanding neutron stars and pulsars you need a wide range of physics and I've illustrated some of it here. The model we have to produce pulses of this kind is that you need a star, a highly compressed star, where you can fit as much material as the sun contains into a sphere of radius around 10 kilometers. There will be a powerful magnetic field, as I shall explain, and if the star has a magnetic axis which is oblique to the rotation axis then you can have a beam of coherent radiation, and the pulses simply represent the beam rotating past the observer; it is really exactly like a terrestrial beacon or a lighthouse for navigating ships. The physics that you need covers a wide range. I have sketched a few titles here. We need to consider the compressed matter, the physics of the matter which is inside the neutron star. That involves you with superfluids. In understanding the radiation you need to come to terms with relativistic plasma physics around the star. Also radiation theory: to account for the intensity of the radio waves you require something like laser-type emission, which is very unusual in astrophysics. The radio emitter has to be a highly organized system, as I shall explain. Finally, having come to terms with the physics of a neutron star, you can then begin, as Professor Ramsey already mentioned, to use the high stability of pulsar timing to do physics experiments, and you can make useful experiments, as has already been done and as I shall mention, in general relativity. You can I believe show very clearly that gravitational waves must exist, and finally the measurements have a bearing on cosmology.
I'm sure there will be many other features which will be discovered as time goes on. In order to understand the basic physics of a neutron star let me just remind you of some elementary physics and let us consider in the simplest possible fashion what would happen to ordinary material as you subject it to steadily increasing pressure. So we start with normal matter here on the top. Normal solids as we understand them in a physics lab here on earth would be a close packed arrangement of atoms like this with the mass content mainly in the nucleus composed of neutrons and protons and the separation of those nuclei is set by the electron orbits. You can squash material in a terrestrial lab until the orbits are just about overlapping and then you have a solid and a typical density of course about one gram per cubic centimeter. Now the state of matter here is set by the most elementary quantum consideration the particle wavelengths which as you all know is Planck's constant divided by the momentum of the particle. The particle in the box has been discussed several times already by previous lecturers. So if you squeeze matter down let's suppose we can squeeze it without limit and use quantum mechanics to work out what the configuration of that matter will be. Well if you compress the material until it's reached the density of around 10 to the 6 grams per cubic centimeter that's to say about one ton of material in a cubic centimeter then because you're fitting the electrons into a smaller and smaller volume their quantum wavelength has to reduce correspondingly and the electron energy simply rises. It becomes great enough to free the electrons from a particular nucleus so you don't anymore get standard atoms. You get an array of nuclei with electrons moving randomly as conduction electrons do in a metal but in a degenerate state of this sort all the electrons behave like conduction electrons and move randomly through the material. That is degenerate matter and we can find it in astrophysics. Go on squeezing the matter down and what happens then? Well there's an interesting reaction takes place. Under normal conditions in a terrestrial physics lab if you create neutrons then they decay in about 10 minutes or so into a proton, an electron and an antineutrino. That's a very well known and well understood reaction. But under high pressure, under very high compression you see the electron in order to fit into the allowed space has to have an extremely high energy. The Fermi energy has to become several tens of nev's if you squeeze matter sufficiently and this means the electron begins to acquire relativistic energy. The relativistic mass has to be considered in this simple particle relationship here and under high enough pressure the relativistic mass will dominate which means that the normal beta decay of a neutron goes in the reverse direction. Because of the high energy of the electron here when you sum the energies on this side in fact this arrow goes in the reverse direction so if you have a box containing protons electrons under high enough compression you will end up with a box of nearly all neutrons. That can be predicted and in fact was predicted soon after the discovery of the neutron particle. So that one has matter almost entirely in the state of neutrons. Well I'll be saying a little more about it but its simplest properties could perhaps be best understood if I had a sample of neutron matter to show you. I can't quite do that it would be rather uncomfortable if I did. 
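A compact way to write the two relations used in the compression argument above (standard rest-mass values assumed; they are not quoted in the transcript): the quantum wavelength that is squeezed down under compression is the de Broglie relation

\lambda = h / p,

and the inverse of ordinary beta decay,

e^- + p \rightarrow n + \nu_e,

becomes energetically possible once the electron Fermi energy exceeds the neutron-proton mass difference, E_F \gtrsim (m_n - m_p)c^2 \approx 1.3\ \mathrm{MeV}, which is why sufficiently compressed matter converts its protons and electrons into neutrons.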
Imagine that this little lump of material which I have in my hand here was a lump of neutron matter of about that physical size. That in fact is a lump of sugar, but the density of the material would be, as I've shown here, something like 10 to the 14 grams per cubic centimeter, that's to say something in the region of 10 to 100 million tons in a piece this size here. Of course I would be wrapped around it rather quickly by gravitational attraction if that was actually a piece of material and it would not simply sit on the table like this. It would rapidly pass through the earth, oscillate for some time and come to rest at the center of the earth. So that when we talk about solid state and condensed matter physics, condensed matter as we understand it in a physics lab is almost a pure vacuum compared to the kind of condensed matter you have to consider inside a neutron star. So that is the basic physics of compressed material and this has a big relation to what you would expect stars to do as they run short of nuclear fuel. Could I have the slide please? The compressions I've been talking about can be obtained by gravitational forces inside stars and these states of matter are highly relevant to the death throes of a normal star like the sun. Our sun, let's suppose, is up here; when it runs short of nuclear fuel ultimately it will be compressed down to what we call a white dwarf star, a lump of degenerate matter with a density of around one ton per cubic centimeter. It would have a size then about the size of our planet earth. This is well known and white dwarf stars of course are very well understood and we can see that physics going on in the sky. A slightly heavier star, when it runs short of nuclear fuel and gravity begins to compress it, as ultimately it must, will give rise to the formation of neutrons, and in that case the stellar mass ultimately can be contained in a sphere of only some tens of kilometers in radius and the final collapse under gravity is rather violent. You would expect a massive star of this type to form a neutron star near the middle, and the energy and the neutrino flux from that reaction, that inverse beta decay reaction, will blow the remainder of the star which hasn't yet reached the neutron configuration into pieces and will blow it off into space to form a supernova. Still heavier stars are not necessarily stable even in the neutron state of matter; they will become black holes. Well, that in a nutshell is the importance of this new type of matter, compressed matter, with regard to stellar evolution. So the old age of stars, in fact, when they run short of fuel, becomes really an exciting period and it's not right to think of stellar old age as being a dull phase. Could I have the next slide please?
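To make the lump-of-sugar figure quoted at the start of this passage concrete (simple arithmetic using the quoted density, no new data):

\rho \approx 10^{14}\ \mathrm{g\,cm^{-3}} \;\Rightarrow\; m \approx 10^{14}\ \mathrm{g} = 10^{8}\ \mathrm{t} \ \text{per cubic centimetre},

so a piece about the size of a sugar cube, a cubic centimetre or so, does indeed weigh of the order of one hundred million tons, the upper end of the range mentioned.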
Well this picture was made long before pulsars were discovered but after the pulsar discovery within about a year of it it was discovered that this bottom star here was in fact flashing light. The light from that star flashes regularly about 30 times each second so that there you have what must be a neutron star in the place a neutron star would be expected and it fits the whole theoretical concept very well indeed. There is much more supporting evidence here which I would like to explain but time prevents it. So there is one of the few examples of a neutron star you can see. There are a handful like that much weaker, emitting much weaker light than this crab nebula and they're very hard to see but they can just be detected with optical telescopes of high power. Normally neutron stars emit radio but in this case they emit from radio right through to high gamma ray energies. Well now what would a neutron star be like? It's wrong to think of a neutron star just as a ball of neutrons. It involves a lot of interesting physics. There's a huge pressure gradient as you go from the outside of a neutron star into the middle and this causes the structure to vary in an interesting fashion as you go from the outside of the star to the center. So as you come downwards the neutron star, well this one is 15 kilometers it may be somewhere between 10 and 15 kilometers depending on the total mass of the neutron star. Here you have degenerate matter which would probably be the most stable nucleus that would exist which is iron 56, Fe56 so you meet first a very rigid crust where the nuclei, Fe56 form a regular cubic lattice like that and it's degenerate because the electrons already have high enough energies that they're not bound to a particular nucleus and they move randomly through the material. As you come down there's an increase in compression and somewhere down here you get inverse beta decay, you get the protons and electrons effectively being crushed together to form the more stable state neutrons and that begins to happen here. As you come further down you get more and more neutrons with just a trace of electrons and protons. Now, remarkably enough solid state physics predicts that neutrons which are Fermi particles will actually form Cooper pairs and that this state of neutron matter although it's so enormously dense will in fact be a liquid. It will be a quantum liquid, a superfluid, the neutral material analog of a superconductor. This will occur for any temperature much below 10 to the 10 degrees Kelvin. So we simply have the standard results of low temperature solid state or rather superfluid state physics and that is what would be predicted to happen. So as you come down in fact much of the material of the star is in the form of a quantum liquid, a superfluid. The trace of electrons and protons that still reside there, the protons would be a superconductor and the electrons probably normal. Closer to the middle of the star we don't actually know enough yet about the behavior of fundamental particles to know what goes on here. There might be a quite exotic core near the middle where you have stable quark material or perhaps stable pion material we simply don't know. So that in general is what simple physics would predict for the structure of a neutron star as you go from the surface to the middle and remember that it's going to be threaded by a very powerful magnetic field. 
All stars have magnetic fields and if you compress a star like the sun down into a ball a few about 10 kilometers across then the residual magnetic field will still be there. One of the things you learn in astrophysics very quickly is the scale of the physical objects and the conductivity of the material is such that you can't get rid of magnetic flux. It takes longer than the history of the universe to destroy magnetic flux. The eddy currents looking at it from simple physics simply flow forever so you can't get rid of magnetic flux and you would predict that something that the magnetic field would be something like 10 to the 8 teslas, 10 to the 12th gas, an enormously strong field. This does modify atomic structure in an interesting way near the surface here. It turns atoms into more needle like things and I'd like to talk about that but unfortunately there's no time. Well now we do have observations which relate to this. You can do physics which relates to the structure and begin to check it out. Could I have the next slide please? While that while it's there I'll just show you the actual flashing from that pulsar in the crab nebula. I showed you the two stars in the middle of the nebula. It's the bottom right hand one which is the pulsar. By clever photography and using stroboscopic techniques with your telescope you can make the pulsar quite visible on a photograph and effectively if you take frames about 16 milliseconds separated in time you can sometimes see the pulsar turned on and sometimes see it turned off here when the neutron star is not pointing towards you. Could I have the next slide please? Now quite early on in the history of pulsars, this was back in 1969, one of the fairly rapid pulsars detectable only in the southern sky mostly showed a remarkable effect. What is being plotted here is the periodic time of the pulses as a function of date and you see here's the period steadily increasing corresponding to a systematic slowing down of the rotation. Neutron stars can rotate at high speeds but they must slow down because they're losing energy and the kinetic energy they're born with is all the energy they have so if they're losing energy in the forms of radiation or other types of energy particle emissions they must slow down. We expect that to be happening and here you see it. However, there was a discontinuous change around about late February when the period suddenly decreased. In other words, it looked as though the neutron star was suddenly spinning a bit faster. Well what does that mean? Everybody knows in physics that the angular momentum of an isolated body and you can't have a body more isolated than an astrophysical neutron star has to maintain angular momentum but as sudden it goes a bit faster. Well the answer is it has a complex structure. If you have angular momentum distributed through the star and there may be differential rotation then you can couple angular momentum from one part to another and reproduce effects of this sort and that is what we believe is going on in this case. So how do we actually understand this increase of periodicity? Increase of spin rate? Well some elementary facts about quantum liquids which are very well understood from studies of materials like liquid helium close to absolute zero of temperature. If you have a quantum liquid it has zero viscosity. You simply can't stir it. When you have a cup of tea or coffee you simply put a spoon in and swivel it around and give the material angular momentum. 
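A back-of-the-envelope version of the flux-conservation argument at the beginning of this passage (the progenitor values are assumptions of the sketch, not taken from the lecture): if the magnetic flux \Phi \sim B R^2 cannot be destroyed during the collapse, then

B_\mathrm{ns} \approx B_\star \left(\frac{R_\star}{R_\mathrm{ns}}\right)^{2} \approx 10^{-2}\ \mathrm{T} \times \left(\frac{7\times10^{5}\ \mathrm{km}}{10\ \mathrm{km}}\right)^{2} \approx 5\times10^{7}\ \mathrm{T},

of the order of the 10^8 tesla (10^{12} gauss) quoted for a neutron star.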
That's not possible with a quantum liquid. With a quantum liquid, a quantum liquid can't contain sources of angular momentum. Put a little more mathematically this means that the fluid flow, the velocity of the flow V, curl V must vanish. Which means looking at it rather crudely you can't stir a quantum liquid. Well what happens in the neutron star? Of course it has high angular momentum before it's enduring the collapse and as it becomes a superfluid what then happens? Because it is beginning, it is rotating before it becomes a superfluid. Well the physics of that is that you get the creation of vortex lines. You get little vortices of microscopic scale containing normal fluid which has angular momentum which is quantized. You have quantized little quantized tubes of angular momentum and if you just had an isolated tube the flow around it would have to be the flow velocity, the circulation would have to be proportional to the velocity would have to be proportional to one over r and such a system is allowed because the curl of that vector field vanishes. So if you originally had a spinning liquid and it becomes a superfluid you get an array of vortices like this set up within the liquid. These such phenomena have been observed of course in terrestrial physics laboratories. Magnetic flux is also contained in magnetic flux tubes something like this. Well in a neutron star you have the superfluid interior and the rigid crust. Now these vortex lines can pin just in the same way as solid state physics they can pin to nuclear sites. So as the star gradually slows down you must have the array becoming less and less dense. You need less and less vortices and that you could reproduce by having the vortices move steadily outwards. If these vortices move gradually outwards as the star spins down then that is what you need to reproduce macroscopically what is going on in a massive rotating star gradually slowing. However, as it slows if the vortex lines are pinned rigidly to the surface here then of course you set up a strain. These vortex lines begin to bend they want to move outwards but they're held back by the by the pinning and it could happen for example that a batch of vortex lines wanting to migrate steadily outwards actually breaks this surface here or it could come uncoupled in other ways. What I've drawn is a rather approximate model of course these vortex lines actually thread up into the lattice here and it's a more complicated situation than I've sketched. But you can have a sudden discontinuous migration of vortex lines outwards and of course when they reach the outside shell they impart their angular momentum to that and can actually spin it up a little bit they can speed it up. So it's processes of this kind which actually tell us that we are dealing with a complicated rotating object and if you look in detail and as much work has been done by solid state physicists particularly in the United States on this problem you can account and understand most of the observational phenomena. It's rather difficult to explain this in detail but when you watch carefully the timing of pulsar then you get these periodic you get these changes of pulse rate suddenly but then they tend to creep back they relax back to a state near the original spin rate not quite but but but they may be near it. 
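The quantization behind this vortex picture can be written compactly (assuming, as is standard for a neutron superfluid, that the circulating objects are neutron Cooper pairs of mass 2m_n):

\oint \mathbf{v}\cdot d\boldsymbol{\ell} = n\,\frac{h}{2m_n}, \qquad n = 1, 2, \dots,

and an array of such vortices with areal density n_v = 2\Omega/\kappa, where \kappa = h/(2m_n), mimics rigid rotation at angular velocity \Omega. As \Omega decreases the array must thin out, which is the outward vortex migration, and the sudden unpinning of a batch of vortices, described in this passage.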
This is a relaxation phenomenon; it has a time constant which can vary from days to years according to the actual structure of the neutron star, and these time constants can be measured by physical observation. They come out about right: without a superfluid in the center of a neutron star you would not reproduce the right relaxation effects. Coupling would be instantaneous if you had anything but a quantum liquid, so that you have very good evidence here that this actual structure of a neutron star exists and you can use it to study how superfluids at a density of say 100 million tons per cubic centimeter would actually behave. So there's a great deal of useful physics you can do here. Of course not all pulsars do this and I'll have more to say about the really steadily rotating pulsars later on, because you can use them as clocks. Now that part then is fairly nicely confirmed by observation, but why is it that neutron stars actually emit radio waves? We don't really understand this as well as we should. I think we're near a solution now, but we don't understand it terribly well; we have to produce a beam of coherent radiation given a rotating ball of neutrons with a powerful magnetic field. Well, some elementary physics helps here. You know that if you take a normal cylindrical bar magnet in the physics lab and spin it, then you'll get a voltage between the center of the magnet and the outside. If there is a slipping contact here, then with a simple lab experiment you can generate a millivolt or something of the kind. Do the same thing with a neutron star: you've got a magnetic field of 10 to the 8 tesla, you can spin it up to speeds of up to several hundred revolutions per second, and you get an enormous potential difference between the pole and the equator, which can be as large as 10 to the 18 volts, maybe even slightly more, depending on the neutron star. So what does that potential difference do? Well, if you had a symmetrical case like this you would get charges forming on the surface of the star and that gives rise to a magnetosphere. Charges are literally flung into space by these forces. Could I have the next slide please? You in fact will get around the star a charge-separated magnetosphere of this kind, extending perhaps up to several thousand kilometers from the neutron star according to the spin rate; you're going to get a charged magnetosphere in which some charges are positive and some charges are negative and they will be trapped in the magnetic field, except near the magnetic poles where particles can escape to infinity along field lines which never return to the star. This happens because you must pass through at a certain distance something called the velocity of light cylinder, where if the magnetosphere continued to co-rotate like the magnetosphere of the earth or the magnetosphere of the planets you would be trying to force material at speeds greater than c, and that is not allowed, so you must get some sort of a wind and particles breaking off into space just like that. Well, within that general scheme you can begin to understand why pulsars should radiate.
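An order-of-magnitude version of the rotating-magnet estimate given here (round-number values assumed):

\Delta V \sim \Omega B R^2 \approx (2\times10^{3}\ \mathrm{s^{-1}})\,(10^{8}\ \mathrm{T})\,(10^{4}\ \mathrm{m})^{2} \approx 2\times10^{19}\ \mathrm{V}

for a star spinning at a few hundred revolutions per second, and correspondingly less for slower pulsars, consistent with the 10^{18} volts and more quoted. The velocity-of-light cylinder mentioned at the end of the passage sits at the radius where co-rotation would reach c,

R_\mathrm{lc} = c/\Omega,

which for rotation rates of tens to hundreds of revolutions per second comes out at a few hundred to a few thousand kilometres, matching the quoted extent of the magnetosphere.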
If you take a rather more complicated model where you tilt the magnetic field sideways you've got that magnetosphere around the star then mostly the magnetosphere actually short circuits the huge electromotive force that I described but close to the magnetic poles this won't happen because material is escaping along open field lines and it doesn't as it were short circuit all that potential difference. Well in such a region you have a powerful acceleration and charges finding themselves in that region will be accelerated in the presence of the magnetic field they will emit gamma rays the gamma rays will form electron positron pairs the electron positron pairs will be further accelerated they will emit gamma rays and this process is a cascade a kind of avalanche and you can get copious electron positron pair production above the poles which forms an electron positron plasma. Well it will of course be a relativistic plasma and one has to consider a relativistic mass of particles and so on it's not easy to work this out but a wind in that plasma this plasma is probably flowing outwards at somewhere near the speed of light plasmas are notoriously unstable phenomena as we all know from terrestrial physics charge bunching can take place and these charge bunches can emit coherent radiation by the process of curvature radiation they are guided along the magnetic field and they cause radiation because of their acceleration. Well there are many details you can fit on to a model of this kind I've only dealt with that very very crudely but that in principle is how we get radio emission from a pulsar. One of the more interesting pulsars which was mentioned by Professor Ramsey this morning is rather famous now it was discovered around 1974-75 by Professor Taylor and his colleagues now at Princeton it's a binary pulsar in which you have two neutron stars orbiting each other with a period of about seven hours just over seven hours the orbital radius here is about the same as the solar diameter so it's a very compressed compact object and it's the wonderful tool laboratory tool for testing general relativity. One of the things you can observe very clearly because this pulsar is an accurate clock you can use it for timing and calculations of this orbit with very high precision is orbital precession one of the classical tests of general relativity was the precession of the orbit of the planet Mercury. Mercury precesses because of general relativistic effects by about one arc minute every hundred years a very small effect but detectable. This pulsar orbit here is changing by about four degrees per year it's an enormously more powerful effect and the orbital precession is highly measurable and that checks general relativity very very accurately. Doing the full relativistic calculations such a system ought to be radiating gravitational wave energy that is a prediction of general relativity but as you know gravity waves have not yet been detected by actual gravity wave detectors however if that system is losing energy by gravity waves then the orbits the binary orbit ought to be shrinking because it's losing energy and therefore getting a little bit faster. Well Professor Taylor has been measuring this orbit ever since the discovery of that pulsar in 1975 and here you see the timing measurements where you're looking at the difference from the systematic behavior such as would be accounted for by gravitational waves. 
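To put the two precession figures quoted in this passage side by side (arithmetic only, using the speaker's rounded numbers): the binary pulsar's orbital precession of about 4 degrees per year is

4^\circ/\mathrm{yr} = 240\ \mathrm{arcmin/yr} \approx 2.4\times10^{4}\ \mathrm{arcmin\ per\ century},

compared with roughly 1 arcmin per century for Mercury, i.e. an effect some 10^4 times larger, which is why this system gives such a sharp test of general relativity.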
There's an orbital phase shift telling you the orbit is actually shrinking the pulsing the binary orbit is getting more rapid. Now those points are the observations and the curve line through it is the prediction using Einstein general relativity theory if the system is radiating gravity waves according to general relativity and the fit you see is absolutely superb. If you're a physics professor looking after students who are learning physics in the lab and they fit points to a curve and they come with results like this you're a little bit suspicious you think maybe they're too good they've been adjusting the points a little to come down onto the curve here but Professor Taylor is a skilled observer he's not cheating and this I think is really certain I would regard it as 99.99% confidence that we really have to be detecting gravity waves. Well Professor Ramsey this morning was talking about pulsars as accurate clocks and I would just like to show one more overhead here how good really are they if you squeeze them hard. Well the first pulsar I showed you on my view graph the millisecond pulsar is an extremely accurate clock and you can start comparing it with atomic time with the banks of cesium clocks that Professor Ramsey was talking about and here you see over the years for which that pulsar has been available the comparison between pulsar time if you like and atomic clocks and if you compare the pulsar with one set of atomic clocks maintained at the national bureau of standards in Boulder you can see the scatter here you can see the residual errors in comparing those two clocks and there's a certain scatter here and you can see systematic variations and in fact they're not agreeing perfectly as you might expect the scatter is random. If you compare the pulsar with the world's best clock that's to say you put together all the cesium clocks in the world and adjust them to give the best mean rate of time as Professor Ramsey explained this is carried out in Paris that's the world's best clock and if you plot the millisecond pulsar against that I think any physicist would agree that the scatter of points here is more systematic and more satisfactory than comparing than the comparison with one set of atomic clocks. So it looks as though the pulsar is at least as good as the best atomic clocks available when you put them all together. Well is it better? I wish we knew I think we will know in a few years time. We need more millisecond pulsars if you have clocks the only way to know which clock is better than another one is to stick them all together and see which ones agree and which ones disagree. If we find the pulsars are giving better residuals a greater accuracy than the atomic clocks then we know we're winning. Of course atomic clocks are getting better as you heard from Professor Ramsey but the neutron star clock the pulsar clock has a certain advantage it will be up there for a million years. It is not subject to funding problems and who knows maybe next year we shall be setting our watches by looking at the sky. Well I've outlined some of the really interesting exciting physics you have to consider when you're looking into the sky at what goes on in astrophysics. There are many challenging problems waiting to be solved out there. It's a field where we want the best scientists with new ideas for both observation and theory and I hope that some of you out there in the audience will in the future be helping in this wonderful quest. Thank you very much.
|
This lecture was Antony Hewish’ third lecture in Lindau, and by this time almost twenty five years had passed from his discovery of the pulsar, for which he and Martin Ryle won the Nobel Prize in Physics in 1974; the first Nobel Prize in the field of astronomy. In this lecture, Hewish builds a convincing case that astrophysics is exciting because of the extreme conditions it takes place in, and reveals the properties and processes of dying stars. A pulsar is a quickly rotating neutron star with a powerful magnetic field, emitting radio waves at a stable frequency. The beam of coherent radiation sweeps around the axis of the pulsar at very precise intervals (the pulsar discovered in 1967 had a sequence 1.33 seconds apart). Particularly since the discovery of millisecond pulsars, the concept arose of using these celestial objects as accurate clocks. What would happen if we squeezed a lump of material here on Earth? The atoms of the material would become densely packed, until eventually their orbitals would overlap. If the material is compressed more and more, without end, the quantum energy would rise and the electrons would start to move randomly. At the scale of extreme physics, this degenerate matter will form neutrons (hence the name neutron star). When a star becomes low on nuclear fuel, it explodes, and, depending on the mass of the star, forms white dwarves, supernovae and its resulting neutron stars, or black holes. Neutron stars have an unimaginable density of 10 to 100 million tons per cubic centimetre, the mass of the Sun crammed into a volume smaller than the Earth. As Hewish notes, old stellar age is not a dull phase. What do you need for extreme physics to take place? What is a neutron star like? Why do neutron stars emit radio waves? Hewish answers these complex questions so that they can be understood by a general audience, and creatively describes other-worldly phenomena, such as matter that has needle-like atoms, quantum liquids that cannot be stirred and plasma flowing at the speed of light. Hanna Kurlanda-Witek
|
10.5446/55134 (DOI)
|
So ladies and gentlemen, let me give you the menu of the talk. Before I begin, I will make a personal remark. I left the field of the Mössbauer effect some 25 years ago. So I'm talking about neutrino physics today because, as I said, I left the field: I fooled around for 15 years in the Mössbauer effect and then I'd had it. I moved to France and I was director of the German-British-French high-flux reactor, the biggest in the world. And after that, I decided to change my field because I was tired of the competition. First of all, there were hundreds of laboratories involved in this. There are even today 2,000 publications per year in the old field. So I left it and I went to neutrino physics. I first looked at the neutron experiments. We had done thousands of experiments in the meantime and none of them appealed to me sufficiently well. And when the neutrinos came up, I immediately caught on fire. And that's why I talk about neutrino physics today. So let me first talk about the neutrino sources. There are essentially five sources we have. First is the nuclear fission, the nuclear reactors. The second is the nuclear fusion. It's the sun. You use this first equation and just turn it around. You put it on the left side, what you have on the right side there. But this doesn't work. It works only within nuclei. It doesn't work for individual protons because the proton is lighter than the neutron is. And for that reason, it doesn't work the ordinary way, but it works within nuclei. The third type of neutrinos I will also talk about is the atmospheric neutrinos, which are measured by the Japanese in the Super-Kamiokande experiment. You go first through pions in the atmosphere and then you end up with neutrinos. The fourth and fifth, which I won't talk about, are the supernova explosions and the high energy accelerators. I can easily talk for two hours on the neutrinos, so within half an hour it's only a gist of what you in principle have. The supernova explosions give you all kinds of neutrinos. It's electron neutrinos, it's muon neutrinos, and it's tau neutrinos. But they are rather rare. It's about one every 400 years, so they are rather rare. The high energy accelerators are more likely to give neutrinos. They essentially give muon neutrinos and muon antineutrinos. But I won't talk about them. So the second description I want to make is a few words about the detection of neutrinos. How do we detect them? Look only at the red ones, which come from the right. The left one is the ordinary neutrino decay, a neutron decay, but the right ones are more interesting. It's largely the chlorine and the gallium experiment which are responsible. The chlorine is only working at higher energies. The gallium is working also at lower energies, which matters for the talk a little later. The chlorine is very cheap, the gallium is very expensive, therefore you do the gallium rather rarely and the chlorine quite frequently. Rarely means twice: it has been done twice so far in the gallium case. You also can detect neutrinos by Cherenkov radiation, but this means you have a high threshold there, a very high threshold, and you do it by the Cherenkov radiation, either by detection of solar neutrinos by neutral current reactions or by charged current reactions, as you see down here. The disadvantage in this is the high energy threshold. It's about 6.5 MeV at the moment, which is a disadvantage, a drawback.
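For reference, the two radiochemical capture reactions referred to here can be written with their well-established thresholds (the threshold values are standard numbers, not quoted in the talk):

\nu_e + {}^{37}\mathrm{Cl} \rightarrow {}^{37}\mathrm{Ar} + e^- \quad (E_\nu \gtrsim 0.81\ \mathrm{MeV}), \qquad \nu_e + {}^{71}\mathrm{Ga} \rightarrow {}^{71}\mathrm{Ge} + e^- \quad (E_\nu \gtrsim 0.23\ \mathrm{MeV}),

which is why chlorine sees only the higher-energy part of the solar spectrum while gallium also reaches the low-energy neutrinos from the main fusion reaction.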
The advantage is you get an instantaneous measurement, you really see the sun, while in the other experiments here, you have to wait for a long time and you only collect. You don't know where the sun is, it's really collecting from everywhere, and that is a drawback. Let me make also a few historical remarks, for reasons which I come to. In 1930 Wolfgang Pauli introduced the neutrino hypothesis. He found that if you only use electrons in going from one nucleus to another nucleus by the emission of an electron, you don't have enough. Since it's a continuous spectrum rather than a single line, he found that you have to add something, and he discovered the electron neutrino, which we now call the electron anti-neutrino for reasons I won't go into. In 1934, Fermi was already using this old Pauli theory, using a vector theory of the weak interaction, which is still valid today, if we limit ourselves to lower energies, which means nuclear energies, and also if we take into account the parity violation, which was discovered only in 1957 and so was not known by then. So if we use, instead of the vector theory, the vector minus axial-vector theory, it's still valid today. But not at the very high energies, which we nowadays use very often in the higher energy accelerators. In 1956, Reines and Cowan were the first to detect the neutrinos directly, in a direct experiment, by using this reaction which is shown here, by using neutrons and positrons as a result of the incidence of neutrinos on protons. It was the first discovery, in 1956, only 26 years, you see, after Wolfgang Pauli had introduced the neutrino hypothesis. He never thought at the time when he was introducing the neutrinos that they could ever be measured, but they were measured in 1956 by Reines and Cowan. Today we measure them routinely. In 1962, Lederman, Schwartz and Steinberger then discovered there was a second type of neutrinos, which is shown here. Then in 1992 an experiment was done which showed that there are three types of neutrinos, which we call today the electron neutrino, the muon neutrino and the tau neutrino. In 2000, let me skip that, in 2000 the Fermilab was also discovering, probably, the tau neutrino; for the first time they have seen it, by an experiment which I won't go into. So let me first say a few words on the masses of photons and neutrinos. In photons we have a gauge invariance principle for the electromagnetic interaction. The photons therefore have zero mass in vacuum. With the neutrinos such a recipe doesn't exist and therefore it could well be that the neutrinos have a mass or they don't have a mass. If they have a mass, it's a two-fold particle or a four-fold particle which you can have. You either can have a spin up and spin down in some direction, which makes a factor of two, and you can have another factor of two if you have the neutrino and the antineutrino identical. This means you only have a factor of two, or, if they are non-identical, you have a factor of four. Therefore we call the two cases either Majorana neutrinos, which are two-fold, two-component neutrinos, or Dirac neutrinos, which have four components. A few examples are given here. You have the chlorine which is bombarded by the neutrinos. The chlorine has the advantage that it's very cheap. They usually use the solution which you put your clothes into to clean them. So you could easily make a measurement, but the disadvantage is that you need rather high energies.
It is well known that it works with neutrinos; whether it works with antineutrinos is unknown. So with chlorine and neutrinos it works very well; with antineutrinos it was never observed. The main reaction in the sun is also written down here. It's essentially the same as here. You have the baryon and the lepton number conservation. The baryon number is not interesting here, but the lepton number is conserved here. Two pluses and two minuses. This means two times one and this minus two times one. So the lepton number is conserved here. The protons are anyhow conserved. What I am saying here would not hold exactly because of the mass, but the mass is very small in the neutrinos and therefore we can essentially forget about it. The theorists prefer Majorana neutrinos, the two-component neutrinos, that means neutrinos and antineutrinos are the same. They prefer Majorana neutrinos because in the grand unified theories, in the GUTs, you have the case where you have Majorana neutrinos and not Dirac neutrinos. Whether this is right or wrong has to be seen. Nobody has been up yet at the high energies. Let me now talk about neutrino oscillations in vacuum. By this I have here made a simple model in which I have two neutrinos only. Usually we have three. I mentioned the electron neutrino, I mentioned the muon neutrino, I mentioned the tau neutrino, of which we haven't seen anything yet. We get away usually with two neutrinos. It starts with blue ones, for instance, and they become red ones and then they become blue ones again; that's what we call neutrino oscillations: they become red ones again and so on. So all you have to do is place a detector somewhere at a distance from the neutrino source, either from the sun, which is far away, or from a reactor, which is rather close by. So you go from neutrinos to antineutrinos, or from one type of neutrinos to another type of neutrinos, come back to the same neutrinos, go to the other type and so on. So this is just a model for two neutrinos here. You can in principle do the same thing for the three. You have neutrino oscillations by going from one to the other in the two neutrino approximation. And you have an appearance experiment which involves three parameters. It's the difference of the masses squared, in electron volts squared, with which the whole thing here is made. You have the real formula here and you have the approximate formula there. It's the masses of the neutrinos which go in. It's the distance which goes in; this is the distance between the source and the detector, either the sun or a reactor, as I said. And you have the energy which goes in. So you can easily measure the three quantities. You measure two of them and the third one is then left over and therefore it has to fit. You also, instead of the appearance experiments which are here, where you go from one neutrino to another one, have the disappearance experiments, where you go from the same neutrino to the same neutrino. Of course this is at time zero and this is at some other time. So you first begin, let's say, here and later on you make your experiment there. Therefore you get the disappearance experiment: you go from the electron neutrino to the same type of neutrino, or from the electron anti-neutrino to the same type of anti-neutrino of the electron type. This is very simple if you have a two neutrino approximation. Now to give you a little theory: the mass eigenstates, which are those here, and the weak interaction eigenstates, which are those here, are correlated by a linear matrix.
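A minimal sketch of the formulas being pointed to on the slides (standard two-flavour notation assumed; the mixing angle \theta is not named in the transcript): the weak eigenstates are linear combinations of the mass eigenstates,

\nu_e = \cos\theta\,\nu_1 + \sin\theta\,\nu_2, \qquad \nu_\mu = -\sin\theta\,\nu_1 + \cos\theta\,\nu_2,

and the appearance probability after a distance L at energy E is

P(\nu_\alpha \rightarrow \nu_\beta) = \sin^2 2\theta \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \approx \sin^2 2\theta \, \sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV^2}]\;L[\mathrm{m}]}{E[\mathrm{MeV}]}\right),

with the disappearance probability P(\nu_\alpha \rightarrow \nu_\alpha) = 1 - P(\nu_\alpha \rightarrow \nu_\beta); the three parameters are indeed \Delta m^2, the distance L and the energy E.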
It's a linear relation simply for quantum mechanical reasons, because in quantum mechanics the unitary matrix times its Hermitian conjugate equals one. We know we can only measure the probabilities in quantum mechanics, but in reality we have the linear relation between the mass eigenstates and the weak interaction eigenstates. The mass eigenstates, incidentally, are stationary states. They are the states which propagate freely in vacuum according to this formula, and the weak interaction states are just those which are decaying the ordinary way, the electron neutrino, the muon neutrino and the tau neutrino. What you measure is of course not this relation directly, but the probabilities between the neutrino of alpha type, which is electron, mu or tau, and the neutrino of beta type, which is again electron, mu or tau. Now we have done two experiments. We have first done them at reactors. I was only involved in the Gösgen experiment. They blew up things later on. They made everything bigger, which means more money. We had only a distance of between 38 meters, which was the closest we could get on the outside of the reactor, and 64 meters, which was the limit of the fence which was around the reactor. So compared to Gösgen, at Chooz and Palo Verde they blew up things, they made them bigger. Instead of one reactor, they use two or up to four reactors. Chooz is on the Moselle and Palo Verde is somewhere in the States. It's really the Chooz experiment which is more important. The States were a little late in doing this. It's really Chooz which is important. They blew up things: instead of one reactor they use four, and instead of a few tons of material they use 2,000 tons of material, so they can go further out. But they have never seen a result at reactors. That's the important thing. We also made the sun experiment in the Gallex project in Italy, close to Rome. This is in the Apennine mountains. You have to go underground, because overground you get lots of wrong reactions. You have to go really deep underground; we are about 1,800 meters underground, which means 3,500 meters water equivalent. This is the Gallex experiment which was done in Italy. There is also the SAGE experiment in Russia, in the Baksan Valley, in the Caucasian mountains. They did a similar experiment. It's nearly the same, but not quite. Let me not go into the details. But the Homestake and the Super-Kamiokande: the Homestake was the first. Ray Davis was really the first neutrino pioneer, who did the first experiments. He used several tanks of chlorine, he used the chlorine reaction, therefore he could only get the high energy branch of the whole neutrino game. The same is true in the Super-Kamiokande; there it's even higher, it's a 6.5 MeV threshold as I said. So you measure above 6.5 MeV; you can only get the boron reaction in these two cases, while you get the entire reactions in these two cases here. So the first two experiments were done only on the boron, by shooting the neutrinos on chlorine. This was relatively cheap. The third and fourth experiments here were shooting them on gallium, which was very expensive. You shoot the neutrinos on the gallium and you get the germanium, and the uninteresting electron, which we have not found yet, on the side. Let me give you a gist of what such a reaction really means. We have about 30 tons of gallium in total; we use it in the form of gallium chloride.
So we have about 100 tons of that material and we get one reaction per day, with a flux of nearly 10 to the 10th or 10 to the 11th neutrinos falling per square centimeter and per second on the site. We get only a reaction rate of one per day in the 100 tons of material which we have. This is the Gallex experiment, and the Russians have about the same reaction rate. Now we all find in the solar neutrino case a neutrino deficit. Why this is so is a question. Why do we have that? There are two possible reasons for that. One is the nuclear physics of the Sun, and there's another effect which has to do with the mass. Let me not go into this. The astrophysics question is: is the standard solar model, as we use it today, correct? Well, it's probably correct, because we have the helioseismology. We have an additional way of looking at the Sun: the Sun is oscillating, and they have made thousands of measurements of that oscillation and everything is fine. So the standard model is probably correct, and therefore we cut out the astrophysics explanation and are left with the effect which has to do with the mass. So there are two successive experiments which are in the line, which are Borexino and the Sudbury mine. There has been a lot about the Sudbury mine, the SNO experiment as we call it, in Canada, in recent days in the newspapers. Forget about that. That's essentially because they want to collect money. They have to measure for years before they get it, because in the case of the SNO experiment they want to get the total flux of the neutrinos which leaves the Sun. They get this irrespective of oscillations. And this is a crucial experiment; for that reason they need the money, and they need the money anyhow. But it's a crucial experiment in which they can only measure the neutral current. They measure the neutral current and therefore they get, irrespective of oscillations, how many neutrinos are being created in the Sun and how many are leaving the Sun. The number created in the center of the Sun and the number leaving the Sun are about the same. In a big star like the Sun, the neutrinos enter on one side and leave on the other side, and they hardly feel that it is there. So if a big star like the Sun has a phenomenon like this, you can imagine that it is very difficult to measure that phenomenon with a small detector of a few hundred tons, which is what you can at most achieve on Earth. We only get one reaction per day, in spite of the fact that enormous amounts of neutrinos are coming from the Sun to us. So that is what is going on with the Sudbury neutrino experiment, which needs much more time. The second one is the Borexino experiment. It has to do with a side reaction in the Sun. It's the beryllium-7 reaction which matters there. We have the boron-8 reaction and all the other reactions which we have in the Sun; the process I showed you before for the Sun is only an average process. It breaks down into many, many pieces, many, many details. From the boron experiments and from the gallium experiment which we did, we know already that hardly anything is left for the beryllium-7, and before you get to the boron, you have to go through the beryllium-7. So since hardly anything is left for the reaction there, it means that it must oscillate to zero, really to zero. And that is why the Borexino experiment is being done. Borexino has been under construction in the Gran Sasso for many years. The Americans are somewhat late. The Germans were faster, but they have a smaller part.
The Americans have a big amount of money which comes in. Each year it's postponed by one year, so I don't know when it starts. It's supposed to start now next year, but whether it happens or not, we will see. So Borexino is looking at this reaction in a fancy experiment. The Sudbury mine experiment, as I said, measures the neutral current. It tells you something about the Sun. Borexino is telling you, well, essentially also about the Sun, but hardly anything is left there for the beryllium, and therefore they will have to do this. Let me briefly mention the Japanese experiment, the Super-Kamiokande. NDE means nuclear decay experiment, and SUPER means it's a blow-up of the old Kamiokande experiment. The Super-Kamiokande neutrinos come from protons hitting the atmosphere, and this way you get neutrinos; you get largely muon neutrinos. You get also some electron neutrinos which are coming down. There are no tau neutrinos in the experiment. You measure the ratio of the data which you took divided by the Monte Carlo calculations, because with that ratio a lot of uncertainties which you otherwise would have drop out. So this ratio is one if you have no neutrino oscillations; it is smaller than one if you have neutrino oscillations, because the neutrino oscillations just show up in the data and they don't show up in the Monte Carlo experiments. So the ratio presently is, well, let me not go through the details, or maybe one detail here. The atmosphere is very small, it's about 20 kilometers. So they compare, for instance, in one experiment which is done this way, the neutrinos coming down and the neutrinos coming up through the Earth, and they have seen something. The neutrinos coming down essentially see nothing; for them it's only the 20 kilometers which the atmosphere is around here. The neutrinos coming up through the Earth see, compared to that, the size of the Earth, which is rather large. It's about 6,000 kilometers radius, and the diameter is twice as much. So coming down, you assume nothing happens. Coming up, you assume a lot happens. And the result of the experiment, I don't even have that here for strange reasons, the result of the experiment is that you have nothing with the electrons. You have essentially muon neutrinos, as I said, coming down into the cavity, which is about 1,000 meters below the Earth's surface; you have nothing unusual coming down, but you have a depletion coming up, and either you go from the muon to the tau neutrinos, or you go from the muon to the sterile neutrinos, which I didn't mention yet. The neutrinos are normally only left-handed ones, if you have neutrinos, and right-handed ones, if you have antineutrinos; the sterile ones means the other possibilities, which exist, which don't show up on Earth in the normal way, but they show up in the abnormal way in the neutrino oscillations. So that's about what I wanted to say. I have much more to say in principle, but I have to stop here now. Thank you.
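The ratio described here is usually written as a double ratio (a sketch of the standard definition; the measured value is not quoted in the transcript):

R = \frac{(N_\mu / N_e)_\mathrm{data}}{(N_\mu / N_e)_\mathrm{MC}},

which equals one in the absence of oscillations and falls below one if muon neutrinos oscillate away, for example into tau or sterile neutrinos. Comparing downward-going events (path length of order 20 km of atmosphere) with upward-going events (path length up to the Earth's diameter, roughly 12,000 km using the radius quoted above) then tests the dependence on the distance L directly.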
|
Nine days before Rudolf Mössbauer held his eighth and last lecture on neutrino research in Lindau, the Canadian Sudbury Neutrino Observatory (SNO) had published its first results to “explain the missing solar neutrinos and reveal new neutrino properties”, as it stated in a press release[1]. “There has been lot of the Sudbury mine, the SNO experiment as we call it, in recent days in the newspapers”, Mössbauer commented. “Forget about that, that is essentially because they want to collect money. Their experiment needs much more time.” SNO’s first results, however, proved to be right, and therefore its project director Arthur McDonald shared the Nobel Prize in Physics 2015 with Takaaki Kajita from the Superkamiokande (Super-K) in Japan “for the discovery of neutrino oscillations, which shows that neutrinos have mass”. Electron neutrinos produced in the sun can change their identity midflight and arrive on earth as muon or tau neutrinos. The first Super-K results had been published in 1998, and Mössbauer dedicated the last minutes of his lecture to this Japanese experiment, vaguely indicating that “they have seen something”, but leaving the decisive question open whether neutrinos do have or do not have mass. It was too early to give a definite answer. The SNO experiment, in fact, had just confirmed and specified Super-K’s previous results: “Super-K told us just the bank balance, but SNO could actually see the record of deposit and withdrawals.”[2] Rudolf Mössbauer was a brilliant speaker and an excellent teacher. So this comprehensive account of the history and current status of neutrino research (his only one in English) is well worth listening to as all his Lindau lectures on this topic since 1979. At the age of 32, Rudolf Mössbauer had been awarded a Nobel Prize in Physics 1961 for the discovery of the effect of recoilless nuclear resonance, which bears his name and allows for the spectroscopic measurement of extremely small frequency shifts in solids. “I fooled around for 15 years in the Mössbauer effect and then I’ve had it”. For more than 25 years he worked at the forefront of neutrino research, which he puts into its historical context and strategic perspective in some technical detail in this lecture. In 1977, Mössbauer initiated the first European experiment to detect neutrino oscillations at the high-flux research reactor of the Institut Laue-Langevin in Grenoble. With a higher sensitivity, he continued with such experiments at the Gösgen nuclear power reactor in Switzerland during the 1980s. In the 1990s, he became involved in the Gallex experiment, whose aim was to observe solar neutrinos with a detector located inside the Gran Sasso mountain in Italy. He briefly describes these experiments and also mentions Gallex’ successor experiment Borexino. When Mössbauer turned to neutrino research in the 1970s, it was still regarded an exotic and somewhat esoteric field. His interest to join it was sparked by the calculations of John Bahcall and the experiments of Raymond Davis. Both had revealed the solar neutrino problem in 1968 by demonstrating that only one third of the electron neutrinos produced in the sun arrive on earth. Could it be that neutrino oscillations were the cause of this phenomenon, as Bruno Pontecorvo suggested in 1969? This was a fascinating question for Mössbauer. “Ray Davies was really the Neutrino pioneer who did the first experiments”, he said in this lecture. 
One year later, Davis would finally share one of the two Nobel Prizes in Physics 2002 for this achievement. He was 88 years old then and at that time the oldest Nobel Laureate in Physics. Joachim Pietzsch [1] http://www.sno.phy.queensu.ca/sno/first_results/ [2] Ed Kearns, quoted by Jayawardhana R. The Neutrino Hunters. Oneworld Publications, London, 2015, p. 107
|
10.5446/55146 (DOI)
|
Well, thank you for your warm reception and I should like to thank, like other speakers, Countess Sonja Bernadotte and the Curatorium for inviting me, but also the ladies of the staff, Froschilin who had to write me at least a dozen letters to organize it all, and the very kind ladies who look after us here with motherly care and make sure that every wish, every little need of ours is immediately fulfilled. I think they contribute very much to the success of this meeting, so very many thanks. Now in 1949, Linus Pauling, Itano and Wells published a sensational paper in Science. The title was "Sickle Cell Anemia, a Molecular Disease". The paper was about an inherited blood disease which afflicts mainly blacks, sickle cell anemia, and they showed that this was caused by a change in the electric charge of the hemoglobin molecule. Sicklers, sufferers from the disease, had a hemoglobin which contained two fewer negative charges than normal hemoglobin. They established this by a new technique which Arne Tiselius had invented in Uppsala, electrophoresis, and for which he had gained a Nobel Prize. They didn't have such a machine, so in order to investigate the problem Pauling got his collaborators to build the first Tiselius machine in Pasadena, and with that they made the discovery. Now in Cambridge I was working on hemoglobin, I was very excited by this discovery, and I was then joined by a young biochemist, Vernon Ingram, who decided to have a go at this problem and see what the change in electric charge is really due to, and after a few years' work he discovered that it was due to the replacement of a single pair of the 574 amino acids in the hemoglobin molecule: a single pair of glutamic acids was replaced by a pair of valines. And that meant, you have to realize, that the replacement of two oxygen atoms by two carbon atoms led to a lethal disease, an extraordinary discovery at that time. Imagine this enormous molecule containing 10,000 atoms, and you replace two of these atoms and that leads to a fatal disease, fatal because this change made the hemoglobin crystallize inside the red cells and made the red cells so rigid that they get stuck in the capillaries and clog up the circulation. Now, with Vernon Ingram, this was the first time that the cause of a genetic mutation was discovered. You see, we had no idea before what a genetic mutation is. And then we realized the genetic mutation causes the replacement of an amino acid, and of course this posed in acute form the question of the genetic code, which was still unknown then and was solved a few years later by Crick and Brenner. So the discovery gave rise to an enormous amount of research, and since then hundreds of thousands of different amino acid replacements in proteins have been found which may or may not give rise to diseases, and of course they are all due to faults in the DNA which occur in replication. Now at that time Ingram had to analyze the amino acid sequence of the protein, but now, thanks to recombinant DNA technology, this is no longer necessary and you just sequence the DNA of the gene, and that's much simpler, and you discover the cause of genetic mutations much faster. But there was one severe inherited disease which resisted all attempts to find its cause, and that is Huntington's disease, Veitstanz auf Deutsch, a severe dominantly inherited neurodegenerative late onset disease which first manifests itself by uncontrolled movements, mood disturbances, then leads to dementia and finally to death. 
It's a terrible disease which afflicts people in middle age. Its frequency is about four in a hundred thousand among European populations, and a few years ago its cause was totally unknown. Then by a tremendous collaborative effort of 61 molecular biologists, biochemists, geneticists, medical people, by 61 scientists in eight different universities in the United States and Britain, the gene was discovered and was published in a great paper in Cell, signed by the Huntington's Disease Collaborative Research Group, and they found that the gene stretched over a tremendous length of DNA. It covered 67 exons, about which Gilbert talked yesterday, 67 exons spread over 180 kilobases of DNA, and the gene codes for one of the largest known proteins, a protein of over 3,140 amino acids in a single chain. What is it? Six times larger than hemoglobin. So, they found the gene; so what was the difference between the gene of normal people and the gene of the patients with Huntington's disease? It was in the length of a repetition of codons. In normal people, starting at codon number 18, there is a long repeat of only CAG, so it goes CAG, CAG, CAG, CAG, anything from about 10 to 35 of these, and in the sick, in the Huntington's disease patients, this was extended. So here you see a slide of the base sequence of the DNA and written underneath the amino acid sequence of the protein in the single letter code, which you may not all be familiar with, but never mind. So there are 17 mixed amino acids and then here at amino acid 18 you see Q, Q, Q, Q, and Q stands for glutamine. So there's a long stretch of only glutamines coded for by this repetition of CAG, CAG, CAG, and then there's a sequence of prolines, and then the amino acid sequence continues in that large protein; the sequence shows no homology with any known protein, so it gives you no clue whatever what its function might be. But the amazing discovery was that the only difference between the gene in the normal people and that in the patients consisted in the length of the CAG repeat. Up to about 37 repeats people remain healthy, and with more than 40 they get the disease, and the longer the repeat, the earlier the disease sets in and the more severe the disease. So you see here we had a completely new cause of a genetic disease: not an amino acid replacement or, as often happens in recessive diseases, the deletion of a gene or the putting out of action of a gene, but the elongation of a repeat of a single amino acid. Now what does it mean? You know, I'm known as the hemoglobin man, and, yes, I haven't said that this paper appeared in Cell in March 1993. So I'm known as the hemoglobin man, and a little before that I worked on an obscure hemoglobin, the hemoglobin of a parasitic worm, Ascaris, which interested me because it has a very high oxygen affinity. A group in Antwerp determined its amino acid sequence and found it is made up of eight subunits, and in each of these subunits there was a peculiar sequence of a kind that I hadn't seen before, which meant that along a straight strand of polypeptide chain there would be an alternation of positive and negative charges. So on one side it would be plus minus plus minus plus minus and on the other side also plus minus plus minus. So this was a polar zipper. 
So then I wondered what other, there must be some other proteins that contain polar zipper and because of that I came across some proteins in Rosophila in the fruit flyer about which we shall, we shall, we'll speak later today and then there I found several proteins which had long repeats of only glutamine and so I wondered what does this mean and just out of adult curiosity I built an atomic model of a polyglutamine chain and I brought this along to you for you. I'm afraid it will be hard to see from the back but people at the front at least will see it, you see what I've got here is two polypeptide chains marked out by these white strands and here on the right and left are the glutamine side chains and what I found was that if you have two chains of only glutamines you can tie them together by hydrogen bonds between the main chain amides which are along here and also on each side where there's side chain amides so that such a chain of only glutamines also acts as a polar zipper and I wrote a little paper about polar zipper and polar zippers sent this in and this was impressive and I thought that was the end of the story just the curiosity you know and then I read this paper in cell and it suddenly occurred to me that my observation of my model might possibly provide the clue to the molecular mechanism of the disease and I got very excited about this but I mean nobody would pay any attention if I just say I built a molecular model and look this is what it would be like. We had to do some experiments so I asked a chemist in our laboratory to make me a synthetic polyglutamine so he made a chain of 15 glutamines in a row and but this would have been insoluble so he put two aspartic acid residues one end and two lysines the other end they have electrically charged so they made this soluble and then I we looked at a solution of this 15 met with a spectroscopic method circular dichroism which actually tells you what kind of form such a chain would take whether it forms straight chains linked together like this or whether it forms helical chains depending on the shape of the chain you get different spectra and they've all been characterized and are well known. Now when we looked at the spectra of this solution we found that indeed the chains have the form that my model predicted they form it forms straight chains which tend to tie together like that and now that was it in acid solution at pH 2 when neutralization at pH 7 this polymer slowly precipitated in the form of tiny little worms and an x-ray picture of these worms showed that indeed the protein has this structure but the chains don't run along the length of the worm they are they are wrapped around the length of it like that. 
Now if we look at the next slide, here I have an atomic model of this, sorry, a computer drawing of this model, which will help those of you at the back who couldn't see this very well. So you see here you have these straight chains, you see four such chains of glutamines, and the dotted lines show the hydrogen bonds which hold them together. So here would be CO on this side and NH on the other side, and the CO combines with the NH to form a hydrogen bond which has a strength of between three and five kilocalories, and these bonds you see hold the chains together. And then you must imagine sticking out from the screen on one side would be the side chains, and they also form hydrogen bonds, which you see there, and going back into the screen would be another lot, and they again form hydrogen bonds like that. So that was the predicted structure, and the chains you see written here are 4.8 angstroms apart, which is important because that's the signature of the structure. And now, having got so far, I decided I could stick my neck out and published a paper in the Proceedings of the National Academy making this suggestion: extension of the glutamine repeats may cause the affected proteins to agglomerate and precipitate in neurons; symptoms may set in when these precipitates have reached a critical size or have resulted in the blocking of a critical number of neurons. Well, I made this suggestion, but there was not the slightest evidence in support, because people of course had cut sections of the brains of the patients who died from the disease and had examined them with immunostaining. So they made antibodies against the huntingtin protein and they stained the sections with the antibodies, and they found that the protein was in isolated dots in the cytoplasm and there was no evidence of any aggregates whatsoever. So I thought maybe this is all nonsense, but then in August 1997 suddenly there was a complete turnaround. Gillian Bates at Guy's Hospital in London had succeeded in producing the disease in mice. She had made transgenic mice. She introduced a fraction of the human gene into the fertilized eggs of mice, and she introduced two kinds of genes, that is, first of all let me explain, she didn't introduce the whole gene but only the first exon, and the first exon codes for the glutamine repeat, for the proline repeat and for some of the adjoining amino acids. So she introduced this gene, and she introduced it with 18 CAGs and with about 150 CAGs, and she managed to breed these mice and, to her astonishment, the transgenic mice with the 150 glutamines began to show the symptoms of the human disease. They first showed the uncontrolled movements and seizures and a general loss of weight; they didn't develop as well as normal mice and they died prematurely. On the other hand, the transgenic mice with the protein with only 18 glutamines remained healthy and showed no abnormal symptoms whatever. So this was a terrific discovery, because this was the first time that the human disease had been reproduced in an animal, which gave one hope that one might possibly find a treatment. Now she handed these mice to a lecturer in anatomy at University College London, Stephen Davies. And he cut thin sections through the brains of these mice, and one morning he came to see me and he burst into my room and he was so excited that he began to tell me his results before he had even shut the door, because he had found in the brains of these mice, the next slide please. There we are. Yes. 
He found that in the cell nuclei of these mice there were aggregates of protein which stained with an antibody against this N-terminal peptide of the huntingtin protein. So clearly they are composed of it, and they consisted of fibers and little granules. So he found what he called intranuclear inclusions in the neurons of the cortex and the striatum in the brains of these mice. And he also found staining in the nuclear pores and in neurites; these are sort of little extensions of the nerve fibers which extend in different directions. So he was terribly excited about this, as he may well have been, because this suggested that maybe there was something in this prediction that I had made. Now his discovery stimulated people at the Harvard Medical School, Marian DiFiglia and Neil Aronin, to look again at the sections of the human patients, and they found that these nuclear inclusions were there and that they had been overlooked. In fact a paper appeared in Advances in Neurology in 1979, so many years earlier, by another group of Americans who showed that in the cell nuclei of Huntington patients there was some protein which they could stain with uranyl acetate, and they found it had this sort of structure, but they didn't have any immunology; of course the gene wasn't known, the protein wasn't known, they didn't know what the protein was made of, and the paper was forgotten, because you know it was a discovery before its proper time, and of course afterwards people remembered that it had been there. So now, yes, the next slide shows you this human nuclear inclusion which DiFiglia and her associates discovered, and again you see, if you look closely, you find that it's a mixture of granules and little fibers. Now since that time seven other neurodegenerative diseases have been discovered, all due to extension of glutamine repeats in different proteins. They give rise to the degeneration of different neurons; for instance there's one, Kennedy disease, which is due to an extension of the glutamine repeat in the receptor, in the androgen receptor, and which affects motor neurons, while Huntington's disease affects neurons in the cerebral cortex and the corpus striatum. These proteins have nothing in common, but there's one feature, an astonishing feature, which they all have in common: in all but one of the cases repeats with fewer than 37 glutamines are harmless, the people remain healthy; repeats with more than 40 glutamines produce disease. So this means that the extension of the glutamine repeats must be associated with, must cause, a change in structure of this repeat, and that change of structure starts the process which finally leads to aggregation. Now what seems to happen is that the first stage seems to be that it makes the protein susceptible to proteolytic attack. DiFiglia also used immunological methods to find out what's there, and she tested this with an antibody against the first 17 amino acids of the huntingtin protein, and this aggregate was stained with it, and then she tried an antibody against a peptide about a sixth of the way along, around amino acid 500, and it failed to stain it, suggesting, as was later confirmed, that only a fragment of this large protein gets into the nucleus, and the fragment is associated with ubiquitin, which is a signaling protein that attaches itself to proteins that are unstable, that are in the process of unfolding, and prepares them for digestion by other enzymes in the cell which then split it into individual amino acids. So this protein, also the mouse protein, stains with antibodies against ubiquitin. 
It also stains with antibodies against chaperones and the proteasomes, the complexes that digest proteins, and probably various other proteins may be associated with it. Meanwhile, Gillian Bates stimulated a group at the Max Planck Institute for Molecular Genetics in Berlin to take up the problem in a different way. Erich Wanker and Scherzinger and their colleagues introduced this same exon into E. coli bacteria and they actually manufactured this protein in E. coli by recombinant DNA technology. Now they made it with a varying number of CAGs, so a varying number of glutamines, with 20, 30, 51, 83 and 122 glutamines. And what did they find? They found that the proteins with 20 and 30 glutamines remained soluble, but the proteins with longer glutamine repeats aggregated and formed precipitates. So there is a precise correlation between the length of the glutamine repeat that causes the disease and the length of glutamine repeat that causes the protein to precipitate when you make it in vitro. But now other questions arose. Detloff at the University of Alabama asked himself if any protein with long glutamine repeats would produce neurological symptoms. So he had the ingenious idea of taking a protein which normally occurs in brain, hypoxanthine phosphoribosyltransferase, a terrible mouthful, but it's not such a big protein. To this protein he attached a long glutamine repeat, a repeat of 146 glutamines, and then he made mice transgenic for this protein, and indeed the mice developed seizures, behaved abnormally, had shortened life spans, and when he sectioned their brains he found that in the pyramidal layer of the cerebral cortex about a third of the cells developed the same kind of nuclear inclusions. So this shows, you see, that it doesn't matter what sort of a protein: any protein that is expressed in brain with a long glutamine repeat will produce these symptoms, produce neurodegeneration and these aggregates. Nancy Bonini at the University of Pennsylvania in Philadelphia had another ingenious idea. Why not introduce such a protein into the geneticist's favorite pet, into Drosophila, the fruit fly. So she did this; she took not the Huntington's disease protein but one of the spinocerebellar ataxia proteins and introduced a fragment of this into Drosophila, and the good thing about Drosophila is that so much is known that you can actually target this gene to a particular organ. So she targeted it to the eyes, and promptly the eyes began to show malformations, nuclear inclusions, late onset degeneration and cell loss. Well, this is not only interesting in itself, you sort of ask yourself so what, but of course it's marvelous because Drosophila would be the ideal animal for testing possible therapies of the disease. Yes, what about possible therapies? Various approaches have been started, but you might be interested that in the group in Berlin at the Max Planck Institute for Molecular Genetics, Wanker persuaded Merck to put at his disposal their entire library of 160,000 organic compounds, and he built a robotic machine with which he can test whether any of these compounds inhibit the aggregation of the protein made in E. coli bacteria, so that he can test hundreds if not thousands of such compounds very quickly, within a few days. And when I saw him in April in Berlin he had actually got one compound which did show such an inhibitory function, so that there's hope that something might be found which prevents this kind of aggregation. 
There's one other very interesting and exciting feature about this discovery, and that is that it has brought a remarkable unity into neurodegenerative diseases, because Alzheimer's disease is due to proteins precipitating in the form of neurofibrillary tangles, not actually within the neurons but between the neurons, and Parkinson's disease is due to aggregation of another protein, synuclein, into what are called Lewy bodies, after the German medical man who discovered them earlier in the century. So these are the little balls of this protein forming in the neurons. And then, you know, you heard a lot about prion proteins, Creutzfeldt-Jakob disease and bovine spongiform encephalopathy, BSE, recently, and they are due to the aggregation of prion proteins in a mysterious manner which we do not understand. So you see, each of these diseases is due to precipitation of proteins, which makes me think that most, possibly all, neurodegenerative diseases are due to protein precipitation, as indeed are many other diseases; you know, the sickle cell disease with which I started this talk is due to precipitation of hemoglobin in the red cells. So the great problem which faces us in the next century is to find ways and means of preventing this aggregation, and I think this is one of the great challenges to you, the young people who are attending this meeting. Thank you very much. Thank you.
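As a note on the repeat-length threshold described in the lecture (healthy below roughly 37 glutamines, disease above roughly 40), determining the repeat length amounts to finding the longest uninterrupted run of CAG codons in the gene. A minimal sketch, assuming an in-frame DNA string as input; the thresholds and labels are simplifications for illustration only, not a clinical rule.

```python
import re

def longest_cag_run(dna: str) -> int:
    """Return the largest number of consecutive CAG codons in the sequence."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

def classify(repeats: int) -> str:
    # Thresholds quoted in the lecture, used here purely for illustration.
    if repeats < 37:
        return "within the normal range"
    if repeats > 40:
        return "in the disease-associated range"
    return "intermediate"

example = "ATG" + "CAG" * 45 + "CCGCCACCG"  # hypothetical exon-1-like fragment
n = longest_cag_run(example)
print(n, classify(n))  # 45 in the disease-associated range
```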
|
Huntington's chorea, formerly also referred to as St. Vitus' dance, is a severe hereditary neurodegenerative disease occurring in approximately four out of 100,000 persons. Ordinarily it does not become manifest until a person's middle years. It begins with uncontrolled movements, changing to variable moods, dementia and death. Six years ago, a group of 61 American and British researchers at eight universities discovered the gene responsible for the disease. It codes for an enormous protein of more than 3140 amino acid residues in a chain. In normal protein, this chain contains a series of up to 37 coupled glutamines. The only difference between the normal and the diseased proteins is the length of the glutamine series, which numbers fewer than 37 in healthy and more than 40 in diseased persons. The longer the glutamine series, the earlier the onset of the disease. By coincidence I found that long series of glutamines attach to each other like zippers, and I thought that this might supply the key to the molecular mechanism of Huntington's chorea. In my lecture, I will talk about the consequences of this mechanism in this and related hereditary diseases and their associations with Alzheimer's, Parkinsonism, and diseases caused by prions.
|
10.5446/55147 (DOI)
|
Ladies and gentlemen, in their hope to be able to offer something which will be of interest, not only to economists, but also to national scientists generally, I've chosen a problem to discuss, which although the most from my study of economic problems seems to me to apply in a much wider field, in fact, everywhere where the increasing complexity of the phenomena is which we have to deal forces us to abandon the hope of finding simple explanations of cause and effect and have to substitute an explanation of the evolution of complex structures. I like to speak in this connection of the two problems of the spontaneous formation of orders and evolution. So it is usually an evolutionary process by which alone we can account for, but account to a very limited extent, with the existence of certain types of structures. In this sense I can agree, as was Sir John Hicks said yesterday, that the degree to which in these sciences we can make predictions is very limited. What I like to say in this connection is that we are confirmed to pattern predictions to the likelihood of the formation of certain structures without ever being able to make very special prediction of particular events. In this sense, as Sir John Hicks indicated, we are scientists of a second order, but that we have in common with such an enormous field as the biological theory of evolution, which on the strict tests which Sir John has yesterday suggested will also not be a science, since it's not able to make specific predictions, and the same is true in our fields. Now the whole interrelation between the theory of evolution and our accounts of the existence and formation of complex structures of interaction has a very complex and paradoxical history, and I will allow myself, even if it delays the length of my lecture, to tell you a little about the historical evolution which in itself has had profound effects on our attitude to these phenomena. Of course in recent times the application of evolution to social phenomena has been rather unjustifiably discredited when social scientists had to learn from Charles Darwin and develop something known as social Darwinism, as if the idea of evolution were originally an idea of the biological sciences, while in fact there is a much older tradition of evolution in the study of society, and it can be demonstrated that it was Darwin who borrowed it from the social sciences and not the other way around. There's another deep connection which I want to say a few words. Our attitude to our actual phenomena, particularly our judgment of various moral views, is very closely connected with an age-old tradition which starts in antiquity with no less a person than Aristotle, who has given us a wholly a evolutionary conception of social institution which through its effect on Sanctomas Aquinas has become the attitude of a large part of Christianity towards everything which amounted to a growing development of civilization because he had developed the define as gold what was necessary to preserve an existing order. Is that ever asking himself the question how was it ever possible that if all our duty was to provide for the preservation of what is, that mankind ever greatly developed. It has even been asserted by a modern economic historian that Aristotle could not have seen the problem of evolution and the problem of the connection of evolution with a operating market economy because at the time when he lived the market economy as we call it, which is a result of evolution did not yet exist. 
Now on two points I can give you rather interesting brief evidence, since my assertion that Aristotle did not possess any conception of evolution, which prevented him from ever understanding social problems, has recently been remarkably confirmed by the grand latest history of the biological sciences, one of the greatest histories of any modern science which I have recently come across, Ernst Mayr's "The Growth of Biological Thought", in which he, to my great satisfaction, since this had been part of my argument for a long time, explicitly argues that the idea that the universe could have a development from an original state of chaos, or that higher organisms could have evolved from lower ones, was totally alien to Aristotle's thought. To repeat: Aristotle was opposed to evolution of any kind. Now that had a profound effect on his views about society, which we have inherited from him. The view, as I have already suggested, that that was good which served the preservation of existing institutions, without his ever asking himself how in fact in his very lifetime Athens had about doubled in size, how a largely increased population had arisen; but he detested the market, as so many intellectuals did. But I will just give you another illustration of how lively the market at the time was, which comes from a contemporary of Aristotle, one of those writers of comedies of his time of whom only fragments are preserved, but that particular one is especially amusing because Eubulus, as his name was, with the even then common attitude of intellectuals to commercial affairs, expressed his contempt for the role of the market in a few lines which have been preserved, in which he tells us: you will find in Athens things of all sorts and shapes for sale in the same place, figs, summonses, grapes, turnips, pears, apples, witnesses, sausages, honeycombs, roses, medlars, chickpeas, water clocks, myrtle, lambs, blueberries, laws, impeachments, lawsuits, carts, beestings and the ballot box. Now, in a society in which the comedians could make fun of the market in such a form, clearly the market was most active. Now why did Aristotle not see it, and what effect had it? Well, the fact is that at that time the idea of evolution had hardly yet arisen in any field except two. The original insight of man into the fact that his institutions have gradually grown, not as a result of intellectual deliberate design but as a matter of a slowly growing tradition, existed even then in two fields, law and linguistics. At least the ancient Roman students of law and linguistics were fully aware that these institutions had not been deliberately designed by the human mind but had grown by a process of evolution. And that remained the concept of evolution for the next two thousand years. But in the eighteenth century things began to change. I mean the first remarkable instance is at the very beginning of the eighteenth century, when a Dutchman living in England called Bernard Mandeville began to study the formation of institutions and already pointed out the paradigms, or paradigmata as I prefer to call them, of these phenomena, the two classical ones of law and language, but adding to them morals, money and the market. 
David Hume was the great figure who took over from Mandeville this idea and created the tradition of the Scottish philosophers, and particularly, and basically relevant to what I am going to say, the deep insight that human morals are not the design of human reason, an insight of double importance. A, it followed for him that if human morals were not the design of human reason, it also followed that reason or science did not allow us to judge human morals. We could never derive moral conclusions from purely factual statements. An idea which is nowadays usually ascribed to Max Weber, but which ever since the time of David Hume was well established. But in this connection of course there arose the problem of what our morals really are due to, and the conclusion from his principle is not that science has nothing to say about morals at all, but that the questions which we can legitimately ask are limited. A question which we can still ask, to which we can demand an answer from science, is: what are the morals which we have inherited due to? How came it about that we developed these morals rather than others? And secondly, and clearly connected with it, a second question which is also a scientific question: what have these morals done to us? What has been the effect of mankind developing this particular kind of morals? The field in which I as an economist had to pursue these problems, a field of enormous importance, is the field of the morals of property, honesty and truth. They are all morals which are not the creation of human design, of which in purely human terms we cannot scientifically say whether they are good or bad, unless we look at them from the point of view of what effect they had on the development of humankind, on the number of humans and on their civilization. This remains a basic question. At the same time, we must be aware that the very tradition of several, or as we usually say, private property is that part of our morals which is the most disputed, disliked and politically opposed. And that is due to the fact that it truly is a tradition which is neither natural in the sense of being innate in our physical makeup, nor artificial in the sense of being deliberately made by human reason. Because, as the Scottish philosophers of the 18th century so clearly understood, man had never deliberately made his society. Indeed, when we look back at history, we find that these traditions were never rationally justified but were preserved in a variety of groups or communities because they were confirmed by supernatural beliefs; not scientific reasons, but beliefs which I think should respectfully be called ceremonial truths, although not truths in the sense of scientific, demonstrable truth, but truths in the sense of making men actually do what was good for them, good for them in the sense of helping them to maintain ever larger numbers of themselves, yet without being able to give the actual reasons why they ought to do so. These truths stand between the natural insights which are innate in us and the rational insights which we construct from our reason; they belong to the intermediate field of tradition, which is the product of selective evolution, in many ways similar to the selective evolution of which for the first time we got a full theory developed by Charles Darwin and the Darwinian school, but in fundamental respects different from it. 
I referred before to the fact that it was a great misfortune that the social scientists about a hundred years ago had to borrow the idea of evolution from Charles Darwin, and borrowed with it the particular mechanism which Charles Darwin, or rather neo-Darwinism later, had provided as an explanation of this process of evolution, which is very different from the mechanism of cultural evolution, as I shall call it. Now that was a misfortune, and a quite unnecessary misfortune, due to the fact that it seems that by that time the social scientists had forgotten what was a much older tradition in their own field and weren't even aware that Charles Darwin developed his ideas largely by learning of the idea in the other field. I believe recently it has even been shown that the crucial idea came to Darwin's mind in 1838 when he was reading the book The Wealth of Nations of Adam Smith, which of course was the classical exposition of the Scottish idea of evolution and which seems to have been the decisive influence even on Charles Darwin. Darwin himself admitted that he was influenced by the school, but he usually mentioned Malthus as the influence which he recollected; but his notebooks now show that what he was actually reading at the critical moment seems to have been the Wealth of Nations of Adam Smith. Now the result is that this first great success in developing an actual theory of evolution in the field of biology made people believe that this example had to be followed. Well, I might just insert here another illustration of my story which I've only recently discovered, but which perhaps more clearly than anything else confirms my basic assumption that the conception of evolution derives from the study of society and was taken over by the study of nature. I can demonstrate very easily that the term genetic, which today is an exclusive term for biological evolution, was actually coined in Germany in the 18th century by men like Herder, Wieland and Schiller, and was used in the quite modern sense by Wilhelm von Humboldt a long time before Darwin. The Humboldt passages are so interesting that I believe they are worth quoting. Humboldt spoke in 1836 about the fact that the definition of language can only be a genetic one, nur eine genetische sein kann, and goes on to argue that the formation of language, proceeding successively through many stages like the origin of natural phenomena, is clearly a phenomenon of evolution. All that was already there in the theory of language 30 years before Darwin applied it to the natural sciences. Yet it had been forgotten, or at least ignored, outside the two classical instances of language and law, to which we may now add economics, including the market and money; and when it was reintroduced by the social Darwinists, all the parts of the explanation of the mechanism were also taken over. So my next task will clearly be to distinguish what the social theories of evolution and the biological theories of evolution have in common and what they do not have in common. I should begin with the much more important differences before I turn to the crucial but very confined similarity between the two. The differences are the following, and I am now concentrating on the account of the mechanism of biological evolution given by neo-Darwinism. Darwin was on some of these points himself still not quite sure, particularly on the first point I shall mention. Cultural evolution depends wholly on the transmission of acquired characteristics, exactly what is absolutely excluded from modern biological evolution. 
If one were to compare cultural evolution with biological evolution, one would have to compare it with Lamarckian rather than with Darwinian theory. Number two: the transmission of habits and information from generation to generation in cultural evolution does of course not only pass from the physical ancestors to the physical descendants, but in the sense of cultural evolution all our predecessors may be our ancestors and all of the next generation may be our successors. It is not a process proceeding from physical parent to physical child, but proceeding in a wholly different manner. Thirdly, and that perhaps is even more important: the process of cultural evolution undoubtedly rests not on the selection of individuals but on the selection of groups. It is still disputed, I believe, what role group selection plays in biological evolution. There is no doubt that in cultural evolution group selection was central. It was groups which had developed certain kinds of habits, even certain kinds of complementarities between different habits within the same group, which decided the direction of cultural evolution, and in that respect it is fundamentally different from biological evolution. Now this implies, what I shall call number four, perhaps it's already implied, that of course the transmission in cultural evolution is not of innate characteristics but of what has all to be learned in the process of growing up. The contribution of natural evolution to this is the long period of adolescence of man, which gives him a long chance of learning; but what is transmitted in cultural evolution is taught or learned by imitation. Now that has produced an immaterial structure of beliefs and opinions to which recently Sir Karl Popper has given the name of World 3. A world of structures which exist at any moment only because they are known by a multiplicity of people, but which yet, in spite of their immaterial character, can be passed on from generation to generation. And finally, cultural evolution, because it does not depend on accidental variations and their selection but on deliberate efforts which contribute to it, is infinitely faster than natural evolution can ever be. And that in the time of 10 or 20,000, perhaps 40,000 years in which modern civilization has grown up, man could have developed all that he has developed by the process of biological evolution is wholly out of the question. In this respect the much greater speed of cultural evolution is decisive. Now, having got here, you will ask what similarity there remains if the two things are wholly different altogether. Well, there are two fundamental similarities between the two which justify, up to a point, the application of the same name, evolution. The first is that the principle of selection is the same. In biological evolution and in cultural evolution what is being selected is what contributes to assist man in his multiplication. It assists him in growing in numbers, just as those physical properties are selected which help the individuals to survive. So the cultural properties which are being selected are those which help a group which has adopted them to multiply faster than other groups, and thus gradually to displace and take the place of the others. And there is a second close similarity which is very important but generally not understood, and it may even surprise you at first when I mention it. Both biological evolution and cultural evolution do not know any laws of evolution, laws of evolution in the sense of necessary stages through which the process has to pass. 
There is a wholly different conception of evolution which asserts since Hegel and Marx and similar thinkers that there are discover the laws or sequences of stages through which the evolutionary process must pass. There is no justification for such assertion. Much worse they are in conflict with the other ideas of evolution. This biological evolution and cultural evolution consist in a mechanism of adaptation to unknown future events. Now if it is an adaptation to unknown future events it is wholly impossible that we should know laws it must follow because this development is by definition determined by events which we cannot foresee and not know. Now that brings me to what ought to have been my central subject but for which I am afraid I am not as much time now as I would like to have. What is the essential subject of the cultural evolution to which I have touched such importance? As I indicated before there are two general characteristics which all civilizations which have survived and expanded have so far possessed and against which all revolutionaries have at all time protested. This is the tradition of several I prefer to call it or private I prefer to call it several property and the tradition of the family. Now I have not time here consider any further the tradition of the family. There are much more difficult problems because I believe there are changes in our factual knowledge which will probably lead to fundamental changes in the tradition of the family. So I reconfine myself wholly to the proposition of private property which of course is that tradition against which for two thousand years all revolutionaries have directed the efforts. Actually all religious reformers which very few exceptions invented a new religion which abolished several property and usually also the family. But none of these reformers or none of these revolutionary religions which constantly creep up have ever lasted for more than a hundred years and I think the most recent one of that type which you also must regard as such a revolution, a religion opposed to property and the family that of communism has not yet lasted for its hundred years and there very much doubt whether it will reach its hundred years. But all the great religions which have come to expand and to be held by an ever increasing part of the world have these two things in common that they are found private property and the family. None of these three monotheistic religions rather the two or three great eastern religions all agree on these two features and my contention is it is because they are firm and preserved those traditions in their groups that these groups were selected for indefinite expansion because they made possible the multiplication of the people who obeyed more who was dictated by them. Now such religious support was indispensable because if it is true what is my main and starting contention that the morals of private property and those of the family are neither natural in the sense of innate, more rational in the sense of designed, which is a great problem by any group should long enough stuck to a habit in order to give the process of chance of it to extend and select only groups which for long periods believed in what I have meant to call symbolic truths. I couldn't remember the word a moment ago. Many traditions which succeeded in making hold to certain symbolic truths would be loved to maintain morals whose advantages they never understood. 
It implies the assertion that the institution of private property was never due to the fact that it was a small proportion of a population who could see how private property benefited them defended their interests. It had only exist the much larger numbers and views who knew that they benefited from private property supported these feelings. And this was possible only due to religious beliefs which taught it to them. This is what I meant before by an I said. The old civilization to beliefs which in our modern opinion we no longer regard as true, which are not true in the sense of science, scientific truths, but which nevertheless wear a condition for the majority of mankind to submit to moral rules whose functions they did not understand. They could never explain in which indeed to all rationalist critics very soon appear to be absurd. Why should people respect private property if this private property seems to benefit only the few people who have it in societies where very soon very much larger numbers are existed than those in the primitive agricultural society still a majority who earned the instruments of their production. That creates a situation which is historically very interesting. Did mankind really owe its civilization to beliefs which in the scientific sense were false beliefs and further to beliefs which men very much disliked? Because they can really not very much doubt. And if the thesis is true, mankind was civilized by a process which is intensely disliked. They being made to submit to rules which it neither could understand nor liked. But I believe that this is perfectly true. And I believe I can claim that before the birth of the science of economics, before the 18th century began to explain why the market society could arise only on the basis of institution of private property, it would have been impossible for mankind ever to multiply as much as it did. And equally, it was only in the 18th century essentially David Hume, Adam Smith and his contemporaries who did clearly see that the mechanism of selection was that those groups were selected which thanks to the institution of private property were able to multiply faster than others. And this is of course a criterion which again has become very unpopular between the economists and only some of the economists understand. It's a present time, the general attitude the other is, to think that the multiplication of mankind is a great misfortune, that nothing we have to fear more than the too rapid multiplication of mankind. And we are constantly painted to horror of a society in the near future which would be a society of standing only. Now there are several things to be said about this. I must abbreviate it, or this could be a subject of another very interesting lecture. The first is, that the fear of an increase of population leaving to impoverishment is wholly unfounded and that there's never in history yet happened that an increase of population led to people becoming poor. The contrary impression is due to the fact that the concept of poor and rich is mentioned in terms of average, is in other terms of individuals. It is true that economic progress based on the private property and the division of labor leads to a faster increase of the poor than of the rich, which the result that average incomes may indeed fall as a result of the population. But nobody need to have to come poor for this reason. 
It only means that the poor have increased more than the rich, that therefore the average has pulled down, but nobody has been pulled down by the result of this development. As an explanation of this, both of the actual fact and the mistake which derives largely from malters is that with an increase of population human labor must also be subject of decreasing returns. That would be true in a world like the one on which malters was largely thinking, where human labor was uniform and all people were nearly all people were working in agriculture and in such a society increase, indeed an increase of population would lead to the reduction of the product per unit of labor. But the great benefit of an increase of population is that it makes possible a constant differentiation of human activities. An increase in the quantity of man is not an increase in the number of one factor of production, it's a constant increase of new additional and different factors of production which in collaboration can produce much more. It seems indeed that in a way the increase of population where it leads to an increase in civilization brings increasing rather than decreasing returns. Let me repeat, there is no evidence that ever in history an increase of population has led to a real impoverishment of the existing population. There are two or three special cases which I must mention. It has, of course, happened that when other circumstances destroy the source of income which made an increase of population possible, great poverty resulted. The classic case, of course, being Ireland in the 19th century, which owned the potato, had to increase its population to something like four times what it had been before, when the potato disease struck, removed the source of the income and led to the result that this greatly increased population could no longer be nourished. Another case which I must consider separately and that I think ought to give cause to serious reflection, that there are instances, and we are now creating instances, where an increase of local population is due not to an increase of that population to produce more, but to foreign help. And that, in instances, there probably will never be states or food for a larger home produced population in these places. I can give you as instances a much quoted instance of the region in the south of the Sahara, the so-called Sahel regions, which are clearly not able now to feed their population and which we are exhorted to help to feed. This is the result, of course, that we cause their further increases of population, which will be our responsibility, because from all one knows, they will never have a opportunity in the region to produce enough. I think that raises extremely serious problems for our present policy of help to some underdeveloped countries. Now, all this changes, of course, our aspects, our attitude to policy in a great many ways. But the crucial one is still the one towards the necessity and essential condition of the institution of several property and in particular several property in the means of production as an indispensable instrument of preserving the present population of the mankind. Half the mankind, at least officially, we are told beliefs in the opposite, beliefs that it is by abolition of the institution of several property that we not only can still maintain the present population, but that we can provide for it better than we did. 
Now, if what I have said is right, if it is true that I could only hint at that several properties are indispensable basis of that utilization of widely dispersed knowledge on which the market economy rests, it means that the opposite view, chiefly represented by communism, would lead not to an improvement of the population, would probably bring it about at half, something like half the present population of the world would die. There, of course, there are significant illustrations of this, quite a number of countries who were great exporters of food, so long as they were operated on a market economy, not only Russia, but also Argentina and others, are already no longer able themselves to maintain their own population, which has not increased a great deal, nothing like as much as the population in the West. But the final conclusion is therefore what seems to be a political conclusion, a conclusion about the consequences of two alternative ethical systems to which the two halves of the world now adhere. If it is true that we can maintain even the present population of the world, only by relying on that whole system of market economy resting on the several properties and the instrument of production, and that its abolition would lead to something like a large proportion of mankind dying of hunger, that would seem an undesirable result. Even if the scientist is not allowed to call it undesirable, I can say it is out which most people would not desire if they knew it. And the last conclusion, which I am afraid I will draw, even at the risk of totally discrediting this glorious meeting of scientists here, that the contrary view which believes that we can do better in maintaining the present population of the world by abolishing several property is well meant but very foolish. Thank you. Thank you.
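One purely arithmetic point in the lecture, that average income can fall while no individual becomes poorer simply because the poor multiply faster than the rich, is easy to illustrate with invented numbers; the figures in this sketch are assumptions of the illustration only.

```python
# Composition effect: the average can fall although every individual's income is unchanged.
rich_income, poor_income = 100, 10

# Before: 10 rich and 10 poor people.
average_before = (10 * rich_income + 10 * poor_income) / 20   # 55.0

# After: incomes unchanged, but the poor have multiplied faster (10 rich, 40 poor).
average_after = (10 * rich_income + 40 * poor_income) / 50    # 28.0

print(average_before, average_after)  # the average drops, yet nobody is worse off
```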
|
Friedrich von Hayek received The Sveriges Riksbank Prize in Economic Sciences to the Memory of Alfred Nobel together with Gunnar Myrdal in 1974. It was an interesting combination of two ideological opponents, von Hayek representing classical liberalism and free-market capitalism and Myrdal a much more socialist view on economic questions. It is reported that Myrdal did not appreciate having von Hayek as a co-recipient, which might be a reason that Myrdal didn’t give his Prize lecture together with von Hayek, but only several months later. At the Nobel banquet in 1974, von Hayek gave a speech in which he voiced his doubts about the still relatively new prize in economic sciences. One reason, he said, is that a recipient of the prize “is even made to feel it a public duty to pronounce on problems to which one may not have devoted special attention”. But in Lindau 1983, in his lecture originally entitled “Entwicklung und spontane Ordnung”, von Hayek brings up questions that he had given much thought and also written extensively about. We know that Charles Darwin’s theory of evolution is accepted by most natural scientists. It describes how biological systems spontaneously tend to evolve towards more rational construction and behaviour by the mechanism of natural selection. But is a similar spontaneous evolution taking place in areas of social construction and cultural behaviour? Friedrich von Hayek argues that this is the case. He underlines the complexity of the social systems and gives an historic overview reaching all the way back to Aristotle. Since von Hayek not only was an economist but also a well renowned political philosopher, every word in his lecture seems to be of importance. But one has to concentrate hard, since he delivers the lecture in English with a rather strong accent. Anders Bárány
|
10.5446/55151 (DOI)
|
Good morning. I am very happy to be here today. This is the third time I have come to Lindau. I come here often for two reasons. First, this is a good opportunity to meet with young physicists and to discuss with them their interests in physics. And second, I think equally important, is to have an opportunity to meet with other laureates. What I would like to do today is to give you a feeling of what high energy experimental physics is about. People often say high energy physics is very expensive, it involves a lot of people, and it is not clear what you get out of it. Now what you get out of high energy physics, at least to me, is a basic understanding of what the building blocks of nature are. We have been looking for the building blocks of nature for thousands of years. A few thousand years ago, people thought earth, air, gold were the fundamental elements. At the turn of the century, we had the periodic table and we viewed the building blocks of nature as the hundred or so elements. And then electrons and protons were discovered. At that time, we knew the building blocks of nature as two particles, electron and proton. Subsequently, the positron was discovered, the muon was discovered, the pion was discovered, and a host of elementary particles were discovered. And then our concept changed again. We viewed the building blocks of nature as a hundred to two hundred elementary particles. In the early 70s, through the work of Murray Gell-Mann, George Zweig and others, we had the quark model, and then we viewed the building blocks of nature as formed from three quarks. From 74 on, we have viewed the building blocks of nature as maybe five or six quarks with their corresponding leptons. And so, what is the truth? It really is a function of time. To start with, I will give you an example of our understanding of the structure of the proton. Please switch off the light and I can now talk in the dark. One of the fundamental building blocks of nature is the proton. In the 20s, we viewed it as a small object at the heart of the hydrogen atom. In the 50s, we viewed it as a large object with pi mesons in its vicinity. In the 60s, mainly through the work of Professor Hofstadter and others, we viewed it as a large object with a structure, denser at the center than at the edges. In the 70s, we viewed it as containing many small point-like objects, partons or quarks. Nowadays, there are even people who speculate that it may be unstable. At this moment, we view the world as made out of quarks, so far five of them discovered, u, d, s, c and b, and leptons, electron, mu and tau, and two neutrinos associated with the electron and the mu. These, at this moment, we view as the building blocks of nature. The forces among the elementary particles are of three kinds. There is the strong force, the force between quarks, which is transmitted by gluons. There is the weak force; following the very important work of Weinberg and Salam and others, we view the weak force as transmitted by charged and neutral currents. An example: an electron-positron collision at very high energy, close to 100 GeV, going to a muon pair, will be dominated by the transmission of the Z0 particle. We also have the electromagnetic force; the force between two electrons is transmitted by photons. Now you can ask some experimental questions. The first question you can ask on the strong interaction is: how many quarks exist? At this moment, in elementary particles, we have a u, a d, an s, a c and a b, and where is the sixth one, the seventh one, the eighth one?
Currently, from the work in Hamburg, at DESY, we know that the sixth quark, if we call it the t, if its charge is two-thirds, must have a mass larger than 22 billion electron volts. The second question you can ask is: what is the size of a quark? The current limit is that the size of a quark is less than 10 to the minus 16 centimeters, one part in a thousand of the size of the nucleus. And there are some more detailed questions you can ask, such as: what are the properties of gluons? The difference between a gluon and a photon, the most striking difference, is shown in the following. A photon, since its charge conjugation is minus one, cannot decay into a pair of photons. For a gluon, charge conjugation is not a good quantum number, and it can decay into a pair of gluons. So such a three-gluon vertex does exist and would be a characteristic of so-called quantum chromodynamics. And such a thing has really not been identified conclusively. It will be very important in our understanding of strong interactions. Some experimental questions you can ask on electromagnetic interactions are also very obvious. The first is: how many kinds of heavy electrons exist? We know a high energy photon goes to an electron pair; when the energy is high enough it goes to a mu pair, and when the energy is even higher it goes to a tau pair, which has a mass of about 2 GeV. When you have a 100 GeV photon, how many heavier electrons exist? The current limit from Hamburg shows they must be heavier than 22 GeV. The next question you can ask: what is the size of an electron? The current limit is again less than 10 to the minus 16 centimeters. That is also true for the tau and for the mu. The tau of course has a mass of twice the mass of the proton, but its size is one part in a thousand of that of the proton. The next question you can ask: are there excited electrons? We know a pion together with a nucleon goes to an N star. Is there an excited electron which can go to an electron plus a photon? The current limit is that if such a thing exists, its mass must be larger than 70 GeV. Then some experimental questions you can ask on weak interactions are the following. The first is how many kinds of Z zeros and W's exist, following the work of Weinberg and Salam. If you use the standard model, the mass of the Z zero should be 94 GeV, and that has been discovered. Experimentalists can ask the question: is this the only one? Could there be more Z's and more W's? Let me remind you that in the 1940s, when the pi meson was discovered, most physicists thought we had understood everything, and subsequently quite a few particles very similar to the pion were found. The second question you can ask is how many kinds of neutrinos exist. We know that the electron has its own neutrino and the mu has its own neutrino. Whether the tau has its own neutrino or not, we have not found the tau neutrino; and with more leptons, whether there will be a corresponding number of neutrinos. Another very important question is: do Higgs particles exist, which are responsible for the origin of masses? To answer some of these questions, the largest accelerator in the world is now under construction in Geneva, Switzerland. Let me first define: this is the border between France and Switzerland, this is the city of Geneva, and this is the accelerator, which has a circumference of 27 kilometers. It is buried under the ground at between 500 meters and about one kilometer. There, the electrons and positrons are accelerated to a center-of-mass energy initially of 100 GeV, finally of 200 GeV.
At four intersection regions, number two, number four, number six, number eight, electrons and positrons collide, and at these collision points experiments are set up to answer some of these questions. A particular experiment I want to discuss with you today is the experiment in area number two, an experiment which I am involved in. I want to go over a little bit of the nature of this experiment to give you a feeling of what high energy physics is about and what its purpose is. The detector of this experiment is buried 50 meters underground; electron-positron collisions occur in here. There is a very precise device known as a vertex chamber which measures the trajectory of an elementary particle. It is surrounded by a device known as an electromagnetic detector, which measures photons and electrons with very high precision; it is a special new kind of crystal, bismuth germanate, otherwise known as BGO, and then there are 400 tons of uranium hadron calorimeter. What this device does is to absorb all the pions, kaons and hadrons and measure their total energy and the coordinates of the energy. What is left are the muons, which we measure very precisely in a magnetic field of 5 kG provided by a thousand tons of aluminum coil with a return yoke of 8,000 tons. For comparison, this is a standard physicist. This experiment is the first large scale collaboration between physicists from the United States, the Soviet Union and the People's Republic of China. Unfortunately it involves a lot of people. It involves a lot of people not because one wants to, but because of the complexity of this experiment. From the United States, they are from MIT, from Harvard, from Northeastern, from Yale, from Princeton, Rutgers, Johns Hopkins, Carnegie Mellon, Ohio State, Oklahoma, Michigan and Hawaii, about 120 physicists. From the Soviet Union, from the State Committee for Utilization of Atomic Energy, there are 40 very good physicists working with us, and from the Chinese Academy of Sciences and from the Ministry of Education, 40 students. All the Swiss universities, ETH Zurich, Geneva, Lausanne, are working on this experiment. There are groups from France, from Italy, from Spain, from India; a very good group from Aachen and from Siegen are working with us; and then from the DDR, from Holland, from Hungary and from Sweden, about 150 physicists. Looking at this, I think I cannot resist making an observation. It has been easier for me to obtain a collaboration between the United States and the Soviet Union, and the Soviet Union and China, than to have all the Swiss work together. That seems to be very difficult. Now with such an experiment it is not only physics ideas and instrumentation; you encounter some logistic problems. So these are maps of the physicists involved in this experiment, and the total cost is somewhere between 120 and 150 million Swiss francs. What is important, beside the physicists and the financial resources, are the engineers and technicians who are involved in building such a detector. From the Institute of Ceramics in China there are 200 technicians, and then from Aachen, from Holland, from CERN, from Switzerland, from France and from ETH, 300 technicians; in total about 700 engineers and technicians are involved. The contribution from the Soviet Union is fairly large. It involves about 20 million Swiss francs and 300 technicians, involved in the construction of a 400-ton uranium calorimeter and three and a half tons of very high purity germanium oxide for the BGO, and providing 7,000 tons of low-carbon steel for the magnet.
And there is an equal amount of contribution from the United States Department of Energy and from the rest of the European countries. Now the question you want to ask is: how do you design such a detector? What are the criteria you use to design such a detector? It is important to realize that designing a detector of this type involves a lot of people and a long time constant; you had better not design a detector based on one person's model and one person's theory, because it is easy for a theoretical physicist to create a new theory and much harder for an experimentalist to change his detector. So let me report to you on the design considerations. The first thing we decided is that there are many elementary particles you can measure: you can measure all the hadrons, or you can measure hadrons plus electrons. What we have decided to do is to concentrate on three particles: photon, electron and muon. We measure them very precisely, with a momentum resolution of 1% up to a mass of 50 GeV. What is the justification? The justification is basically an intuitive one, made by observing that in the last 30 years or so some of the most important discoveries in elementary particle physics were done by experiments measuring photons, electrons and muons. The work I have done measuring the J particle was made possible by observing a peak at 3.1 GeV with a detector that measured electron pairs with a mass resolution of one part in a thousand. The discovery of the b quark was made possible by measuring muon pairs with a mass resolution of 2%. The discovery of the various transition states of charmonium was possible because a detector using sodium iodide has a very good resolution. The discovery of the Z0 by Professor Rubbia was done on electron pairs, and the discovery of the W plus and W minus was done by measuring large momentum transfer muons. Now what do you do with hadrons, with pi and K and so forth? Hadrons at this high energy tend to come in bundles, like jets, and so what we do is not to measure them individually but to measure them collectively, with a very good resolution of about 0.45 over the square root of E. An experiment carried out in Hamburg in 1979, the discovery of the gluon, was done with such a simple technique. Theoretically, when you have an electron-positron collision you produce a quark which fragments into a jet, an anti-quark which fragments into a jet and a gluon which fragments into a jet, and once you measure the total energy you will see a three-jet pattern; therefore it was not necessary to measure individual particles but to measure them collectively. Those are then the two design considerations for doing such experiments. The experiment is now under construction. Let me show you a few transparencies of how these things are done. These are two large shafts through which the detector will be lowered, and the experimental hall is very much underneath. The hall is 23 meters across. It is 50 meters underground. To build a magnet the size of this lecture hall is a very simple job. What you do is to build it the same way as you build a factory hall, except you use more steel and less concrete. So what we do is first pour the concrete and then put in long bars, 14 meters long and 1.2 meters by 10 centimeters, 220 pieces of different shapes, as a return yoke, and the pole pieces, again large pieces, at the ends. The coil is made out of aluminum. First there are the aluminum pieces, and then you use an electron gun to weld the aluminum pieces together.
After you weld them together you make them into half turns like this, and you have a special crane; you take them up and you store them outside. Of course during this time you make the necessary checks on current and on cooling. More than half of these are now finished. This is about half of the coil. That will be finished at the end of 1988, to start experimentation at the beginning of 1989. So besides the coil, on the inside one of the major elements is the muon detector. To provide a mass resolution of 1% for a muon pair with a mass of 100 GeV means a 45 GeV muon will bend by 3.7 millimeters. To have a mass resolution of 1% means the alignment for this detector, the mechanical alignment, the resolution for the chamber itself and the supporting stand, must be of the order of 30 microns. And this is not easy, because the device is rather big, about 12 meters by 12 meters, and you want to know this to 30 microns. And there are quite a few of them. This was worked out at MIT, and these are some of the chambers: this is the inner chamber, the middle chamber, the outer chamber. The wires are going through here, and you can see the electronics and the cabling system. 16 such pieces are now under construction. A question which is very important for precise measurement is the calibration of your detector. Without a calibration, of course, you will not know where you are. Calibration for the muon chambers is provided by an N2 laser: there is a laser in here and then a guide for the laser to a movable mirror, which on command flips the laser into many positions; the beam then goes through the chamber to a position-sensitive diode and simulates an infinite-momentum muon. Without such a thing you cannot really proceed. And this shows one layer: middle chamber, lower chamber, upper chamber, and here is the guide for the N2 laser to go through. When the chambers are finished we fire the laser and see what resolution we get. With a thousand shots of the laser we measure a straight line to 50 microns. That means, for a given shot, since the total sagitta is 3800 microns and each individual shot is 50 microns, delta p over p is 1.3 percent, which means delta M over M is a little better than 1 percent. For a thousand shots, the center is then known to 50 over the square root of N, which is 1.6 microns. That means the center is now known to 1.6 microns even though the distance is of the order of 12 meters by 12 meters. Inside the muon chamber is a 4-pi hadron calorimeter which provides an energy resolution of 50 percent over the square root of E. It also measures the collective information on jets to two degrees and also enables you to track the muons going through the detector. The construction of this large hadron calorimeter involves the First Institute of Physics in Aachen and the Soviet Union. What is the principle of a hadron calorimeter? What you need to do is to put in very dense material, let the hadrons go through and interact, depositing a lot of energy, and you essentially measure the total energy. And so it is constructed with 144 elements of uranium plates sandwiched with detectors which measure the charged particles. It is divided into nine rings, each ring divided again into sectors, in total 144 sectors, 16 sectors in each ring. And here then is a construction map of how the hadron calorimeter and the electromagnetic calorimeter are being built. The uranium plates are made somewhere deep inside the Soviet Union, and the support rings are again made in the Soviet Union.
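As a rough check of the numbers quoted in this passage, and assuming only the standard sagitta relation plus independent measurement of the two muon momenta (assumptions of this note, not statements from the lecture), the arithmetic is:

\[ \frac{\delta p}{p} \approx \frac{\sigma_s}{s} = \frac{50\ \mu\mathrm{m}}{3800\ \mu\mathrm{m}} \approx 1.3\%, \qquad \frac{\delta M}{M} \approx \frac{1}{\sqrt{2}}\,\frac{\delta p}{p} \approx 0.9\%, \]

\[ \sigma_{\mathrm{centre}} \approx \frac{50\ \mu\mathrm{m}}{\sqrt{N}} = \frac{50\ \mu\mathrm{m}}{\sqrt{1000}} \approx 1.6\ \mu\mathrm{m}, \qquad \frac{\sigma_E}{E} \approx \frac{0.5}{\sqrt{E\,[\mathrm{GeV}]}}. \]

The last expression is the usual reading of "50 percent over the square root of E" for a sampling calorimeter, with E in GeV; the units are not spelled out in the lecture, so this is an interpretation rather than a quotation.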
The raw material for the germanium oxide comes from near the Black Sea; it is all shipped to Moscow and then goes to Switzerland. This is one of the 144 uranium plates, and this uranium is of very high quality, very flat, cut to the correct size, 60 pieces together. To measure the hadrons, between the uranium plates you have the detectors, and up at ETH this calorimeter is now being constructed. The next item, when you go from outside to inside, is the device to identify electrons. The energy resolution for the electron is obtained with this new crystal, BGO. The pi-e rejection with this BGO has been measured to be between 10 to the minus 3 and 10 to the minus 4. When you identify an electron or measure a photon, one of the very important things is to reject pi zero to two photons and so-called Dalitz pairs, and that you do with the vertex chamber, which we call the TEC, which measures the opening angle. The raw material for this crystal, as I have said before, comes from the Soviet Union. The people who are very good at growing this crystal are not in Western Europe and not in the United States, but in the People's Republic of China, at the Institute of Ceramics in Shanghai. For whatever reason, I have discovered the shortest way between the Soviet Union and China is via Switzerland. So we have set up a factory in Shanghai involving about 200 people under Professor Yin, a very well-known crystallographer: 11 research staff, 20 engineers, 26 technicians, 92 workers and 50 administrative people, probably party members. And these are some of the very good crystal growers at the Shanghai Institute of Ceramics, who are making 12,000 pieces, close to 10 tons of germanium oxide, for us. And these are some of the crystals. Each is 24 centimeters long, 2 centimeters by 2 centimeters at one end, 3 centimeters by 3 centimeters at the other end. And we have carried out a worldwide competition, and the ones in Shanghai produce the best crystals. When the crystals arrive we put them into a beam and measure their response. And this is a 4 MeV Van de Graaff accelerator, located in Liyang, which produces from a radiative capture process 20 MeV gamma rays, and then we measure the response in this box which holds the BGO crystals. And this shows that when you have a photon of 20 MeV you have a resolution of 7.7 percent. This is low-energy Compton scattering, which you have to reject, and from the center to the upper peak you see the 7.7 percent. What does this mean? This means that when you deal with large quantities, with large quantities of sodium iodide like the Crystal Ball experiment, or large quantities of BGO, you obtain the following comparison. In terms of full width at half maximum, these are the measurements of BGO and this is the measurement of sodium iodide. For an individual crystal I think the sodium iodide resolution is better, but when you put them together they have comparable resolution. The difference is that BGO is non-hygroscopic, so you can handle it with your hands, and it is denser, so more compact. When you go from outside to inside, besides the muon chamber, the hadron calorimeter and the BGO crystals, finally at the center you measure the particle vertex, and the vertex detector is the work of many physicists from Aachen, from CERN, from the Swiss Institute for Nuclear Reactor Research, from the University of Geneva, from Siegen, from the DDR and also from ETH Zurich. The principle, like all these things, is very simple. I can visualize it in the following way.
In an ordinary particle detector you have a ground, you have a negative high voltage, you have an electric field, and so you put gas in; when a particle goes through, it loses its energy by ionization, and so you have clusters of electrons which then drift toward the anode. Normally the first arrival gives the signal, because your electronics is not fast enough, and therefore you have a very large fluctuation. To obtain high precision, what we have decided to do in this detector is to put in a grid: reduce the drift-region velocity by a factor of 10, and in the amplification region you keep the original velocity. Because your velocity is now lower by a factor of 10, your electronics has enough time to identify not only the first arrival but the second arrival, the third arrival, the fourth arrival, so you have a complete history of all the clusters. With this you will be able to get all the information on the history of the particle passing through this chamber, and therefore a very good resolution. Of course, to obtain a good resolution the mechanical part of your detector has to have a comparable precision, and this is a full-scale model, 30 centimeters by 30 centimeters, built on the principle which I just described. When you put it into an experimental test beam in Hamburg you see, at a mean drift length of 1 centimeter, at 1 atmosphere a 30 micron resolution and at 2 atmospheres a 25 micron resolution. This is a very large device, and it is not very easy; in fact I think it is the first time people have obtained a 25 to 30 micron resolution with such a large device. What is more important is that this device has the property of simultaneously identifying many particles, and this shows what happens when in one burst you have three particles going through the chamber: when two of them are separated by 230 microns you can clearly distinguish them. But so much for the detector; the next item I would like to discuss is computers. With so many physicists involved, the first thing you have to do is to make sure people can analyze the data and communicate with each other, and so we have had to establish our own computer network: all the physicists in the United States and all the physicists in Europe, from different countries, are linked together. Now you will ask what is the difference between this detector and the three other detectors now being built at CERN, and two similar detectors built in California, where they have a 50 GeV linac colliding with a 50 GeV linac. So I want to discuss a little bit the unique physics of this experiment, which we call L3, not covered by the three other LEP detectors and the two single-pass collider detectors. All these other five detectors are very good detectors involving very advanced technology, but they mostly concentrate on identifying pions, kaons, protons and hadrons. This detector, in contrast, measures electrons, muons and photons. So let me give you three examples. First, at an energy of 100 GeV: at 100 GeV, because the BGO crystal has such a good resolution, if you have an excited toponium state of order 70 GeV which then decays, like a toponium state, to a single photon plus a P-state which decays to many hadrons, your crystal can identify the single photon as a clear peak. And what happens when the accelerator goes to, let us say, 180 GeV? At 180 GeV, let me give you an example at 165 GeV for one year of experimentation.
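A minimal sketch of the time-expansion idea described in this passage, using illustrative numbers that are not from the lecture: if the readout electronics can separate cluster arrivals no closer in time than \(\Delta t\), the position granularity per cluster is roughly

\[ \sigma_x \sim v_d\,\Delta t, \]

so reducing the drift velocity \(v_d\) by a factor of 10 (say from an assumed 50 \(\mu\)m/ns to 5 \(\mu\)m/ns) turns an assumed \(\Delta t\) of 10 ns from a granularity of about 500 \(\mu\)m into about 50 \(\mu\)m, and averaging over n resolved clusters improves this further, roughly as \(1/\sqrt{n}\). The specific velocity and timing values here are assumptions for illustration only; the lecture quotes only the factor of 10 and the final 25 to 30 micron resolution.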
If you do the experiment electron plus positron goes to a Z0 plus a Higgs particle, which is responsible for the origin of masses and whose decay properties are not known, what you can do is to measure the Z0 decaying to mu plus mu minus or e plus e minus and thereby identify the missing-mass peak of the Higgs particle. Because of the good resolution for muons and electrons, you identify 20 events for muons if the Higgs is 50 GeV, and 43 events for electrons, because the acceptance is larger for electrons; you can clearly see sharp peaks. This is a very important example, because by measuring muon pairs and electron pairs you are able to look for a particle whose properties you really do not know; there is no way for you to design a detector to identify it directly, so you have to use a missing-mass technique. There are some plans, in Hamburg and at CERN, to use the LEP tunnel at higher energy to do a proton-antiproton or even a proton-proton collider. If such a thing is realized, you can for example have a 5 TeV antiproton on 5 TeV proton collision. In such a collision this detector, without modification, will provide the following properties. For a particle of mass 1 TeV, when it decays to an electron-positron pair you will have a mass resolution of half a percent. For this 1 TeV particle decaying to a muon pair you will have a mass resolution of 10 percent, except this time, because of the precision of the muon detector, you can measure the charge asymmetry and therefore determine the original properties of this 1 TeV particle. Many theories now think the next mass scale is about 1 TeV. For hadron jets, again, you can measure the mass to 3 percent. Indeed we have already carried out some studies to see that if you have a p-pbar collision at 10 TeV producing a heavier Z zero, a Z zero of mass 1 TeV, you will get hundreds of thousands of events, and it clearly can be identified. Now if you view the purpose of this detector based on our understanding of theory, you can view it in the following way. Our current theory of elementary particle physics is based on two fundamental principles: one is gauge invariance, the other is symmetry breaking. Gauge invariance leads to quantum electrodynamics, quantum chromodynamics and the Weinberg-Salam electroweak theory, and this has been tested by many experiments: g minus 2 experiments, electron-positron to mu pairs, gluon jets, neutrino scattering, muon scattering, the discovery of the Z and the W. Indeed, all the experiments done up to now, at CERN, in Chicago, at SLAC, at Brookhaven, all measure and test QED, QCD and the electroweak theory. Clearly with a new accelerator you can look for more quarks, more Z zeros, more W's, and study the three-gluon vertex; clearly more tests are necessary and that is very important. But what is more important, at least to me, is the understanding of symmetry breaking. Symmetry breaking is thought to be responsible for the masses of all elementary particles. So far there is really no experimental test. There are many predictions: Higgs particles, supersymmetric particles, technions, technicolor particles. I am sure that by the time this detector is finished there will be many, many more predictions. By designing a detector measuring photons, electrons and muons precisely, the main aim is to try to find, by the missing-mass technique, any of these particles, whose properties we do not have to know; you do the missing-mass technique and thereby try to understand the origin of the masses of the particles. Thank you.
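A short note on the missing-mass technique emphasized in this lecture: for \(e^+e^- \to Z^0 + X\) at a known centre-of-mass energy \(\sqrt{s}\), with the \(Z^0\) reconstructed from its lepton pair, the recoil mass follows from standard kinematics (a textbook relation assumed here, not written out in the lecture):

\[ M_X^2 = s + M_Z^2 - 2\sqrt{s}\,E_Z, \]

so a narrow peak in \(M_X\) identifies the recoiling particle even when its decay products are never directly identified, and the width of the peak is set mainly by the resolution on the lepton-pair energy \(E_Z\), which is why the precise electron and muon measurement matters.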
|
For his third lecture at one of the Lindau Meetings, Samuel Ting had chosen a title which he in principle would keep as a running title for a total of three lectures, 1985, 1988 and 1991: “Search for the Fundamental Building Blocks of Nature”. This title, as Ting explains in his introduction, is the driving force of his continued work in high energy elementary particle physics. The main reward of the costly experiments he performs is a better understanding of these building blocks. Before the quark model appeared, there were hundreds of particles in what looked like the periodic table of elements. Then the quark model brought this number down by a factor of about ten and brought with it something very similar to the understanding of the periodic table of elements through Rutherford’s discovery of the atomic nucleus and Bohr’s model of the atom. In a pedagogic way, Ting follows our view of the proton from the small object of the 1920’s, through the large object of the 1950’s, the large object with structure of the 1960’s to the large object built of point-like quarks of the 1970’s. As he points out, in 1985 there were already on-going experiments to determine if the proton can decay. After listing some other open questions, e.g., how many different kinds of heavy electrons and neutrinos exist, Ting then moves on to his main theme this year, the construction of the new 27 km accelerator ring LEP at CERN and its detectors. In this ring, electrons would circulate one way and positrons the other. In certain places the two beams would be brought to collide, and Ting himself is involved in a collaboration building one of the huge underground detectors at such a collision point. The collaboration consists of an international team of several hundred physicists and technicians from all over the world. Ting spends considerable time on the design of such a detector and even goes into some technical detail. Anders Bárány
|
10.5446/55115 (DOI)
|
Thank you, Ernst, for your kind introduction. I thought in this lecture that I would take the opportunity to show you all that, in spite of people winning Nobel Prizes and so on, they can still dress in a more or less civilized way, so I wore a tie. I should also mention that I have changed the title of my speech, and for the next half hour I intend to discuss matters of nuclear power, human rights, nuclear weapons, and the like. I really think they'd rather get this, you know. Okay, what I will talk to you about is not so much a large amount of detailed information, although I will make illustrations, of course, from our own work and from other people's work, but a feeling that I personally and many other people also have about the directions in which the preparation of proteins will go. The development of the DNA recombinant methodology, of course, has made it possible in many cases to prepare large molecules, proteins, hormones, etc., and this is now a tremendous industry throughout the world and very successful. At the same time, while this DNA recombinant methodology has improved so enormously, there has been, in a sort of quiet, secret way, a great deal of improvement in the methodology of protein chemistry, particularly in the organic synthesis of peptides, even quite large ones. And I really feel that over the coming years, many of the proteins that are now being made or are being tried or are being attempted by DNA technology may ultimately be made by protein synthetic methods. The technology of purification, for example, has improved enormously. The HPLC, the high-performance liquid chromatography techniques, are not only much more specific and discerning than the old ion exchange methods and so on, but they can be run on a very large scale. I was just visiting the biotechnology offshoot of the Carlsberg laboratory in Copenhagen last week, and I am impressed with the way that the large HPLC technology has come out from Millipore Waters in Boston and is now present in many industrial and university laboratories, large enough to take, in single runs, kilos of synthetic materials for purification with great success. And then, of course, we have my own baby, which is affinity chromatography, which has also improved to a point where, in many cases, one can take a crude mixture of biological material or of synthetic material, pass it through an affinity column, that is to say an immobilized ligand column, which would catch only one type of substance, and go very often from a great mixture of materials to a single component in one step. So between these methods we now have excellent purification techniques for proteins and polypeptides. The most important, of course, is the development of the synthetic methodology itself. We have all been using the Merrifield solid phase method mostly, in spite of the fact that I think the purest large polypeptide that has been made synthetically was made entirely by hand, that is to say, not by machines. I'm thinking of Wünsch, who made glucagon, which is 29 amino acids. It took about three years, of course, and he had 27 graduate students crystallizing and so on, but it was very pure when it was finished. But that's too much trouble. So the new methodology improves solid phase synthesis in the following way.
Instead of using the most common protecting group, which is the t-Boc, tertiary butyloxycarbonyl, group for each monomer, and which has to be removed at each step with acid, the newest likely monomer will be the so-called Fmoc amino acids, 9-fluorenylmethoxycarbonyl, which can be removed with alkali, and this means a much more gentle stepwise addition of monomeric units. And also it has the advantage of permitting the synthesis of peptides containing tryptophan, because tryptophan is unstable to acid; with Fmoc one can make tryptophan peptides and have no problem. So I do believe that, without further complication, one could at the moment look forward to the synthesis of polypeptides of certainly 50 amino acids, I think, in length, and possibly 100 or so, by purely synthetic and completely automatic methods. I know both the Applied Biosystems laboratory in California and Millipore Waters in Boston are at the moment about to release to the public, for large amounts of money, machines that will use Fmoc and allow the synthesis of fairly large peptides. Well, I'd like to suggest another extension of protein synthesis, protein organic synthesis, which suggests that we might be able to get even larger fragments. We have been working with a technology which could be called stitching; by stitching I mean sewing together two large pieces. It was developed originally by Laskowski and Hohenberg in New York somewhere, Buffalo, I think, but we've used it in our own laboratory with several protein systems with considerable success. It's a very, very interesting situation. I'd like to show you what I have in mind. I might just show you first the protein that we began by studying, which is a protein called staphylococcal nuclease, a nuclease produced by Staph aureus as an extracellular protein from cultures. Here we have staphylococcal nuclease, 149 amino acids, one tryptophan here. I should point out lysine-lysine at 48, 49, and another one up here at 5, 6. It has no disulfide bonds, so there are no problems with S-S formation from thiol groups. The molecule, if denatured completely and allowed to re-nature, folds up in three dimensions with a half time of about 200 milliseconds, very rapid folding. The next slide shows the staphylococcal nuclease molecule in a simple folded-up diagram taken from the crystallographic work on the three-dimensional structure. This three-dimensional structure was worked out by Dr. Cotton and his colleagues at MIT some years ago, and you will see here a chain which suddenly becomes an anti-parallel pleated sheet and helix, another anti-parallel pleated sheet, and two more helices closer to the carboxyl terminus. Perhaps one more slide, using a notation developed by Jane Richardson at Duke, which is convenient in showing the helices as little twisted bundles and these anti-parallel pleated sheets as arrows. If you attack this molecule with trypsin, which of course splits following lysine and arginine, and if you protect the rest of the molecule by adding calcium in here and thymidine diphosphate, 3',5'-thymidine diphosphate, the two ligands protect the rest of the molecule and keep it in a tight conformation. You can see here, sticking out into the solution, a loop which contains lysine-lysine 48-49 that I showed you before.
When trypsin is added to this, you cleave that lysine-lysine bond and also this 5-6 lysine up at the end, and you get two pieces, one fragment from 6 to 49 and one from 50 to 149. The two pieces separately, the large piece and the small piece, have no structure of their own, but if you mix them together, in spite of this cleavage, they combine; they form a folded structure which is completely isomorphous with the native molecule, so that the same structure is achieved without that bond. Now the important point I want to make is that if one could make large peptide fragments synthetically, it is possible to stitch them together and regenerate the original protein. In this particular case, if you take these two pieces in the presence of calcium and thymidine diphosphate and add trypsin, but now not in water but in 90% glycerol, if you decrease the water in the solution and increase the glycerol concentration, the equilibrium of the trypsin reaction is shifted towards synthesis instead of cleavage. This bond can be reformed almost in 100% yield; it goes to completion, and you regenerate the molecule in quite good yield and activity. So we had then the possibility of making something quite large by taking advantage of this stitching technique. I'll show you in a moment what we're trying on another system now. Using such a... the next slide perhaps. Here we have another photograph of this trypsin cleavage giving this small piece and this large piece. Now, as a study in chemical modification by synthetic methods, we proceeded to make quite a large number of derivatives of this smaller fragment, which is 43 amino acids, and could replace many of the residues here with substitutions that still permitted full activity when this bond was made. For example, this calcium atom, which is required for activity, is liganded by four carboxyl groups, which is a standard situation with calcium. We found that the substitution of any one of those four carboxyl groups, by transforming glutamic acid to glutamine or aspartic acid to asparagine, destroyed the ability to reform the protein and the activity was gone. The polynucleotide chain runs up through this groove in the molecule and there is an arginine residue ordinarily here in the chain, which complexes with a phosphate group. If this arginine is replaced, once again the activity is destroyed. So we found a number of changes that were permitted and a number that were not permitted. So in a sense, these are what you might call chemical site-directed mutagenesis, not involving DNA. On the other hand, of course, the DNA technology is so powerful that at the moment this kind of thing can be done much more quickly. I borrowed a slide from Dr. David Shortle at Johns Hopkins, which shows what sort of success he has had. Here is the same staphylococcal nuclease molecule. These Xs along here represent mutations that he and his colleagues have introduced by random mutagenesis and then insertion into E. coli, which would express each one of these separately. They developed a technique for picking out active and inactive single-amino-acid mutants. He has, I think, out of 149 amino acids, something like 94 different varieties. It is a very powerful technique. We have recently begun to study another protein using the same methodology, and I should tell you why. This is a molecule known as thioredoxin. It's a very common molecule in nature, found particularly in large amounts in a number of bacteria, salmonella, E.
coli, etc. It is involved in catalyzing disulfide bond formation in the maturation of bacteriophage and in the conversion of ribonucleotides to deoxyribonucleotides; it has a number of different functions. For us in particular, it's a nice protein to work with because it's quite small, you see. It's only 109 amino acids. Its structure has been worked out quite accurately by Brändén in Sweden. It has a single arginine residue here. What we have been doing, and I should mention first of all why thioredoxin: down the hall from me at Hopkins is a man called Dr. Ludwig Brand, who is, I think, quite well known in the field of fluorescence. He has extremely fine equipment now, with laser fluorescence and the proper computer technology for fishing out the four or five decay half times and so on and so forth. What we are starting to do now is to take this molecule, which contains two tryptophans, one tryptophan here, which of course is the ideal amino acid for people interested in fluorescence. He has so far been doing studies on the native molecule, and you get some idea of the decay half times for these tryptophans. Since it is a university and we have students who have to learn as much as they can, it is a great opportunity to combine some protein chemistry and some genetics. In my laboratory at the moment, three students are working on the site-directed mutagenesis of this protein using the standard cDNA, plasmids, lambda phage vectors, etc., which I don't know much about, but I'm learning. They have made one mutant containing phenylalanine replacing tryptophan, and another one with phenylalanine here, and Ludwig Brand assures me that by studying these he can learn a great deal about the conformation in that region. I, on the other hand, am anxious to expose these students to some organic chemistry, some peptide chemistry. So what we have done is to block all of the lysine residues along the chain with citraconyl groups, which can be added very easily. Leaving only the arginine residue as the residue sensitive to trypsin, the molecule can then be treated with trypsin and split so that we get this piece and this large piece. Then the citraconyl groups can be taken off very easily. The two pieces can be added together; they have some activity without being joined. But at this particular moment we have synthesized this fragment using t-Boc, and we're going to repeat it now using Fmoc in the native form, and we have also begun some studies on the introduction of various fluorescent amino acids in place of some of these C-terminal ones that are sticking out here, hoping to be able to get not only additional fluorescence centers, but also centers for the study of energy transfer between the tryptophans here and something in the C-terminus. So as a teaching situation it's quite nice; I think that whether they like it or not they have to stop making mutants and synthesize the peptide. So it's a good training situation. Well, all in all, let me just finish this business: I really think that with this stitching technique and a little careful designing, and with the improved synthetic technology, we really should be able to make proteins of a decent size, I think, one way or another in the next two years. I think this can be developed. It will never equal the DNA technology for very large proteins, of course, but I think in terms of medically important substances, the many, many peptides that are now being used and discovered in medicine, synthesis will become the important one.
I mentioned Copenhagen before; I visited this Carlsberg biotech company last week, which is entirely devoted to this kind of thing. They are concentrating entirely on organic synthesis and have a number of peptides that they have made in rather large quantities and very clean, because of this large-scale HPLC method. They are using not just trypsin and carboxypeptidase and a few other things; they are using a whole variety of enzymes, including papain and other proteases, chymotrypsin and so on, and arranging the C-terminus of the portion to which the smaller piece should be added with different blocking groups that are more specific for the enzyme in question. I think they are doing quite well, and they are starting to sell some clean things that went through the FDA. Finally, I would like to say a word about another area that I really think will become very important in medical science, in industrial medical science for that matter, and that is the whole idea of synthetic vaccines. If you make small pieces... let me have a slide first, maybe. One is interested in whether or not small pieces of proteins can fold into the three-dimensional structure that is characteristic of that piece when it is part of the protein molecule. For example, with the Staph nuclease, the one I showed you before, you have this pleated sheet area and three alpha-helical portions. You can chop the molecule in different ways. The one I will mention now specifically is this smallest piece, it is 99-149, which is prepared by cyanogen bromide cleavage at a methionine residue. You ask yourself, will such a piece in solution by itself take on some three-dimensional structure that resembles what it used to have when it was part of the protein? Putting it in immunological terms: if this is an antigenic determinant, if you make antibodies against nuclease, and this happens to be one of the determinants against which an antibody is made, can you use such a little piece as a competitor or a recognition molecule for the antibody? Will it remember what it used to look like? What we have done with Staph nuclease, simply to begin studying this question, is to look at this C-terminal portion running from 99 to 149. This would be the random form without structure. We ask the question, is this piece of nuclease in equilibrium with a structure that resembles the native conformation? If this were an antigenic determinant in the native molecule, then this piece would be recognized by the proper antibody. To test that, we made antibodies against native nuclease by injection into rabbits and produced, of course, a great mixture of antibodies against each antigenic site on the surface of the globular protein. To clean up the system a little bit, we attached Staph nuclease to Sepharose and caught the total antibody population on that column of nuclease-Sepharose. Then that peak, which was eluted with acid, was passed through another column to which had been attached the peptide 99 to 149. Of this whole large population of antibodies, a much smaller population was caught, namely anti-99 to 149. However, this antibody preparation turned out to be polyvalent. That is to say, if we added this to nuclease, we got a precipitate, so it had more than one antigenic site. So we simply took this peak and put it through still another column containing only 99 to 126, the peptide attached to the column. And then we got out this one.
We attached 127 to 149 to the column and caught the antibody that we did not want, and got anti-99 to 126, which is a non-precipitating antibody. It does, however, inhibit: it binds to the protein and inhibits the activity, destroys the activity. So we had then a specific antibody that would bind and inactivate nuclease. By doing some kinetic measurements on the activity of nuclease against DNA, its substrate, with increasing amounts of anti-99 to 126, we could calculate that that piece, that is peptide 99 to 126, existed in a form sufficiently similar to the native structure to bind to the antibody about 1% of the time. It's not a lot, but at least about 1% of the time it looked like it should, and that's good enough for the kind of synthetic vaccine work that I want to mention. And then we've done this with other bits of proteins, and it turns out that you can isolate a single determinant which can be used for the purpose of vaccine preparation. This was an old experiment with lysozyme many years ago, where we took the loop, there is a loop of amino acid sequence that sticks out from lysozyme, which could be synthesized and attached to a large carrier molecule and injected into rabbits, and that synthetic mixture produced an antibody that would inactivate lysozyme. So it was like a vaccine against lysozyme, essentially. More recently there is this situation, work done by Ruth Arnon and her colleagues at the Weizmann Institute, where the coat protein of MS2 virus, a coliphage, this is the monomer of the coat protein, could be broken up into three pieces. It turned out that this second piece was the antigenically active piece, and this could be synthesized, attached to a carrier, and could neutralize MS2 virus like any normal vaccine. This principle, I think, is being used now quite widely. I know certainly the Scripps Institute in California, the Weizmann Institute and a number of other places. People are looking into making pieces of coat proteins, mostly viral coat proteins, that are large enough to form a bit of structure that is recognized by the antibody against the total virus. I know that work is actively going on with hoof and mouth disease, influenza, cholera, and a few other things. And it's based entirely on this idea of the ability of small peptides to remember what they once looked like. And I think it may become an important aspect of vaccine production. So perhaps I won't talk about nuclear weapons and human rights after all. That is all. Thank you.
|
Genetic engineering was a dual challenge for the eminent protein scientist Christian Anfinsen. He „was a thoughtful critic of the potential misuses of biotechnology and genetic engineering at a time when many of his colleagues were swept up by their promise“[1]. And he remained skeptical towards its ability to synthesize all kinds of proteins. He rather emphasized the prospects of classical protein chemistry. „At the same time while this DNA recombinant methodology has improved so enormously, there has been in a sort of quiet, secret way a great deal of improvement in the methodology of protein chemistry, particularly in the organic synthesis of peptides, even quite large ones,“ he remarks at the beginning of this lecture and briefly mentions the advances in protein purification technologies before discussing current protein engineering activities. „We have all been using the Merrifield solid phase method mostly“, says Anfinsen, paying his respects to Bruce Merrifield who had received the Nobel Prize in Chemistry just two years before in 1984 for his invention of solid phase peptide synthesis, which had culminated in the synthesis of the enzyme ribonuclease A with its 124 amino acids in 1969. Anfinsen also reminds his audience of „the fact that the purest polypeptide made synthetically was made entirely by hand that is to say not machines“. This is a tribute to Erich Wünsch who succeeded with his group in Munich in synthesizing the hormone glucagon with its 29 amino acids in 1967. In vitro peptide synthesis - as opposed to in vivo synthesis through genetic engineering - is difficult because each amino acid has two functional groups, which can enter into a peptide bond. To synthesize proteins in a controlled manner, i.e. in an intended sequence, functional groups that shall not be involved in the reaction must be shielded by protecting groups. In liquid-phase synthesis, the separation procedures, which are therefore necessary after each synthetic step, cause a considerable loss of product and diminish the peptide yield dramatically. Only small peptides with less than ten amino acids can be produced this way. The solid-phase procedure facilitates synthesis and increases its yield by attaching the first amino acid through its C-terminus to a solid polymer. This method is suitable for automation. The N-terminus of the growing peptide chain is protected by either t-boc (tert-butyloxycarbonyl) or Fmoc (9-fluorenylmethyloxycarbonyl). Anfinsen briefly discusses the characteristics of both groups and then suggests „another extension of protein synthesis“. He calls this method „stitching“ and means „sewing together large pieces“ of proteins to synthesize an even larger one. He reports how in his lab at the Johns Hopkins University the stitching technique had already been used „with considerable success“ when applied to pieces of staphylococcal nuclease. He reports on similar experiments with thioredoxin. „If one could make large peptide fragments synthetically, it’s possible to stitch them together and regenerate the original protein“. Appropriate chemical site-directed mutagenesis could support this process. Anfinsen’s suggestion may have sounded anachronistic in the decade that witnessed the launch of the first genetically engineered insulin and other recombinant drugs. Yet even the genetic code has its limitations and not all proteins or their analogues can be engineered by harnessing the forces of bacteria, yeast and the like. Chemical synthesis is here to stay.
Anfinsen’s suggestions were far-sighted, foreshadowing, for example, the introduction of native chemical ligation as a means of synthesizing large proteins in the mid-1990s. Joachim Pietzsch
[1] The U.S. National Library of Medicine: Profiles in Science: The Christian B. Anfinsen Papers; http://profiles.nlm.nih.gov/ps/retrieve/Narrative/KK/p-nid/19
|
10.5446/55117 (DOI)
|
Ladies and gentlemen, I think we must all be very grateful to Count Bernadotte and his associates in Lindau. I am very grateful to be able to speak on a subject that relates to that idealistic spirit. My talk is about the nobility of science. I know that you are a very noble audience and you have been inspired by the previous speakers. I am aware that you are also probably getting a little tired and hungry. I want to reassure you that I think I shall be finished within the 30 minutes. My talk is about the nobility of science. By noble, I mean morally elevated, magnificent and admirable. We have heard at this meeting many very impressive and interesting accounts of important developments in modern science and scientific medicine. But I think most of us would probably hesitate to apply the term noble, in the sense of morally elevated, to most scientific research today. If we did this, it might seem inappropriate or even a sign of false high-mindedness. But in the past, people have thought very differently. In the 5th century BC in ancient Greece, when science was just beginning to take shape, Euripides said the following: Blessed is the man who has laid hold of the knowledge that comes from the inquiry into nature, but surveys the ageless order of immortal nature, of what it is composed and how and why; in the heart of such as he, the study of base acts can find no lodging. Well, it is quite reasonable that such idealistic talk may make some of you shake your heads or smile regretfully. You may believe that science is a noble activity, but that individual scientists have their human failings. They may get caught in the rat races of careerism and in intrigues for research funds. It's not easy to be noble when there are many difficulties for science today, money shortages and commercial and military pressures on science. And it's not easy to feel noble when the public criticizes science for having created bombs and pollution and other undesirable problems. And as a result, many scientists today are rather muted in their optimism about science, although we have here, of course, some notable exceptions who are less muted. And they would not like to press the claim that science really is morally elevated. Now, 100 years ago, it was very different. There was enormous confidence in science as a progressive social force. Lord Rayleigh, the great physicist, referred to science as the noble struggle. Lord Kelvin spoke of the lofty work of science. Sir John Herschel wrote enthusiastically of the intellectual and moral as well as material relations which science created. Thomas Henry Huxley described science as a great spiritual stream. The application of science in war was not such a big problem in the 19th century, because wars were then much less inhuman and destructive than they are now. This optimism about science persisted into the 20th century, and when I was a boy in the 1930s, I was brought up to look on scientists as heroes: Walter Reed sort of risking his life to save mankind from yellow fever, Humphry Davy with his miner's lamp, William Herschel grinding his telescope mirrors in his bedroom, making telescopes and discovering island universes. And in a schoolboy essay, I expressed quite conventional views when I wrote that science is a tool to be used for the development of civilization and the benefit of humanity. And to make clear that this didn't refer only to material benefits, I added that science must be truly regarded as an art and philosophy.
Well, was it ironic that a few years later, there I was busy on the Manhattan nuclear bomb project. Some of this type of veneration of science derived from a naive trust in all powerful reason. That the French phyllis of the 18th century had believed that Newton's triumphant laws of mechanics would soon be followed by similar laws of morality, which would lead not only to the improvement of man in society, but even to their perfection. Now today, people do not have such a simple minded trust in reason. But the veneration of science has another basis which we can continue to accept. Science is based on certain ideals and these ideals are valid even if in practice, science by no means always lives up to those ideals. In a similar way, religion or religious practice doesn't always live up to the ideals of religion. Ideally, religion is concerned with eternal truths. For example, it faces quite directly such questions as what is the meaning of human existence? How can we lead a good life? And it tries to give answers in spiritual as well as in psychological and material terms. But in practice, of course, religion has often been influenced by word enforcers. For example, it's being used to exert political force rather than to encourage spirituality and moral improvement. But such debasing of religion does not invalidate its ideals. I might mention that when I refer to religion, I'm thinking mainly of Christianity, although I'm not a Christian myself. In Newton's time and up till this century, scientific knowledge seemed absolute and final, and therefore it could claim an eternal quality like, in some respects, religious knowledge. But when classical physics was replaced by relativity and quantum mechanics, it could be argued that scientific knowledge was merely temporary knowledge, which was useful for a while but had no permanent significance. Or to use religious terms, it was temporal and not eternal. And I think most working scientists feel in their bones that scientific knowledge does, at a particular stage, give a partial view of how things really are and therefore does derive from some aspect of final truth. Though the partial view of the scientist may be somewhat like the blind man's view of the elephant when he can only touch one particular part of it. It contains truth nonetheless. Of course, most scientists today are much too busy enjoying the challenges of discovery and pragmatically making their science work to be bothered very much to think about truth with a capital T. Yet the extraordinary way science has worked over the centuries gives scientists the feeling that scientific knowledge does contain permanent truth. This might seem obvious enough. I wish to emphasize that the eternal element in scientific knowledge means that it has something in common with religious knowledge and therefore it is proper that scientists should regard scientific knowledge to use Einstein's words with profound reverence. On the other hand, scientists should beware, Sir John Eccles was emphasizing, that reverence doesn't lead to pride and hubris. For example, like the 18th century, Phyllisoph, the social Darwinists of the 19th century, and today the molecular sociobiologists who make much too great a claim that their particular kind of science can comprehend totally human problems. Talking about pride, I should not forget to mention the scientist's humility. 
Darren Kaczalski, a great man, had a nice story about this, that there was a rabbi who was retiring from his position and they were having a meeting to send him off. There was a man making a speech about the rabbi and saying what a very wonderful person he was, such an intelligent man he was, such a kind man he was, and listing all these admirable qualities. Then he noticed as he went through this long catalogue of good qualities that the rabbi sitting there was looking increasingly unhappy so he turned to him and said, is anything the matter? And the rabbi said, yes, he said, you have forgotten to mention my great humility. And I think Kaczalski said, this is what the scientist's humility is somewhat like. Science has in its ideal form many aspects which give it dignity. Based on consensus, cooperation and sharing of knowledge, secrecy is contrary to its true spirit. The work of the individual is only given its meaning when the community of scientists receive the individual's work and use it and develop it. The community of scientists knows no national boundaries in the ideal. Science demands honesty and integrity to a rather greater extent than is required in many other human activities. And of course, scientists are proud of their tradition of freedom of inquiry. Bruno was burnt at the stake for that ideal. Also, he's not the scientist's ideal of freedom from prejudice, very similar to the religious ideal of freedom from the self, from the ego. I'll return to that point later. Certainly, the scientist is justified in feeling that these are fine ideals. And also, the kind of knowledge which scientists seek reflects the harmonious, cooperative way they would work in the ideal state. And this knowledge brings unity into our understanding and reveals order and harmony in nature. Thus, it is against the spirit of science to apply scientific knowledge for destructive, exploitative or socially divisive ends. But although scientists know that science is based on admirable and noble ideals, they are very cautious today about claiming that science is a noble activity. And I think this is mainly because scientists are depressed, quite rightly, by the morally doubtful applications of science. And some scientists try to dispose of this problem by drawing a clear separation between scientific knowledge and the applications of that knowledge. And also between pure science and applied science. They claim that morality, the morality of doing science, is quite distinct from the morality of applying it. But this type of approach of trying to avoid responsibility, I think, fails to give back to science the dignity it had in the past. And another approach is to draw attention to the great many very desirable applications of science in medicine, for example, and to balance these against the undesirable applications. But this unfortunately is really not much use when we have to face the possibility that application of science could destroy civilization and, of course, science itself, including George Wald's physicists. What is the use of having all the fine science described at this meeting, say, if science ends up by destroying itself? If we wish to recover the dignity and nobility of science, we really have no alternative but to face directly the unpleasant and difficult problem of morally questionable applications of science. And application in war is the most important example. We've heard at this meeting many inspiring accounts of the application of science in medicine. 
It would be understandable if some of you preferred that the subject of war simply were not mentioned. But here at Lindow, where Alfred Nobel's spirit lives on, we can hardly forget the great concern he had for preventing war. He was attending international meetings on peace and disarmament through Europe and founding his peace prize. And I feel it would be somewhat disrespectful to the memory of Nobel if I were in any way to apologize for discussing the problem of war here. The high ideals in science should restrict application of science in war, but in practice they have not prevented it in a similar way, in the case similar with medicine. That medicine has high ideals of healing and of regarding human life is almost sacred. And it was these ideals which caused, for example, Edward Mellonby, the secretary of the British Medical Research Council, to avoid work on chemical and biological weapons in World War II. Even so, a great many medical scientists do such work which uses medicine to devise new ways of injuring and kidding people. And the common justification for developing chemical and biological weapons is that they are needed as a defense or as a deterrent against attack with such weapons from other nations. But as is the case for nuclear weapons, there is no way to prove that this type of deterrence is immoral, it may be claimed that the mere threat of having chemical and biological weapons will keep the peace and that those weapons would never actually be used. But I feel these deterrence arguments are partly spurious because chemical and biological weapons have been used in many wars and the continual development of chemical and biological weapons produces international tension and reduces chances of achieving a state of so-called stable deterrence. But in any case, I hope I'm right in believing that a majority of medical scientists today have a sufficiently strong sense of moral revulsion against chemical and biological weapons to reject arguments for doing research on them. The position of non-medical scientists is similar. But how can we make clearer the ideals of science so that the force of these ideals may increasingly prevent scientists from doing weapons research and development? Let's consider in some greater depths how scientists ideally need to approach their work. Earlier, I drew a parallel between the scientists' lack of prejudice and the selfless ideal of religion. I wish to pursue a little further this parallel between science and religion, though I realize that according to some ways of thinking this may seem a strange parallel. If we take the idea of God as in some important respects as being equivalent to the scientific concept of universal order in nature, there is a similarity between, say, the Quaker, the friend, who sees God in every man, and silently and attentively listens to men. There is a similarity between him and the scientist who has faith that order exists in nature and without prejudice observes nature. A scientist ideally has no undue attachment to preconceived ideas and has an open, inquiring mind which is responsive to what is observed. This open mind can consider all possibilities, but it needs to have a sense for the order which exists in nature so that it can discriminate properly between one possibility and another. And normally, of course, science makes progress by severely restricting the possibilities to those which may be expressed in general and abstract concepts in terms of such concepts. 
According to religious ideas, a person ought to be free from undue attachment to his self, his self-image, his ego, and the needs of the self. And through this freedom from the self, the religious person acquires the ability to respond to the creative potentiality of other people and of himself and enables him to help to realize that potentiality in others and in himself. Such a person is in a state of love, using love in the religious rather than the possessive sense of that I want to love and possess something. This idea of non-possessive love can't be defined. It's not used in science, or at least very little. But to make my meaning possibly clearer, consider your own attitude towards someone you love. Do you not have an open, inquiring, and encouraging attitude towards the person you love? And does not this open, inquiring attitude have much in common with the attitude of open scientific inquiry? The poet Kolaridge is said to have claimed that a scientist must love the object he studies, otherwise he could not respond to its true nature. And I believe that Kolaridge's idea of love expresses the ideal scientific attitude as well or better than the idea of curiosity, which has been part of the scientific tradition of value-free, objective, scientific inquiry. See, love includes curiosity, but curiosity need not include love. Now, there's one important difference between love and the open-minded attentiveness of the scientist. Love is open without limits, for it resembles the religious idea of God the Unlimited. Whereas science normally has set limits by concentrating on the abstract concepts and also by ignoring religious, political, and other factors which are regarded as being outside science. And these limits which scientists have imposed on science have helped to keep science free from interference from scientific influences, but they have also led to science being criticized for being too narrow. Such criticism of science and also technology has, of course, been stimulated by environmental problems which require for their solution consideration of a very wide range of interacting aspects, the scientific, technological, economic, social, political, aesthetic, spiritual, all these different aspects. In fact, to deal with these problems, you need a holistic approach and out of such an approach, ideas like love arise in quite a natural way. Thus, I would contend a science needs to make its open-minded inquiry more open than it has traditionally. More so that it becomes less like curiosity, more like love. All problems of undesirable applications of science create a need for a broader science which has some kinds of moral dimensions. And there seems to be no reason in principle why such extra dimensions shouldn't be brought into science. They have in the past been excluded for expediency. But now the reverse is the case. It is expedient to include them. And in doing science, we need, of course, to focus very narrowly on selected areas in which we work and make use of abstract concepts. And science can't work by directly trying to embrace the whole of a problem. But I would contend that while we focus down, we can have, also have a broad perspective at the same time in a general way, bearing in mind our motivations for doing the science and the possible implications of the work. That it is possible to be both narrow and broad at the same time. This is related to the complementarity that George Ward mentioned. 
I suggest we really have little alternative but to bring this moral dimension into science. That scientific research today is producing more and more means by which humanity can destroy itself. And that scientists cannot close their minds any longer to the horrifying amount of war research which is being carried out. Everywhere weapons scientists are working but we seldom like to mention them. It's a bit like sex in British Victorian society. Wide-spread scandalous but not-for-polite conversation. That scientists need to examine what the scientific profession is doing to make the world more and more dangerous. Socrates claimed that the unexamined life is not worth living. And I would say with the danger of destroying everything that we are nearing the point where we might say that the unexamined scientific life is not liveable. Because as a result, no life is liveable. Let us therefore spend a few minutes examining weapons research today. Roughly half of government funds for research and development in leading Western countries is spent on problems connected with war. There are few data from the east than block but we can expect the situation to be similar there. Regarding dangers from nuclear war, of course you'll be aware of recent environmental studies which point to the possibility that nuclear war could not only destroy civilization but possibly all life on Earth. There's massive research on new kinds of weapons of all kinds and in the main all these new inventions make the world more dangerous rather than safer unfortunately. But since we are mainly biological scientists here, let us look for a few minutes at the problem of chemical and biological weapons. It's a sad fact that the first big use of chemical weapons which took place in World War I was enthusiastically prepared by a great scientist Fritz Haber who was awarded the Nobel Prize for Chemistry in 1919. And much work continued after the war because the Geneva Protocol of 1925, though it banned first use of chemical and biological weapons, did not ban research and development or stockpiling. Then in World War II studies on insecticides led to the development of nerve gases in Germany but these were not used. These gases cause irreversible inhibition of acetylcholine esterase and in the various forms successfully increased effective de-thality in the field compared to mustard gas by factors of four, fifteen, thirty after the war, one hundred and fifty times. And such very lethal substances are both difficult to manufacture and to store. But recently these difficulties have been overcome by the invention of binary weapons in which two non-toxic substances mix and combine to form nerve gas only after the weapon has been launched. And since 1980 the United States government has requested eight billion dollars for a program of manufacturing binary weapons but the Congress has been unwilling to support this program as only given a fraction of that money. If the manufacture of binary weapons can be stopped a new chemical arms race can be avoided and there is a real chance of disarmament in this area. And fortunately it really is quite encouraging that both the United States government and the Soviet governments they both show interest in new agreements to ban chemical weapons completely. Biological warfare is a strange perversion of medicine. Somebody called it public health in reverse. And in World War II a great body of scientists on both sides worked on this. 
The British got quite far in preparing to bomb German cities with anthrax. And the Japanese another example in their unit 731 killed thousands of prisoners of war in experiments on typhus, typhoid, anthrax, cholera, plague, salmonella, tetanus, botulism, bruselosis, gasgain, gangrene, smallpox, tichensephalitis, tuberculosis, tolamyria and glandis. I don't know how to pronounce all those. And after the war the Americans, the Americans accepted the data of the Japanese said we can't do these experiments on prisoners of war. And they gratefully released the Japanese workers without any punishment at all. Many countries continued to develop biological weapons. The extensive United States program on yellow fever, well what would Walter Reed have thought about that? I don't know. Fortunately, it is very fortunate after the 1972 biological weapons convention that most stocks of biological weapons were destroyed. But since 1981 the United States government has been at a big campaign suggesting that it should start stockpiling again because Soviet forces have been using, well one reason was Soviet forces have been using mycotoxins in Southeast Asia. But to say the least there is a strange confusion about these claims. One has to look at them very critically before we start another arms race there. Currently the United States Department of Defense, I'm talking about United States things because we know about them. We have an open society over there. We don't know what's going on in the Soviet Union. Currently the United States Department of Defense has 43 research projects on viruses, bacteria and toxins. Six of these projects are to clone antidotes against nerve gas and others are to produce vaccines. But the trouble is it's often difficult to be quite clear whether a project is purely defensive or may have an offensive aspect to it too. Recombinant DNA techniques raise many new possibilities. For example, might it be possible to create new diseases which would infect one ethnic group and not another? New diseases resistant to antibiotics and such like. But increasing use in chemical biological weapons research of the new techniques which are used widely in basic biological research today produces problems for the biological community as a whole. That United States physicists have had their freedom to publish curtailed when it seemed that their results might help the Soviet military. And this type of difficulty could increasingly impinge on biologists if a chemical biological warfare arms race were to build up. And I suggest that scientists should resist increasing weapons research not only because of possible war danger it may create or because of scientific idealism and moral revulsion, but also simply because they wish to preserve their freedom to publish and research. But the main argument to justify scientists doing weapons research is of course it must be done we must do this horrible work to defend human values and national security. This argument one must look at it has often been very successful powerful argument in the past. Let us look back in history we can see that you are playing on the fears that people have of evil. And if you play on the fears that people have of evil they have led civilized and moral people to commit what we now recognize were great crimes that, for example, the medieval inquisition. Convinced people morally upright intellectually sound people that it was necessary. If you couldn't persuade people from heresy it was necessary to torture them and burn them. 
High-minded ideals producing abomination and presumably the Japanese medical scientists in 731 while they were doing all their experiments on living prisoners of war. They must have felt that they were defending human ideals too. The best way for scientists to deal with the perversion of science and medicine in weapons research is to be very clear in their minds about the ideals of science. That these ideals are a special form of human ideals and we should make the fullest use of them as a creative force. If civilization is still with us in 100 years time I would predict that people will look back in horror and bewilderment at scientific weapons research today in much the same way that we now look back with horror and bewilderment. And bewilderment at the abominations committed by the Inquisition. Science has made enormous progress because of its spirit of open-minded inquiry. But it set limits on this open-mindedness and this threatens to destroy the world. That Roger Bacon, the Franciscan scientist predicted in the 12th century that science without a moral base would become the blindness of hell. I think that expresses what's happening to weapons research science today. Ladies and gentlemen, the ideals of science are not dead and what I say is let us extend these ideals so that science can reveal a more profound truth which expresses the unity, order and harmony not only in the material dimension but also in the moral dimension. And in that way we would be acting in the idealistic spirit of Alfred Nobel. We could then save the world from war and restore dignity and nobility to science. Thank you.
|
Maurice Wilkins first visited Lindau in 1984, 22 years after he had shared the Nobel Prize in Physiology or Medicine with Francis Crick and James Watson. The iron curtain that divided Europe still existed and with it the fear of a nuclear clash between the two super powers. Wilkins’ lecture included a plea against scientific weapons’ research and concluded the scientific program. This made much sense because of its interdisciplinary content, and perhaps also because it took place on June 28th, the 70th anniversary of the assassination of archduke Franz Ferdinand of Austria in Sarajevo, the event, which led directly to the First World War. “We must all be very grateful to Count Bernadotte and his associates in Lindau who have at these meetings helped to keep alive the idealistic spirit of Alfred Nobel”, Maurice Wilkins opens his lecture and reasons why he chooses to speak on the nobility of science. “By noble I mean morally elevated, magnificent and admirable”, he says, “but most of us would hesitate to apply the term noble in this sense to most scientific research today”. Science is subject to high commercial or military pressure, the public criticizes it “for bombs and pollution and other undesirable problems” and “as a result many scientists today are rather muted in their optimism”. This was very different a hundred years ago, Wilkins explains, in the decades before the Great War, when “there was an enormous confidence in science as a progressive social force”. Despite of today’s ambivalence towards science, however, “the veneration of science has another basis which we can continue to accept”, namely that it is “based on certain ideals and these ideals are valid even if in practice science by no means always lives up to those ideals”. In this regard, science is comparable to religion, whose practice also often fails to meet its ideals. In considering how scientists ideally should approach their work, Wilkins, who claims not to be a Christian himself, draws a further parallel between religion and science. If we take the idea of God equivalent to the scientific concept of a universal order in nature, he says, then there is a “similarity between the Quaker who sees God in every man and the scientist who has faith this order exists and without prejudice observes nature.” Both believers and scientists ideally should be in a state of non-possessive love. To underscore this notion, Wilkins quotes the poet Samuel Coleridge: “A scientist must love the object he studies, otherwise he cannot respond to its true nature.” While love is open without limits, however, science for good reasons traditionally sets limits by concentrating on certain objects. This has kept science free from unwanted interferences, yet narrowed its scope. But shouldn’t it be possible for scientists to adopt a more holistic approach and be narrow and broad at the same time? “I suggest we have little alternative but to bring this moral dimension into science”. If half of the government funds for research and development in the leading Western countries and probably also in the East are spent on problems connected with war, Wilkins says, then we have to face the possibility that applications of science could destroy civilization and science itself. 
“What’s the use of having all the fine science described at this meeting if science ends up by destroying itself?” Given the many inspiring accounts of the application of science in medicine presented at this Nobel Laureate Meeting, “it would be understandable if some of you preferred that the subject of war simply was not mentioned”, he tells his audience. “But here in Lindau where Alfred Nobel’s spirit lives on, we can hardly forget the great concern he had for preventing war. I feel it would be disrespectful to the memory of Nobel, if I were in any way to apologize for discussing the problem of war here.” Joachim Pietzsch
|
10.5446/55120 (DOI)
|
Ladies and gentlemen, Einstein is by all criteria the most distinguished physicist of this century. No physicist in this century has been accorded a greater acclaim. But it is an ironic comment that even though most histories of twentieth-century physics start with the pro forma statement that this century began with two great revolutions of thought, the general theory of relativity and the quantum theory, the general theory of relativity has not been a staple part of the education of a physicist, certainly not to the extent quantum theory has been. Perhaps on this account, a great deal of mythology has accreted around Einstein's name and the theory of relativity which he founded seventy years ago. And even great physicists are not exempt from making statements which, if not downright wrong, are at least misleading. May I quote, for example, a statement by Dirac made in 1979 on an occasion celebrating Einstein's hundredth birthday. This is what he said: When Einstein was working on building up this theory of gravitation, he was not trying to account for some results of observation, far from it. His entire procedure was to search for a beautiful theory, a theory of a type that nature would choose. He was guided only by considerations of the beauty of his equations. Now this contradicts statements made by Einstein himself on more than one occasion. Let me read what he said in 1922 in a lecture he gave titled How I Came to Discover the General Theory of Relativity. Here is Einstein's statement: I came to realize that all the natural laws except the law of gravity could be discussed within the framework of the special theory of relativity. I wanted to find out the reason for this, but I could not attain this goal easily. The most unsatisfactory point was the following. Although the relationship between inertia and energy was explicitly given by the special theory of relativity, the relationship between inertia and weight, or the energy of the gravitational field, was not clearly elucidated. I felt that this problem could not be resolved within the framework of the special theory of relativity. The breakthrough came suddenly one day. I was sitting on a chair in my patent office in Bern. Suddenly a thought struck me: if a man falls freely, he would not feel his weight. I was taken aback. This simple thought experiment made a deep impression on me. This led me to the theory of gravity. I continued my thought: a falling man is accelerated; then what he feels and judges is happening in the accelerated frame of reference. I decided to extend the theory of relativity to the reference frame with acceleration. I felt that in doing so, I could solve the problem of gravity at the same time. A falling man does not feel his weight because in his reference frame there is a new gravitational field which cancels the gravitational field due to the Earth. In the accelerated frame of reference, we need a new gravitational field. Perhaps it is not quite clear from what I have read precisely what he had in mind. But two things are clear. First, he was guided principally by the equality of the inertial and the gravitational mass, an empirical fact which has been very accurately determined and is in fact probably the most well-established experimental fact. The second point is that this equality of the inertial and the gravitational mass led him to formulate a principle which he states there very briefly and which has now come to be called the principle of equivalence. 
Let me try to explain more clearly what is involved in these statements I have just made. For the first point, in order to do that, I have to go back in time. In fact I have to go back 300 years to the time when Newton wrote the Principia; the fact that the publication of the Principia is 300 years old was celebrated last year in many places. Now Newton notices already, within the first few pages of the Principia, that the notions of mass and weight are two distinct concepts, based upon two different notions. The notion of mass follows from his second law of motion, which states that if you subject a body to a force, then it experiences an acceleration in such a way that the quantity which we call the mass of the body times the acceleration is equal to the force. Precisely, if you apply a force to one cubic centimeter of water and measure the acceleration which it experiences, and then you find that another piece of water, when subjected to the same force, experiences ten times the acceleration, then you conclude that the mass of the liquid you have used is one tenth of a cubic centimeter. In other words, the notion of mass is a consequence of his law of motion. It is a constant of proportionality in the relation that the force is equal to the mass times the acceleration. But the notion of weight comes in a different way. If you take a piece of matter and it is subject to the gravitational field, say, of the earth, then you find that the attraction which it experiences in a given gravitational field is proportional to what one calls the weight. For example, if you take a piece of liquid, say water, and you find that the earth attracts it by a force which you measure, and you find that another piece of the same matter experiences a gravitational attraction which is, say, ten times more, then you say the weight is ten times greater. In other words, the notion of weight and the notion of mass are derived from two entirely different sets of ideas, and Newton goes on to say that the two are the same, in fact, as he says, as he has found by experiments with pendula made accurately. The way he determined the equality of the inertial and the gravitational mass was simply to show that the period of a simple pendulum depends only on its length and not upon the weight or the mass of the body or the constitution of it. And he established the equality of the inertial and gravitational mass to a few parts in a thousand. A century later Bessel improved the accuracy to a few parts in several tens of thousands. Around the turn of the century Eötvös showed the equality to one part in ten to the eleven, and more recently the experiments of Dicke and Braginsky have shown that they are equal to within one part in ten to the thirteen. Now this is a very remarkable fact. The notions of mass and weight are fundamental in physics, and one equates them by a fact of experience, and this of course is basic to the Newtonian theory. Hermann Weyl called it an element of magic in the Newtonian theory, and one of the objects of Einstein's theory is to eliminate this magic. But the question of course is: you want to eliminate the magic, but how? And for this Einstein developed what one calls today the principle of equivalence. Let me illustrate his ideas here. Here is the famous experiment with an elevator, or a lift, which Einstein contemplated. Now the experiment is the following. 
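To make the distinction explicit, here is a minimal sketch of the two notions being contrasted, in standard notation (the symbols m_i, m_g, the field strength g and the pendulum formula are my own shorthand, not the lecturer's):

\[ F = m_{\mathrm i}\,a \quad\text{(inertial mass, from the second law)}, \qquad W = m_{\mathrm g}\,g \quad\text{(gravitational mass, from the attraction in a given field)}. \]

For a simple pendulum of length \(\ell\), the equation of motion \( m_{\mathrm i}\,\ell\,\ddot\theta = -\,m_{\mathrm g}\,g\sin\theta \) gives, for small oscillations, the period

\[ T = 2\pi\sqrt{\frac{m_{\mathrm i}}{m_{\mathrm g}}\cdot\frac{\ell}{g}}, \]

so the observed independence of the period from the mass and constitution of the bob is a direct test of \(m_{\mathrm i} = m_{\mathrm g}\), which is what Newton's pendulum experiments checked.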
Now here is a lift with a rocket booster, and let us imagine that this lift is taken to a region of space which is far from any other external body. If this elevator is accelerated by a value equal to the acceleration of gravity, then the observer would find that if he drops an apple or a ball, it will fall down towards the bottom with a certain acceleration. On the other hand, if the rockets are shut off and the rocket simply coasts, then if he releases the same body, it remains where it was. Now you perform the experiment in the same elevator shaft on the earth, and you find that if he releases the body, then it falls down to the ground in the same way as it did when the lift was accelerated and not subject to gravity. And now suppose this elevator is put in a shaft and falls freely towards the center of the earth; then when you release the body there, it remains exactly as it was. In other words, in this case the action of gravity and the action of acceleration are the same. On the other hand, you cannot conclude from this that the action of gravity and the action of uniform acceleration are always the same. Let us now perform the same experiment in which the observer has two bodies instead of just one. If the rocket is accelerated, then you will find that both of them fall along parallel lines. And again, if the acceleration is stopped, then the two bodies will remain at the same point. Now if you go to the earth and similarly you have these two bodies, then the two will fall, but not exactly in parallel lines if the curvature of the earth is taken into account: the two lines along which they drop will intersect at the center of the earth. Now if the same experiment is performed with a lift which is falling freely, then as the lift approaches the center of the earth, the two objects will come closer together. And this is how Einstein showed the equivalence, locally, of a uniform gravitational field with a uniform acceleration, but showed nevertheless that if the gravitational field is non-uniform, then you can no longer make that equivalence. Now, in order to show how from this point Einstein derived his principle of equivalence in a form in which he could use it to found a theory of gravity, I should make a little calculation. Now everyone knows that if you describe the equations of motion in, say, Cartesian coordinates, then the inertial mass times the acceleration is given by the gravitational mass times the gradient of the gravitational potential; there are similar equations for x and y. But we want to rewrite this equation in a coordinate system which is not x, y and z but general curvilinear coordinates; that is, instead of x, y and z you change to coordinates q1, q2, q3, and you can associate with the general curvilinear system a metric in the following way: the distance between two neighbouring points in the Cartesian framework is dx squared plus dy squared plus dz squared. On the other hand, if you write the corresponding distance in general curvilinear coordinates, it will be a certain two-index quantity h alpha beta, whose components are functions of the coordinates, times dq alpha dq beta; for example, in spherical polar coordinates it will be dr squared plus r squared d theta squared plus r squared sine squared theta d phi squared, but more generally that will be the kind of expression you will have. 
Now let us suppose you write this equation down and ask what the gravitational equations become. Then you find that the inertial mass times q double dot alpha — that is, the acceleration in the coordinate alpha, suitably contracted — is equal to minus the inertial mass times a certain quantity Gamma alpha beta gamma — we call these the Christoffel symbols, but it does not matter what they are; they are functions of the coordinates, functions of the geometry — times q dot beta q dot gamma, and then minus the gravitational mass times the gradient of the potential. You see, the main point of this equation is to show that when you write it down in general curvilinear coordinates, the acceleration consists of two terms: a term which is geometrical in origin, which has the inertial mass as a coefficient, and a term from the gravitational field. And if you accept the equality between the inertial mass and the gravitational mass, then the geometrical part of the acceleration and the gravitational part are of the same nature, and this is Einstein's remark: he said, why make this distinction? Why not simply say that all acceleration is metrical in origin? And that is the starting point of his work. He wanted to abolish the distinction between the geometrical part of the acceleration and the gravitational part by saying that all acceleration is metrical in origin. Einstein's conclusion that, in the context of gravity, all accelerations are metrical in origin is as staggering in its own way as Rutherford's conclusion when Geiger and Marsden first showed him the results of the experiments on the large-angle scattering of alpha rays, and Rutherford's remark was that it was as though you had fired a 15-inch shell at a piece of tissue paper and it had bounced back and hit you. In the case of Rutherford, he was able to derive his law of scattering overnight, but it took Einstein many years, in fact almost 10 years, to obtain his final field equations. The transition from the statement that all acceleration is metrical in origin to the equations of the field in terms of the Riemann tensor is a giant leap, and the fact that it took Einstein three or four years to make the transition is understandable. Indeed, it is astonishing that he made the transition at all. Of course one can claim that mathematical insight was needed to go from his statement about the metrical origin of gravitational forces to formulating those ideas in terms of Riemannian geometry, but Einstein was not particularly well disposed to mathematical treatments, and particularly to the geometrical way of thinking, in his earlier years. For example, a few years after Einstein had formulated his special theory of relativity, Minkowski described special relativity in terms of what we now call Minkowski geometry, in which we associate a metric in space-time which is dt squared minus dx squared minus dy squared minus dz squared, and he showed that rotations in a space-time with this metric are equivalent to special relativity. 
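A possible reconstruction of the equation just described verbally, using the notation of the passage (q^alpha the curvilinear coordinates, h the metric, Gamma the Christoffel symbols, Phi the gravitational potential); the precise index placement is my own guess rather than a transcription of the blackboard:

\[ ds^2 = dx^2 + dy^2 + dz^2 = h_{\alpha\beta}\,dq^{\alpha}dq^{\beta}, \]
\[ m_{\mathrm i}\,\ddot q^{\,\alpha} \;=\; -\,m_{\mathrm i}\,\Gamma^{\alpha}_{\;\beta\gamma}\,\dot q^{\,\beta}\dot q^{\,\gamma} \;-\; m_{\mathrm g}\,h^{\alpha\beta}\,\frac{\partial\Phi}{\partial q^{\beta}}. \]

The first term on the right is the geometrical part of the acceleration, with the inertial mass as its coefficient; the second is the gravitational part, with the gravitational mass as its coefficient. Setting \(m_{\mathrm i} = m_{\mathrm g}\) is what allows Einstein to treat all acceleration as metrical in origin.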
Einstein's remark on Minkowski's paper was, first, that, well, we physicists show how to formulate the laws of physics and mathematicians will come along and say how much better they can do it; and indeed he made the remark that Minkowski's work was "überflüssige Gelehrsamkeit" — unnecessary learnedness. But it is only in 1911 or 1912 that he realized the importance of this geometrical way of thinking, and particularly with the aid and assistance of his friend Marcel Grossmann he learned sufficient differential geometry to come to his triumphant conclusion with regard to his field equations in 1915. But even at that time Einstein's familiarity with Riemannian geometry was not sufficiently adequate. He did not realize that the general covariance of his theory required that the field equations must leave four arbitrary functions free. Because of his misunderstanding here, he first formulated his field equations by equating the Ricci tensor with the energy-momentum tensor; but then he realized that the energy-momentum tensor must have its covariant divergence zero, whereas the covariant divergence of the Ricci tensor is not zero, and he had to modify it to introduce what is now called the Einstein tensor. Now I do not wish to go into the details more, but only to emphasize that the principal motive of the theory was physical insight, and it was the strength of his physical insight that led him to the beauty of the formulation of the field equations in terms of Riemannian geometry. Now I want to turn around and ask: why is it we believe in the general theory of relativity? Of course there has been a great deal of effort during the past two decades to confirm the predictions of general relativity, but these predictions relate to very, very small departures from the predictions of the Newtonian theory, and in no case more than a few parts in a million. The confirmation comes from the deflection of light as light traverses a gravitational field and the consequent time delay, the precession of the perihelion of Mercury, and the change in period of close double stars such as the binary pulsar due to the emission of gravitational radiation; but in no instance is the effect predicted more than a few parts in a million departure from Newtonian theory, and in all instances it is no more than verifying the values of one or two or three parameters in an expansion of the equations of general relativity in what one calls the post-Newtonian approximation. But one does not believe in a theory of which only the approximations have been confirmed. For example, if you take the Dirac theory of the electron, and the only confirmation you had was the fine structure of ionized helium in Paschen's experiments, our conviction would not have been as great; and suppose there had been no possibility in the laboratory of obtaining energies of a million electron volts, then the real verification of Dirac's ideas — the prediction of antimatter, the creation of electron-positron pairs — would not have been possible, and our conviction in the theory would not have been as great. But it must be stated that in the realm of general relativity, no phenomenon which requires the full nonlinear aspect of general relativity has been confirmed. Why then do we believe in it? I think the belief in general relativity comes far more from its internal consistency and from the fact that whenever general relativity has an interface with other parts of physics, it does not contradict any of them. Let me illustrate these two things in the following way. 
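For reference, the Minkowski line element referred to above, written out in units with c = 1 (a standard form, not copied from the lecture):

\[ ds^2 = dt^2 - dx^2 - dy^2 - dz^2, \]

and the "rotations" of space-time preserving it are the Lorentz transformations, for example the boost \( t' = \gamma\,(t - v\,x),\ x' = \gamma\,(x - v\,t) \) with \( \gamma = (1-v^2)^{-1/2} \), which is exactly the content of special relativity.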
We all know that the equations of physics must be causal. Essentially what it means is that if you make a disturbance at one point, the disturbance cannot be felt at another point before the time which light will take to go from the one point to the other. Technically, one says that the equations of physics must allow an initial value formulation; that is to say, you give the initial data on a spacelike surface, and you show that the only part of the space-time in which the future can be predicted is that which is determined by sending out light rays from the boundary of the space-time region to the point. In other words, suppose you have a spacelike slice; then you send a light ray here and a light ray here, and it is in that region that the future is defined. Now when Einstein formulated the general theory of relativity, he does not seem to have been concerned whether his equations allowed an initial value formulation, and in fact that the initial value formulation is possible in general relativity, in spite of the non-linearity of the equations, was proved only in the early forties, by Lichnerowicz in France. So even though, in formulating the general theory of relativity, the requirement that it satisfy the laws of causality was not included, in fact it was consistent with it. Or let us take the notion of energy. In physics the notion of energy is of course central: we define it locally, and it is globally conserved. In general relativity, for a variety of reasons I cannot go into, you cannot define a local energy. On the other hand, you should expect on physical grounds that if you have isolated matter, even if it radiates energy, then globally you ought to be able to define a quantity which you could call the energy of the system, and that if the energy varies, it can only be because gravitational waves cross the boundary at a sufficiently large distance. And that is the second point. Of course the energy of a gravitating system must include the potential energy of the field itself, but the potential energy in the Newtonian theory has no lower bound — by bringing two points sufficiently close together you can have an infinite negative energy — but in general relativity you must expect that there is a lower bound to the energy of any gravitating system, and if you take a reference with this lower bound as the origin for measuring the energy, then the energy must always be positive. In other words, if general relativity is to be consistent with other laws of physics, you ought to be able to define for an isolated system a global meaning for its energy, and you must also be able to show that the energy is positive. But actually this was the so-called positive energy conjecture for more than 16 years, and only a few years ago was it proved rigorously, by Edward Witten and by Yau. Now, in other words, even though Einstein formulated the theory from very simple considerations — like all accelerations must be metrical in origin — and put it in the mathematical framework of Riemannian geometry, it nevertheless is consistent in a way in which its originator could not have contemplated. But what is even more remarkable is that general relativity does have interfaces with other branches of physics. 
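The global statement alluded to here is what is now called the positive energy (or positive mass) theorem; in its modern form it reads roughly as follows (a standard statement, not quoted from the lecture):

\[ E_{\mathrm{ADM}} \;\ge\; \big|\vec P_{\mathrm{ADM}}\big| \;\ge\; 0 \]

for every asymptotically flat initial data set satisfying the dominant energy condition, with equality only for data sets in Minkowski space; the proofs referred to are those of Schoen and Yau (1979–81) and of Witten (1981).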
I cannot go into the details, but one can show that if you take a black hole and have Dirac waves reflected and scattered by the black hole, then there are certain requirements on the nature of the scattering which the quantum theory imposes; but even though, in formulating this problem in general relativity, no aspect of quantum theory is included, the results one gets are entirely consistent with the requirements of the quantum theory. In exactly the same way, general relativity has interfaces with thermodynamics, and it is possible to introduce the notion of entropy, for example in the context of what one generally calls Hawking radiation. Now certainly thermodynamics was not incorporated in formulating general relativity, but one finds that when you need to include concepts from other branches of physics among the consequences of general relativity, then all these consequences do not contradict other parts of physics. And it is this consistency with physical requirements, this lack of contradiction with other branches of physics, which was not contemplated in its founding, that gives one confidence in the theory. Well, I am afraid I do not have too much time to go into the other aspect of my talk, namely why the general theory of relativity is an excellent theory, but let me just make one comment. If you take any new physical theory, then it is characteristic of a good physical theory that it isolates a physical problem which incorporates the essential features of that theory and for which the theory gives an exact solution. For the Newtonian theory of gravitation you have the solution to the Kepler problem; for quantum mechanics, relativistic or non-relativistic, you have the prediction of the energy levels of the hydrogen atom; and in the case of the Dirac theory, I suppose, the Klein–Nishina formula and pair production. Now in the case of the general theory of relativity you can ask: is there a problem which incorporates the basic concepts of general relativity in its purest form? In its purest form, the general theory of relativity is a theory of space and time. Now a black hole is an object whose construction is based only on the notions of space and time. A black hole is an object which divides the three-dimensional space into two parts, an interior part and an exterior part, bounded by a certain surface which one calls a horizon, and the reason for calling it the horizon is that no person, no observer in the interior of the horizon can communicate with the space outside. So a black hole is defined as a solution of Einstein's vacuum equations which has a horizon which is convex, and which is asymptotically flat in the sense that the space-time is Minkowskian at sufficiently large distances. It is a remarkable fact that these two simple requirements provide, on the basis of general relativity, a unique solution to the problem — a solution which has just two parameters, the mass and the angular momentum. This is the solution of Kerr, discovered in 1962. The point is that if you ask what a black hole solution consistent with general relativity is, you find that there is only one simple solution, with two parameters, and all black holes which occur in nature must belong to it. And one can say the following: we see macroscopic objects all around us, and if you want to understand them, it depends upon a variety of physical theories with a variety of approximations, and you understand them approximately. 
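For completeness, the two-parameter Kerr solution mentioned here can be written out in Boyer–Lindquist coordinates, with G = c = 1 (this explicit form is standard textbook material and is not given in the lecture itself):

\[ ds^2 = -\Big(1-\tfrac{2Mr}{\Sigma}\Big)dt^2 - \tfrac{4Mar\sin^2\theta}{\Sigma}\,dt\,d\phi + \tfrac{\Sigma}{\Delta}\,dr^2 + \Sigma\,d\theta^2 + \Big(r^2+a^2+\tfrac{2Ma^2r\sin^2\theta}{\Sigma}\Big)\sin^2\theta\,d\phi^2, \]
\[ \Sigma = r^2 + a^2\cos^2\theta, \qquad \Delta = r^2 - 2Mr + a^2, \]

with mass \(M\) and angular momentum \(J = Ma\); the horizon sits at the larger root of \(\Delta = 0\), namely \(r_+ = M + \sqrt{M^2 - a^2}\).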
There is no example in macroscopic physics of an object which is described exactly and with only two parameters. In other words, one could say that almost by definition the black holes are the most perfect objects in the universe, because their construction requires only the notions of space and time. It is not vulgarized by any other part of the physics with which we are mostly dealing, and one can go on and point out the exceptional mathematical perfection of the theory of black holes. Einstein, when he wrote the last of his papers announcing his field equations, stated that scarcely anyone who understands my theory can escape its magic. For one practitioner at least, the magic of the general theory of relativity is in its harmonious mathematical character and the harmonious structure of its consequences. Thank you.
|
It is an old thruth that when scientists get older their interest in the history of science and culture intensifies. When the astrophysicist Subrahmanyan Chandrasekhar gave his first lecture at the Lindau Meetings in 1988, its theme was Einstein’s theory of general relativity from 1916. When six years later, Chandrasekhar returned to give his second and last lecture, the title was Newton and Michelangelo, i.e. something out of the 16th and 17th century! His lecture about the general theory of relativity first gives a pedagogical account of the way that the young patent clerk Albert Einstein first realized the need to enlarge the special theory of relativity. As I remember it from my time at the Nobel Museum, where in 2005 we produced an exhibition about Einstein’s Nobel Prize, this insight came when he in 1906 was asked to write a review article on the special theory. He then saw that all physical laws except gravitation could be included in the special theory. When analyzing the force of gravity, he arrived at his famous principle of equivalence, that gravitation is just a form of acceleration. So, as Chandrasekhar argues, the physical insight came early, but it then took 10 years of work to find the field equations. One particular reason that it took so long was that Einstein, with his tremendous physical insight, was not very good at higher mathematics. Several times he was led astray and it took a long time before he understood how the general covariance needed should be expressed. In the second part of his lecture, Chandrasekhar discusses why physicists today believe in the theory of general relativity. From the historical point of view, this acceptance of the new theory came from the classical observations: The perihelion motion of Mercury and the bending of light close to the sun. But these observations only test small effects in the post-Newtonian approximation. Chandrasekhar argues that the belief in the general theory of relativity comes more from its internal consistency and the fact that it does not contradict other physical theories. He also stresses the fact that there are exactly solvable problems as, e.g., black holes, ”the hydrogen atoms of general relativity”. Today, with the observations of the accelerated expansion of the Universe, we are in the situation that the theory of general relativity is again tested and this time on the most grand scale conceivable! Anders Bárány
|
10.5446/55031 (DOI)
|
Okay, so thank you very much for the invitation. I'm sorry that I'm giving this talk online as well; at least I'm not alone here. So I have to apologize that there won't be — I mean, in the talk itself there will be a lot of sort of representation theory, geometric representation theory, and actually just geometry, but there won't be any kind of enumerative geometry. Although what I'm going to talk about is supposedly very much related to at least some other aspects which are discussed in this school. For instance, it's all motivated by some work of physicists, in particular some papers of Witten and the paper of Mikhaylov and Witten, which discusses Khovanov homology a lot. So there should be some connection with Khovanov homology, but I absolutely don't understand what it is. So okay, so let me briefly explain what the plan of the talk is. So for about half of the talk, I'll be reviewing some known results. I'm sorry. So I'm going to start with a review of some very basic things in geometric representation theory, a review of the geometric Satake equivalence. Then I'm going to discuss another equivalence which is kind of similar to geometric Satake, and which is again, I think, pretty well known to people working in geometric representation theory, but maybe less well known to other people. And this is what's called the FLE, which is an abbreviation for fundamental local equivalence. This is a terminology that Dennis Gaitsgory is using. And I'll maybe comment on the terminology when we get to it. And then we'll discuss the Gaiotto conjectures, which should be thought of as some analogues of the fundamental local equivalence. And then, well, if there is time left — which, you know, I'm not sure about — then I'll talk about ideas of proofs. And this is recent joint work with Finkelberg and Travkin, which was just posted on the arXiv yesterday. And so I should also say that I have given a, in some sense, very similar talk also at the IHES, I think about two and a half years ago. But at that point everything was just 100% conjectural, and now we actually have a lot of theorems. So I'll try to sort of emphasize that. But again, I'm going to begin with reviewing some known results. And another thing I want to emphasize before I proceed is that, well, in the end of the day I'm going to get to these Gaiotto conjectures, and then maybe we'll discuss the proofs of some special cases of them. But the point is that for people working in geometric representation theory, these conjectures are themselves sort of quite strange and, I would say, unexpected. And I think that mathematicians would never be able to guess those conjectures, and physicists somehow can derive them from some kind of string theory calculations. And I'm absolutely unable to follow those string theory calculations. But somehow it's, I think, pretty remarkable that by using these kinds of very mysterious string theory calculations, physicists can produce some conjectures which are kind of absolutely mathematical and also somehow very reminiscent of some other things in geometric representation theory. But again, mathematicians somehow never did anything like that. All right, so this was kind of a preview. So now let me start implementing this plan. So first of all, what is the geometric Satake equivalence? Well, so we work over an algebraically closed field; so we consider everything over C. 
And so we fix G, which in the end of the day is going to be GL(n), but for now it's going to be just any connected reductive algebraic group over C. The basic object which will appear on the geometric side for us is the affine Grassmannian of G. The affine Grassmannian of G has, I guess, been discussed in Joel's talks, but let me still fix the notation. I'm going to denote by K the field of Laurent power series, and inside it there is the ring O, the ring of Taylor power series. Then we consider the affine Grassmannian of G, which is the quotient G(K)/G(O). This is an infinite-dimensional object, but it's a pretty nice infinite-dimensional object: it is a union of finite-dimensional projective varieties. In particular, the group G(K) acts on it on the left, in particular G(O) acts, and the orbits of G(O) are finite-dimensional. So we can consider the category of G(O)-equivariant perverse sheaves on the affine Grassmannian. This is some C-linear abelian category, and it turns out, in a natural way, to be a tensor category, in fact a symmetric monoidal category. You can write it as perverse sheaves on the double quotient G(O)\G(K)/G(O) — again, I think Joel more or less discussed something very close to that. You can use a convolution product here to define the tensor structure, and it turns out — this is not at all obvious — that it is symmetric, there is a duality, and so on. And the basic theorem here, which is what is called the geometric Satake equivalence, is the following: this category, as a tensor category, is equivalent to the category of finite-dimensional representations of the group G-check, where G-check is the Langlands dual group. This is the group whose root datum is dual to that of G. But pretty soon we're going to switch to the case when G is GL(n), so let me just remind you that if G is the group GL(n), then the dual group is also GL(n) — GL(n) is self-dual. Okay. Let me first mention that this is a very good equivalence: it's a starting point for a lot of things in geometric representation theory and in algebraic geometry. For example, it is very important for Joel's course, but it is also really the starting point for what's called the geometric Langlands correspondence. In fact, it is a categorification of the classical Satake isomorphism, which is in some sense the starting point for the usual Langlands correspondence. So this is a very good equivalence. But instead of talking about why it is good, let me talk about why it is bad. I'm going to name two disadvantages: a kind of categorical disadvantage, and a representation-theoretic one. The first disadvantage is that it does not work as stated on the derived level, for derived categories. What I mean is that you can consider the G(O)-equivariant derived category of the affine Grassmannian.
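Before elaborating on this derived-level problem, let me collect the objects introduced so far in formulas. This is just a summary of what was said, in standard notation.

% The affine Grassmannian and the (abelian) geometric Satake equivalence,
% as stated above; nothing here goes beyond what was said in the talk.
K = \mathbb{C}((t)), \qquad \mathcal{O} = \mathbb{C}[[t]], \qquad
\mathrm{Gr}_G = G(K)/G(\mathcal{O}).

% The Satake category: G(O)-equivariant perverse sheaves with the convolution
% tensor structure, equivalently sheaves on the double quotient:
\mathrm{Perv}_{G(\mathcal{O})}(\mathrm{Gr}_G)
  \;=\; \mathrm{Perv}\big(G(\mathcal{O})\backslash G(K)/G(\mathcal{O})\big),

% Geometric Satake equivalence (as tensor categories):
\big(\mathrm{Perv}_{G(\mathcal{O})}(\mathrm{Gr}_G),\ \ast\big)
  \;\simeq\; \mathrm{Rep}^{\mathrm{fd}}(G^{\vee}).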
So, whenever I have a group acting on a variety, with everything essentially finite-dimensional, you can talk about the corresponding equivariant derived category, and that thing is absolutely not equivalent to the derived category of finite-dimensional representations of G-check. Actually, when I say representations of G-check, I mean representations of G-check as an algebraic group, so just finite-dimensional algebraic representations. And G-check is a reductive group, so this category of representations is semisimple, and its derived category is pretty trivial; but the derived category of G(O)-equivariant sheaves on the affine Grassmannian is quite nontrivial. So these two are not equivalent. The fact is that it is known what this thing is equivalent to, and this is called the derived Satake equivalence, which I actually almost won't use — I won't formally use it — but for the sake of completeness let me mention what it is. So this thing is equivalent to the following. Well, what I'm going to say is going to be a slight lie: let me put the bounded derived category here, and I need to find the right finiteness conditions for what I'm going to say to actually be true. Here I should take the Lie algebra of the Langlands dual group, put it in homological degree negative two, and regard this as a DG algebra with trivial differential. Then I should consider the derived category of modules over this DG algebra — whenever you have a differential graded algebra, you can consider differential graded modules over it and take the corresponding derived category — and I also want to consider, sorry, finitely generated modules. And here I want to consider G-check-equivariant ones, which I'll just denote like this; what it really means is that I consider modules on which the group G-check also acts, and the action is compatible with the adjoint action on the algebra. So this thing is not the derived category of any abelian category, essentially, and it's definitely not the derived category of something semisimple. So the derived Satake equivalence is much more complicated, although it is also extremely important for many purposes — in fact, for the story of Coulomb branches it is extremely important. But that's not what I want to talk about. So again, this is a drawback: we have this equivalence between abelian categories, and I would like to have a slightly different setup which would extend to derived categories as well. That's maybe a minor drawback. The more important drawback for me is that geometric Satake does not extend literally to the quantum case. Well, okay, this is not a mathematical statement. Let me write it like this: it does not extend to Rep_q of G-check, which means representations of the corresponding quantum group. Again, "does not extend" means that there is no well-known natural way to extend it. And so now I am passing to number two in my plan: the fundamental local equivalence. The fundamental local equivalence is a different equivalence of similar nature, but it will be a setup in which both of these problems are cured.
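Before moving on to the FLE, let me also record the derived statement from a moment ago in symbols. This is only a sketch: I am reading the speaker's "Lie algebra of the dual group placed in degree minus two, viewed as a DG algebra with zero differential" as the symmetric algebra Sym(g-check[-2]), and I am suppressing the finiteness conditions and the "slight lie" he mentions.

% Derived Satake, schematically (with the caveats above):
D^b_{G(\mathcal{O})}(\mathrm{Gr}_G)
  \;\simeq\;
  D^{G^{\vee}}_{\mathrm{f.g.}}\!\big(\mathrm{Sym}(\mathfrak{g}^{\vee}[-2])\big),

% i.e. G^\vee-equivariant, finitely generated DG modules over the graded algebra
% generated by \mathfrak{g}^\vee placed in (homological) degree -2, with zero
% differential, the G^\vee-action being compatible with the adjoint action.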
And so, jumping ahead, I should say that the Gaiotto conjectures will be a set of conjectures which extend this fundamental local equivalence; the fundamental local equivalence, for GL(n), will be a special case of them. Okay, so now, any questions so far? Okay — sorry, what was q? What does q stand for, what is q? Well, q — this is representations of a quantum group; q is typically for quantum groups. It's a number, although actually, if you want to do things canonically, it's not really a number, it's some C*-valued invariant form on the coweight lattice. But if G is, say, a simple group, then q is just a nonzero number. So this is just the category of representations of the corresponding quantum group. This category of representations of the quantum group is a deformation of the category of representations of G-check, a deformation in the world of what's called braided monoidal categories. So it's no longer symmetric; it's a tensor category, but it's braided: if you consider V tensor W, it is naturally isomorphic to W tensor V, but the square of this isomorphism is not one. So this is the natural thing. And so we'd like — here we have some geometric realization of the tensor category of G-check representations — we'd like to extend it to quantum groups, and with geometric Satake as stated we can't. Let me explain how one can do it. This is also very important for geometric Langlands, although I'm not going to say much about Langlands in this talk. The story is the following; it's going to look slightly different. So let U inside G be a maximal unipotent subgroup — say, if G is GL(n), let me just take the upper triangular matrices with ones on the diagonal. And let also chi, from U to the additive group, be a generic additive character. For example, if G is GL(n), you can take U to be these matrices with ones on the diagonal, zeros below the diagonal and anything above the diagonal, and then the typical choice of chi is the sum of the entries a_{i,i+1}. It's a homomorphism. You can also take any linear combination of those entries, but with nonzero coefficients — it's important that every coefficient is nonzero — and then the character is generic. So choose a generic character. Then we can consider U(K), and we can consider the character — let me call it chi-hat — from U(K) also to the additive group, given by the formula that chi-hat of u(t) is equal to the residue at t equal to zero of chi of u(t), something like this. And so now we can consider the following guys — first without q, and then we're going to introduce q. We consider the Whittaker category of the affine Grassmannian. This is by definition — let me right away put the derived category — the category of (U(K), chi-hat)-equivariant sheaves on the affine Grassmannian. Now, let me note that even to define this you have to work a little bit, because the difference between this and what we did before with geometric Satake is that here the orbits of the group U(K) are not finite-dimensional; they are actually infinite-dimensional.
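Before the remarks on these infinite-dimensional issues, let me put the formulas from the GL(n) example in one place. This is just what was written on the board, in symbols.

% Maximal unipotent subgroup, generic character, and its loop version:
U = \{\, u \in GL(n) : u_{ii}=1,\ u_{ij}=0 \text{ for } i>j \,\},
\qquad
\chi(u) = \sum_{i=1}^{n-1} u_{i,i+1},
% (any linear combination with all coefficients nonzero is also "generic")

\widehat{\chi} : U(K) \to \mathbb{G}_a,
\qquad
\widehat{\chi}\big(u(t)\big) = \mathrm{Res}_{t=0}\, \chi\big(u(t)\big)\,dt,
% i.e. the coefficient of t^{-1} in \chi(u(t)),

% and the Whittaker category is the equivariant derived category
\mathrm{Whit}(\mathrm{Gr}_G) \;=\; D_{(U(K),\,\widehat{\chi})}(\mathrm{Gr}_G).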
So somehow, so this setup is kind of more infinite dimensional than before, but sort of modern science knows how to handle such situations very efficiently. So somehow there's actually a rigorous definition. Moreover, what is actually much less of this is that this category is also tens of times. And here, I mean, here the problem is that there's no, I mean, but the tens of problems is not going to be given by convolution. There's no kind of convolution here. So this is a, yeah, in terms of category, but here you will actually have to believe me how to define. Well, let me just say that for people who know this word, the tensor structure comes from fusion rather than convolution. Yeah, I should also say that when I say derived caliber, so to hear, Mel, actually, when you have an important group of the native character acting on some space, then when you want to consider sheaves of current respect, this character, then either actually instead of sheaves, you have to use D modules, or you have to work with instead of complex numbers, you have to work over fine field because the point is that usually, I mean, the point is that this additive character defines some, you want, you want this additive character to define some one dimensional local system on your group, and with some multiplicative properties. And so in the world of D modules, we can actually do this if you work over complex numbers. So let me do it for you itself. So if you have chi from U to G, A, then you can consider D module, which is Hullbecker's spectrum of chi, sort of exponential D module on the affine line on the additive group, and that's going to be one dimensional local system on U, which has some factorization property. And so therefore, you can talk about sheaves of current respect to, or D modules rather current respect to this thing, or another sort of equivalent way is to work over fine field instead of complex numbers and work with the lattice sheaves, and then you can use the art and Shryer sheave, the pullback sorry, of the art and Shryer sheave respect to chi. And then, so this is about what is meant by this, by this category of sheaves equivalent or D modules equivalent with respect to this group. So I mean, most people are used to a notion of sheaves equivalent to the action of a group. So the claim is that there is a kind of enhancement of that. So if you have either additive or multiplicative character of the group, you can actually talk about sheaves equivalent to the spectrum of this group with a character. And if this is an additive character, then either you should do it in the language of D modules, or you should work with the lattice sheaves or fine sheaves. So let me not worry about this. So and the tensor structure, the tensor structure is defined in such a way that we actually work defined as one and this is related to the formal disk. And in order to define tensor structuring, you need to use an actual outbreak curve to form a disk and let some points move over that curve and let them collide and so on. So this is kind of typical sort of fusion, which I don't have time to explain. But again, you will have to just believe me that there is some tensor structure here. But it's kind of defined, I can actually define it for the aggressive for geophonic variants use as well. And for the Ibedian category, I actually get the same tensor structure, but on the derive category, we actually get a different structure. So, but okay, let me not go into this. 
And so the theorem is that the Whittaker category of the Grassmannian is equivalent to — well, I'm using the derived category already — just the derived category of representations of G-check. So in some sense this Whittaker category is much simpler on the derived level than the Satake category, because even on the derived level it is the same as representations of G-check. So now, where is the quantum group here? The claim is that you can actually upgrade this to the quantum case. Let me talk about this. Any questions before that? All right. So our way to quantum groups is — again, we're going to have an equivalence of braided monoidal categories, and it's easy to say what we're going to have on the right: we fix q in C*, and we consider the derived category of representations of the quantum group. Maybe before I go to the quantum case, let me say that this is stated on the derived level, but it also induces the corresponding abelian equivalence, in the sense that this equivalence between derived categories is compatible with the natural t-structures on both sides. So if you consider perverse sheaves on the left and just representations on the right, you get an equivalence between these abelian categories. And the same is going to be true in the q-case. So here, let me write the left-hand side like this, and let me explain what I mean by it. This is the category of (U(K), chi-hat)-equivariant sheaves on some determinant line bundle L over the Grassmannian. Here we have some choices which I don't want to discuss; if G is simple, then there is essentially just one. I mean, the question is what q is: if I think about q as a number, then I can really do that when G is simple; otherwise I may have several parameters q. In fact, if you want to think about what q is canonically, then there is a choice of some invariant bilinear form on the Lie algebra, and these determinant line bundles are also parametrized by such bilinear forms. But if G is a simple group, then you can think of q as just a number, and there is some canonical determinant line bundle. Well, actually I have to remove the zero section: when I say on the line bundle, I mean on the total space of this line bundle with the zero section removed, and I consider sheaves which have monodromy q along the fibers. So you have this L minus the zero section over the Grassmannian — for a simple group there is a canonical choice of line bundle, otherwise there are some choices — and every fiber here is C*. And we consider q-twisted sheaves. You can do this for any variety with a line bundle: you consider q-twisted sheaves with respect to it, which means sheaves not on your variety, but on the total space of the line bundle without the zero section, which have monodromy q — on every fiber such a sheaf is going to be a local system, and the monodromy is going to be q.
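Schematically, then, the equivalence being formulated here — the fundamental local equivalence — reads as follows. This is only my shorthand for what is on the board, with Whit_q denoting the (U(K), chi-hat)-equivariant q-monodromic sheaves just described on the punctured determinant line bundle.

% Non-quantum statement (already on the derived level):
\mathrm{Whit}(\mathrm{Gr}_G) \;\simeq\; D^b\big(\mathrm{Rep}(G^{\vee})\big),

% FLE: the q-twisted version, for a suitable q \in \mathbb{C}^*,
% using sheaves on (determinant line bundle) \ (zero section) with
% monodromy q along the fibers:
\mathrm{Whit}_q(\mathrm{Gr}_G)
  \;:=\; D_{(U(K),\,\widehat{\chi})}\big(L \setminus \{0\}\big)_q
  \;\simeq\; D^b\big(\mathrm{Rep}_q(G^{\vee})\big),
% compatibly with the t-structures, hence also as an equivalence of
% the abelian categories on both sides.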
This twisting is again an extremely general procedure, used in geometric representation theory all the time for many purposes. So then we have this equivalence, which again is a braided monoidal equivalence. Now, if you haven't seen this before and you are following what I'm talking about, you might ask: why couldn't I do this for the original Satake equivalence? I could have tried to take G(O)-equivariant perverse sheaves, again on the total space of this line bundle, with monodromy q. And the claim is that that would not work. For instance, if q is not a root of unity, it turns out that there will be essentially no G(O)-equivariant sheaves with monodromy q — the category will essentially just become vector spaces. So it happens that if you consider this q-twisting with G(O)-equivariance, the category becomes much smaller: for q a root of unity it just becomes smaller, and for generic q it essentially collapses. But for the Whittaker category that is not the case, and we get this equivalence. Okay. Any questions? So this was the review of the fundamental local equivalence — this FLE. The FLE was a conjecture of Jacob Lurie; it was proved by Gaitsgory for generic q, and then in general by Gaitsgory together with Lysenko, and independently by others. All right. Okay, so I'm more or less exactly at half time. This was the review of some known results, so now, unless there are any questions, let me go to the Gaiotto conjectures. Okay. A question: could you go back to that statement of the equivalence of categories, the last one that you had? Which one, this one? Are these sheaves coherent sheaves, or which sheaves — perverse sheaves? No, no — coherent sheaves don't appear here anywhere. Sheaf means constructible sheaf. Or actually, as I said, when you work with this Whittaker category you should either work with constructible sheaves over a finite field or with D-modules, because the character local system one needs only exists in those settings. But again, this is a mild, not very important point. And even before that — in this equivalence, in this definition — the same thing happens. And in this theorem all the sheaves are constructible. Actually, if you go back to the Satake equivalence, then again here on the left, when you work with perverse sheaves, it means that you work with constructible sheaves. So rather, in the more general setups related to geometric Langlands, the typical situation is that you have a constructible side and a coherent side. And here, if you look for instance at this equivalence, then the left-hand side is the constructible side, and the right-hand side, representations of G-check, I should think about as the coherent side.
This is because it is actually the category of coherent sheaves on the stack point mod G-check. And if you look at the derived Satake equivalence, then the right-hand side there is also coherent sheaves, in fact on some derived stack. So the usual, typical situation is that we have some equivalence where on the left I have constructible sheaves and on the right some coherent sheaves. But in this q-case it is slightly different, because the coherent side becomes kind of q-twisted, so it is not really coherent sheaves on anything anymore. Anyhow, let's proceed to the Gaiotto conjectures. So now the idea is the following. Let me first fix some notation. Fix two natural numbers M and N, and assume that M is less than or equal to N, just for simplicity. Then Gaiotto produced the following: Gaiotto produced a geometric category — by a geometric category I again mean some category of sheaves on some kind of affine Grassmannian — which is conjecturally equivalent to the following thing: to Rep_q of a certain quantum group, but this time it is going to be a quantum supergroup. So this is the super world. First of all, before q: consider the algebraic supergroup GL(M|N); this is the group of automorphisms of the super vector space C^(M|N), the super vector space which has even dimension M and odd dimension N. So this is the supergroup of its automorphisms, and there is a well-known — well, maybe not super well known, but known — q-deformation of that as well. Let me here for simplicity take q generic. Actually you can do it for non-generic q as well, but you have to be careful: it will also be true for non-generic q if you define this quantum group carefully, working with a particular form of the quantum group. I'll specialize to the case M equal to N minus one, because in that case it will be the simplest to explain. But before I do this, let me say that we are working here with arbitrary M and N, and if M is equal to zero, then all the super part goes away: GL(0|N) is just GL(N), so we are supposed to recover the same thing we had before, and this Gaiotto category will become just the Whittaker category of the affine Grassmannian of GL(N). But I'm going to look at the other extreme — maybe not the real other extreme, the real other extreme would be M equal to N — but it turns out that the easiest example for me to explain is M equal to N minus one. So let me explain what happens in this case. It will actually look pretty different from this Whittaker category, but — well, it is not a continuous deformation, but if you move M from zero up to N minus one, then you move from this Whittaker story to what I'm going to tell you now.
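Just to fix the notation on the super side before the example — this is only shorthand for what was just said:

% The super vector space with even dimension M and odd dimension N,
% and the algebraic supergroup of its automorphisms:
\mathbb{C}^{M|N} = \mathbb{C}^{M}_{\bar 0} \oplus \mathbb{C}^{N}_{\bar 1},
\qquad
GL(M|N) = \mathrm{Aut}\big(\mathbb{C}^{M|N}\big),
\qquad M \le N,

% and Rep_q(GL(M|N)) denotes representations of its q-deformation
% (q generic in the statements below, unless one works with a careful
% integral form of the quantum supergroup).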
And in this case it is extremely simple to explain, namely the Gaiotto category in this case is the following thing — let me write it and then I'll explain it. Again, we can work with abelian categories or with derived categories; let me formulate the statement for abelian categories, but then the claim is that the same thing will be true for derived categories as well. So we consider perverse sheaves on the affine Grassmannian of GL(N). Well, I need to put q here, because I want to do it for the quantum group, but in particular I will be able to specialize to q equal to one as well, although I will have to do that carefully. And we consider things equivariant with respect to GL(N-1) of O. Here the group GL(N-1) is naturally embedded into the group GL(N), just in the most stupid way possible: namely, consider the matrices which have a 1 in the corner and then an arbitrary matrix in the remaining block. So this is just GL(N-1) sitting inside GL(N), and you consider sheaves on the Grassmannian of GL(N) which are equivariant with respect to GL(N-1)(O). So symbolically it looks a lot like the geometric Satake setup, but it turns out that it behaves much more like the FLE than like geometric Satake. Okay, so now the theorem, which was proved recently. Again, I should say that I'm only considering this example of M equal to N minus one — I'll make a comment in a moment about what happens for other M — but right now consider only the case M equal to N minus one. In this case we have a theorem — and it holds both on the abelian and on the derived level — that the category of perverse GL(N-1)(O)-equivariant sheaves, with the q-twist, on the affine Grassmannian of GL(N) is equivalent to Rep_q of GL(N-1|N). Well, the only thing is that I need to say one thing carefully. Let's forget about q for a second: if you consider the category of representations of an algebraic supergroup, algebraic supergroups usually act on super vector spaces, so the meaning of this is that we consider representations of this supergroup in super vector spaces. And then let's also put a letter S here: the letter S means that we consider the perverse sheaves — the constructible sheaves — with coefficients not in vector spaces, but in super vector spaces. That makes perfect sense, and this is what you need in order for this to work. So this is again a braided monoidal equivalence. And for this to be true as stated, q needs to be generic; generic essentially means not a root of unity. And the same holds for derived categories. I should say that it is kind of funny that if you look at this category and replace GL(N-1) by the full GL(N), then things become very different. First of all, the derived version will no longer be simply the derived category of the abelian one, and second, if you try to put a generic q there, then as I said before the category will essentially collapse and become extremely small.
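Let me record the M = N-1 statement in symbols, with the super-coefficient convention S as above; this is just shorthand for the theorem as stated, with the q-twisting understood as before.

% Gaiotto conjecture / theorem for M = N-1 (q generic), on the abelian level
% and compatibly on the derived level:
\mathrm{Perv}^{S}_{GL(N-1)(\mathcal{O}),\,q}\big(\mathrm{Gr}_{GL(N)}\big)
  \;\simeq\;
  \mathrm{Rep}_q\big(GL(N-1|N)\big),

% where GL(N-1) \hookrightarrow GL(N) is the "corner" embedding described above,
% and the superscript S means perverse (constructible) sheaves with coefficients
% in super vector spaces.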
But for some reason, if you, GLM minus one instead of GLM, sorry, actually I should have read, I mean, my Amazon minus one, so I should write it here. So, so if you put GLM minus one instead of GLM, then somehow miraculously, all the problems disappear, and again, this thing. Now this some kind of, this is a, this is a, this is a, this is a, this is a, all the problems disappear, this is a, this is a kind of, over. I don't want to see this thing. Now, this some kind of also funny, combinatorics here, because for instance, this representation. So the quantum, or even non Quantum, just usual supergroup, Arry one-to-one correspondence was actually a useful presentation of its even parts, you know that the even part of this group is just GLM times GLM minus one. So, of GLM and the claim is that this group has discretely many orbits and the orbits are parameterized by pairs of dominant weights of GLM, dominant weight of GLM minus one. That's kind of, I mean, such things happen pretty rarely. I mean, usually if you put some kind of random group here, then it will not have discretely many orbits and they find rest of my name, but this one does. Now, let me make some comments about this. So let me make some comments about the shape of the Gaiota conjectures in general. Now, I should say that I'm only formulating this Gaiota conjectures in this GL case, although they're kind of more general. I mean, there are other, say, classical supergroups. For example, some of you guys called or the symplectic supergroup and there's some version of this Gaiota conjecture for that one, but let me stick to GLM case conjectures for let's see, arbitrary. M, well, let me say less than M because the case M equal to M is also slightly different. Well, what Gaiota does, Gaiota, so he tells you how to, so Gaiota produces for you some unipotent group, UMM inside. It's actually, well, it's inside GLM. And, well, yeah, so it's inside GLM, but you can also then, we're actually going to also then for future purposes embedded into GLM cross GLM. So somehow the embedding into GLM is, well, it's just the GLM part is three-bill here. But so, and this is normalized, this is normalized by GLM inside GLM. And also you can produce a character of this UMM into the additive group, which is also normalized by GLM. And then this in general is the other category. The category is the following guy, it's, you can see that, well, let me guess, the derived category, the category of traverses, well, I don't know if it's traverses. The variant for aspect to GLM, oh, semi-direct product, UMM of K, well, comma, Ky-hat, so similar notation as before, I should also put this Q here and I should put the affine-grasmane of GLM. And now it's actually convenient to rewrite the following way, it's the same as, well, I mean, especially if you're not afraid of various infinite dimensional problems, this prefers sheaves on aspect to GLM of K, semi-direct product, UMM of K, also character Ky-hat, well, so Q, and here I should put the affine-grasmane of GLM times GLM, which is actually the product of the gross-mane. So this is kind of an exercise to see that this is absolutely logical, this thing considering sheaves here in current aspect, this group is the same as considering sheaves and the affine-grasmane of product of this two groups considering. So in some sense, so the advantage of writing this way is that here we're gonna see the sheaves in current aspect of some group of K. 
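In symbols, my reading of the general shape of the conjecture — for M less than or equal to N, with the unipotent subgroup U_{M,N} and the character chi-hat produced by Gaiotto, and with the q-twisting as before — is the following sketch:

% General Gaiotto conjecture (schematically), for M <= N and q generic:
D_{\big(GL(M)(\mathcal{O})\ltimes U_{M,N}(K),\ \widehat{\chi}\big),\,q}
   \big(\mathrm{Gr}_{GL(N)}\big)
  \;\simeq\;
  D\big(\mathrm{Rep}_q\, GL(M|N)\big),

% equivalently, after the rewriting just described,
D_{\big(GL(M)(K)\ltimes U_{M,N}(K),\ \widehat{\chi}\big),\,q}
   \big(\mathrm{Gr}_{GL(M)}\times \mathrm{Gr}_{GL(N)}\big)
  \;\simeq\;
  D\big(\mathrm{Rep}_q\, GL(M|N)\big).

% Special cases: M = 0 recovers the Whittaker/FLE story for GL(N);
% M = N-1 has U_{M,N} trivial and gives the GL(N-1)(O)-equivariant
% category considered above.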
So this kind of, the way I should think about on this, get your other conjectures, is this group GLM semi-direct product with UMM, so I didn't tell you what UMM is, but I mean, there's some particular definition of it. This is, well, it's, you should think about it as sitting inside GLM times GLM, where again, the embedding of this unimportant part is goes only into the second factor and GLM goes into this can think diagonally. And you should think about the subgroup, it's, you should think about this one as analog of the maximum important here. So it's analogous in many respects, so for instance has the same name, you sort of in, again, for this group for you inside GLM cross GLM. So for instance, going to have the same dimension as the maximum important subgroup in here. And so the point is that if M is equal to zero, this new amount will be just the maximum important and we're just going to get back to the same with the girl story, so the kind of special cases is that if M is equal to zero, then UMM is, then this GLM, it disappears because it's just GL zero, this is going to be maximum important in GLM. And if M is equal to M minus one, then UMM is going to be trivial. And that's another case that we consider it. And so this, so it means that, so we kind of saying that this GLM minus one sitting diagonally inside GLM minus one times GLM for many purposes is analogous to maximum important subgroup in the same group. For instance, you can do a simple exercise and check that it has the same dimension. So this is, I mean, analogous is not a mathematical statement, but in fact, it has the same dimension, easy mathematical statement, we can check that. So somehow what Gaiota does for you, he produces this sub and then the rest, and actually they also come with a character, but again for M equal to M minus one case, this character will be trivial, and then we should consider sheaves and the corresponding fine-gross model and a quiver and with this group with the character and that recovers well, if you put also few in the picture, that recovers the corresponding category of representation of quantum. Maybe, I mean, I have something like five minutes left. Sorry, yeah, at some point my Zoom stopped working, but can people hear me now? Yes, yeah. Okay, let me discuss briefly what happens in Q equal to one case. And this is our previous paper from two years ago. This is a joint paper also, and I think we'll get off again in Ginsburg. So in this case, well, and again for simplicity, let me take M equal to N minus one. So then the claim is that if you take the category of say perverse sheaves and respect to GLN minus one, on the fine-gross model of GLN, what you get is, well, you would like to say that you get represented, well, I mean, you can watch it also could ask here, just because for the same reason as before, I would like to write that they get representations of GLM. M, but I say that when you specialize to non-generic, you have to be careful. And actually, so this particular thing is actually not true. You actually have to put GL on the line here, and this is going to be some degenerate version of the set. Super cool. So let me say briefly what this thing is. So, well, if you can see that the Lie algebra, because one in Lie super algebra, this is essentially, I mean, it looks like matrices of size M plus M times M plus M. So let me write them as like blocks like this. So it has here, this is the even part, this is even part, this is odd part, this is odd part. Right? 
And then, on the level of Lie algebras, you have the super-commutators. So this is the Lie superalgebra gl(M|N), but then you introduce a kind of degenerate version of it, which means the same super vector space, and most of the super-commutators will again be the same, except the super-commutator of any two odd elements: the bracket of any two odd elements is set equal to zero, if a and b are odd. If you bracket even with even, or even with odd, it is the same thing as before, but odd with odd will be zero. That is the degeneration of this Lie superalgebra, and you can also do it on the level of groups, of algebraic supergroups. And it turns out that with this degenerate supergroup we get the equivalence. Now let me make the following comment — let me call this equivalence star. Star is actually a special case of another set of conjectures, but this time by completely different people: by Ben-Zvi, Sakellaridis and Venkatesh. Those conjectures are supposed to be a categorical version of a lot of known results about special values of automorphic L-functions. But that is the motivation; the actual form of the conjectures is the following. Let me tell you roughly what the left-hand side is. You consider some group G as before, and inside you consider some subgroup H which is a spherical subgroup — spherical means that H has an open orbit on the flag variety of G. Let me give you an example: this GL(N-1) inside GL(N-1) times GL(N) is spherical. And Ben-Zvi, Sakellaridis and Venkatesh study, say, the derived category of H(K)-equivariant sheaves on the affine Grassmannian of G, and they describe what it is supposed to be equivalent to, but that would take me some time and I don't want to do it right now. It is described in terms of some dual group and representations of that dual group. So they have some precise conjecture, and this precise conjecture is based on some theory of spherical subgroups. But the real motivation for them is to partly prove, and partly build upon, some results about automorphic L-functions. So if you go back to this example, to this equivalence: on the one hand it is a special case of these conjectures of Ben-Zvi, Sakellaridis and Venkatesh, and on the other hand it is the q equal to one specialization of these conjectures of Gaiotto, which are motivated by completely different things — by string theory calculations which I am unable to reproduce or to understand. Now, unfortunately, these conjectures of Ben-Zvi, Sakellaridis and Venkatesh have no known q-version. I actually talked to them, and it is really not clear what to do there. But on the other hand, their conjectures apply much more generally, because there are many situations where one can talk about spherical subgroups and where there are no supergroups in sight. So I don't really know what's going on here.
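Since it keeps being referred to, here is the q = 1 statement written out — call it (*) as above — together with the degenerate bracket. Again this is only shorthand for what was said, with the underline denoting the degenerate supergroup.

% q = 1 (non-quantum) case, M = N-1:            (*)
\mathrm{Perv}^{S}_{GL(N-1)(\mathcal{O})}\big(\mathrm{Gr}_{GL(N)}\big)
  \;\simeq\;
  \mathrm{Rep}\big(\underline{GL}(N-1|N)\big),

% where \underline{GL}(M|N) is the degenerate supergroup: its Lie superalgebra
% has the same underlying super vector space
% \mathfrak{gl}(M|N) = \mathfrak{g}_{\bar 0} \oplus \mathfrak{g}_{\bar 1},
% the same brackets involving even elements, but
[a,b] = 0 \quad \text{whenever } a,\,b \in \mathfrak{g}_{\bar 1}.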
And again, the most surprising thing for me is that — especially if you go back to this theorem that I wrote here — if you look at the formulation, it is not something you might expect to be a corollary of string theory calculations. But somehow it is, and again, mathematicians were never able to produce such a conjecture, and even after the conjecture is formulated, I don't know how to motivate it mathematically. So that's one thing I wanted to say. The other thing I was planning to say, but don't have time for, is a few words about the proof. Let me just say that the proof is similar, although slightly more complicated — similar to the proof of the original FLE by Gaitsgory. It builds upon some old work of Bezrukavnikov, Finkelberg and Schechtman, who realized the category of representations of a quantum group in terms of certain factorizable sheaves, and later on Lurie explained why their work is essentially automatic from the point of view of factorization algebras, or E2-algebras. So something like that is also used here, to do that not just for the usual quantum group but for quantum supergroups: you realize the category in terms of factorizable sheaves on spaces of configurations of points on a curve, and then use arguments similar to those of Gaitsgory to prove this. And maybe the last thing I'm going to say is that there should be an analog of this — I know how to formulate it, although I don't have a formal proof yet — where instead of the category of finite-dimensional representations of the supergroup, you get a category O for it, for a particular choice of a Borel. I should say that for quantum supergroups there is a notion of a Borel subgroup, or Borel subalgebra, but unlike in the usual case, not all of them are conjugate. So for a particular choice of Borel you can look at the corresponding category O, and that should be geometrically realized in similar terms, but instead of the affine Grassmannian you should use the affine flag variety. And there is a representation-theoretic application of that, because in this way you can actually get character formulas for simple modules in category O for these supergroups, which are actually not known. But again, this is maybe not my main interest. My main interest is two-fold, or maybe three-fold. One is to understand how physicists are able to actually produce such statements. The second is to understand better the connection to this work of Ben-Zvi, Sakellaridis and Venkatesh. And the third is something I again don't have time to talk about, but it is actually clear that these results can be used for applications to something which is called the quantum geometric Langlands program. But I don't have time to talk about this, so I think I should better stop here. Thank you very much. [?]. [?]. Any questions online or offline? What kind of physical calculations were you talking about, exactly? Well, I would not even try to start answering this question. For instance, there is this paper by Mikhaylov and Witten. There, on the one hand, they have Chern-Simons theory corresponding to a supergroup — a three-dimensional Chern-Simons theory — and they study its S-dual, and somehow it has to do with some branes colliding.
And so, you know, I mean, I made some kind of effort to understand this word, but I'm absolutely unable. But it's very much connected to this work of Michalakon Witton to paper to old, to some, well, not very old, but like from 12-year-old papers by Galliot and Witton about bound or supersymmetric bound or conditions in four-dimensional gauge theories and so on. So for instance, even if you look at this paper, but you get data in Witton, then you see that somehow they produce some relation between supersymmetric boundary conditions and super-groups. So it's boundary conditions for gauge theory for usual, even groups, but some kind of interesting boundary conditions for them are related to super-groups. And then when you start as duals of those boundary conditions, and somehow when you can actually compute them, and I mean, the computations, well, I mean, the actual statements, I can actually form a kind of physical statement, which is essentially almost equivalent to this theorem. And that involves field theory, that involves four-dimensional field theory. So it's really a statement about one boundary condition going to some other boundary condition under S-duality. But the real question is how do physicists know that somehow some kind of two boundary conditions are related by S-duality? And for that, some kind of string theory calculations are used, and that I really can't explain. There's something in the chin. OK. Any other questions? Only once, twice. Right? Thank you very much, Sasha. No, but the real question. Yeah, one more Q&A. All right. Almost. The buzzer. Ah, the Borel subgroup, which appears here, is kind of mixed Borel. So in order to choose a Borel subgroup, essentially, well, the reason, I mean, one way to think about why there are different Borels for supergroup is that, I mean, this is kind of, I mean, OK, so if we're working with just gl thing, you know, Borel subgroup and glm are just flags. Borel subgroup in CN, Borel subgroup and glm and are flags in the supervector space CMN. And so, I mean, complete flags. But, I mean, every time you add dimension, I mean, when you have a flag, and now you have vector space V1, which is inside V2, which is inside V3, and so on, every time you add one more dimension. But in the super world, you can add either one even dimension or one odd dimension. And so the Borel, which appears here, is the one where you're alternated. So when you add the first one, even dimension than one odd dimension, then again, even then again, odd. That's kind of, that's the Borel, which is natural to use. Any other questions? All right, enough of this. Thanks. Pasha, again. Thank you.
|
I am going to explain a series of conjectures due to D.Gaiotto which provide a geometric realization of categories of representations of certain quantum super-groups (such as U_q(gl(M|N)) via the affine Grassmannian of certain (purely even) algebraic groups. These conjectures generalize both the well-known geometric Satake equivalence and the so called Fundamental Local Equivalence of Gaitsgory and Lurie (which will be recalled in the talk). In the 2nd part of the talk I will explain a recent proof of this conjecture for U_q(N|N-1) (for generic q), based on a joint work with Finkelberg and Travkin.
|
10.5446/55157 (DOI)
|
I want to thank you for those nice words of introduction. It remains to be seen whether physicists will succeed in business or not. I'm just trying for the present time. The title of my talk is How to Start a High-Tech Business. And I probably should add on how to start a high-tech business in the United States, because this is particular for the United States. Actually, some of you may wonder why start a high-tech business. And there can be many reasons for why you want to do that. One is, for example, you might want to get rich. And I am sorry to say that for a physicist, that is not the way to get rich. I have learned that in 50 years. Now, another reason to start a high-tech business is to get money for doing research. And this is something which has happened in the United States the last five or six years. And let me try to teach you a new word. I just learned it myself, so I'm proud of it, and then I like to teach it to you. There's a word in American and English called paradigm. And in the United States today, there is a new paradigm. And what the paradigm means, it means a model. And for those in science, it means new boundary conditions. For example, in the field of physics, we have had two new paradigms here. We have the theory of relativity. After the theory of relativity came about, physics was never the same. It's a new paradigm. You had to take into account the theory of relativity. After quantum mechanics came in, that's the same thing. A new paradigm, a new model, physics will never look the same. So that's easy to understand. But it's a new paradigm again in physics in the United States. And this new paradigm, like it or not, is the collapse of the Soviet Union. You may say, what in world have the collapse of the Soviet Union have to do with physics? But if you read the papers, you noticed when the Soviet Union collapsed, so did the superconducting supercollider. These things are not disconnected. In the United States, the research, particular physics research, since the Second World War has been militarily driven. We physicists don't like to think about that, but nevertheless, it's a fact. Most of the money we get from Congress is because they thought the Soviet Union could do better. And so as an assurance, the Congress gave money. It may be tragic for us to think about that the congressmen don't care about the Higgs boson or the top quark. They really don't care. They don't even care about the quantum-sized hole effect, I'm sorry to say. And today in the United States, the research is economically driven. This is the new paradigm, not military research, but to do economic research. A good friend of mine who is now dead, John Bardeen, said the reason that the Japanese were so successful is that in Japan, and this excludes Leo, the best scientists were developing toasters and mixmasters. In the United States, the best scientists were developing airplanes and fancy things and cannons and stuff, and that's not so good. Courses and mixmasters in the long run is better, and this is what the United States at the present time is trying to do. And to that end, there is a federal program called the Small Business Innovation Research Program, and all agencies in the United States has to contribute a certain amount of the money to this particular program. That goes from the National Science Foundation, the National Institute of Health, and so on. And what it means is that they give money away so you can try small ideas in business. That's what it means. 
And for example, NASA has such a program, as you can see here, or for example, the Department of Defense has such a program, as you can see here. So they all have this program, and they all have to give away a certain amount of all the money they have to small businesses. So there I am, why not take advantage of this largeness? So this is what I thought. Now, what is this program all about? The program consists of three phases, and if you apply for money, there's something called Phase One. If you can get to Phase One program, you can get up to $100,000, and you have to spend it in six months. That's the easy part, to spend it in six months. If you are successful in Phase One, then you can apply for a Phase Two program, and then you get $750,000 maximum, and then you have two years to spend it, and that is real money. And now, of course, for the government hope is that you go into Phase Three, and fortunately, there's no money there. And so this is the basis of the program. This program is for small businesses. And what that means in the United States, first of all, there are two things you've got to satisfy. Fifty-one percent of the company has to be owned by United States people. What I mean by that, you have to be a citizen, or you have to be a legally-admitted alien. Either way, it's okay. And fifty-one percent of the company has to be owned by such a person. The other requirement is that you have to have less than 500 employees. Now, I'm from Norway, and that's the biggest business in Norway, is 500 employees. So that's not so difficult to satisfy. Now, if you are going to start a business, you should really have an idea. And what we always associate with the light bulb and Edison and stuff and having a good idea, not knowing or not thinking about the light bulb was invented a hundred years before Edison was born. It really was an old invention. Edison just recognized it as being a very good possibility. So it's not really enough to have a good idea. You also have to recognize it as useful. And as two examples, is that there's a book written about the Serox Corporation, and the subtitle there is the billions nobody wanted. The Serox process was a very good idea, but nobody believed it. General Electric, to their despair, turned it down long ago. They said nobody would ever do this sort of thing. And other big companies had the same chance, and they didn't recognize it. Another good idea was the unusual origin of polymerase chain reaction, which Molyse got the Nobel Prize for last year. It's just a marvelous thing, and a lot of people sort of thought about it, and actually people have published about it before, but nobody really thought it was a good idea. And then Molyse did it. It turned out to be a pervasive idea. Everybody uses that in biology. It's one of these things that's not enough to have a good idea, you also have to recognize that it is a good idea. Now my friend Charlie Keyes and I, as you heard in the introduction, dealt with immunology, so we thought we had a very good idea of looking at the immune reaction using electrical fields. And since it was our idea, we thought it was terrific, but it wasn't any good. So that was not so bad, so bad a good idea, but now we use this particular patent to look at cells in tissue culture. And that's what my high technology business is all about. So let me tell you a little bit about cells in tissue culture, so you get a little feeling for what I'm trying to do. And let me remind you that all of us consist of single cells. 
That includes elephant, and flowers, and me, and you. We are made up of, as Carl Sagan would say, billions and billions of cells. And so just like bricks make up a building, cells make up people. And the interesting thing is that you can take these cells and grow them independent of the body. And to a physicist, that really is an amazing thing, because you are alive and the cells are alive. And so if you think about that, it's just a strange thing. And the way you start the tissue culture, you take a little petri dish, and in the petri dish you put the good liquid you think the cells like to eat. Then you take a little piece of meat and put it in the petri dish. And if the piece of meat is fresh, the cells will come crawling out, as shown here at the bottom. So here's the piece of meat. Here the cells come out, and now you have cells on the bottom of the tissue culture dish. Now you started the tissue culture. When I say a little piece of meat, you really can't buy it at the grocer. You should have a really piece of fresh meat. And I told that to a student I have, and he liked that very much. And next day he came to the laboratory, and he had a bandage on his elbow, on his upper arm. So I asked him what happened. He said, well, he cut out a little piece of fresh meat and started his tissue culture. And this is what I call real dedication. Those are the kind of students I like. And actually we still use his cells to this day. He's long since graduated, but we keep the cells going in the laboratory. And this is my good friend here, Charlie Keyes. His name doesn't quite show, and the tissue culture today is a very simple feel and is very easy to do, and you need an incubator you put the cells in. But I should say that the feel really is not strictly scientific. It's a little bit like growing flowers. Some people have a nag for doing it, some other people don't. So it's a little bit like that. And actually Dr. Keyes very often talk to his cells, just like old ladies talk to their flowers. He denies that, but I've caught him out it several times. Now when people, when you look at cells in the microscope, what you see is this. This is in an optical microscope. And the cells here, a little hard to see maybe, but there's hundreds of cells here, maybe a thousand cells. The cells in the top of this picture are cancer cells. The cells in the bottom of this picture are normal cells. And if you look at them, you see the normal cells look different. They are sort of organized. The cancer cells are disorganized. And this is what people do when they study cells in tissue culture. They take the cells out and they look at them in the microscope and they tell their friends what they see. Believe it or not, if you're unfortunate enough to have cancer or thought you're cancer, the medical doctor take a biopsy and he looks at it and he decides that these cells are organized so you don't have cancer or these cells are disorganized so we have to operate. Not because the medical doctor is a bad doctor. This is the way he's done today. By looking is a subjective decision. And that makes me a little scared. But anyway, that's what is done. So my friend Charlie Keyes and I decided that we were going to try to get a little more science into this. And so we have developed this piece of apparatus which is illustrated here. And the heart of the apparatus is a small electrode in the tissue culture. And there you apply a voltage current with flow in the tissue culture field and back to the big electrode. 
This small electrode act as a bottleneck and therefore the resistance will increase when you block the electrode with cells. It just blocks the current. So less current with flow when you have the cells there and you don't have the cells there. And to do that we need a locking amplifier, a personal computer and all sorts of things like that. But a very simple idea. And the basic idea is that the electrode has to be small. And that's where we have our patent. So what we are planning to do if things go well, we're going to sell all these equipment here plus we're going to sell the little electrode and that is as we said, tell us how the razor blade or business because they got to buy that again and again and again. And so we hope to, this is one early prototype, we hope to make little wells like this where we have the small electrode in the wells for people to grow cells in tissue culture. And let me briefly show you how it looks. If your seed sells out in say four different dishes with different protein on the surface, they have different surfaces, you get four different curves. And so here before this, when you seed the cells out, when the cells settle down the electrode, your resistance increases and you can tell what the cells like to do. And here you see the cells like this particular protein called fibronectin, much more than like this particular protein called BSA because the cells go down much faster. So why do you do such experiments? Well, you do this kind of experiment because if you get cancer, if for example a woman gets breast cancer, that bind itself is not dangerous because lumps in your breast is not dangerous. What is dangerous is the cells spread from your breast into breast cancer very often the bone marrow. And when the cells settle in the bone marrow, then it's very dangerous. So you go to ask yourself the question, why do cells do that? And these experiment is made to try to find out what kind of surfaces the cells like. And then we can look at the cells after a while and see how they behave on the electrode. Since the cells are alive, the resistance received will continually fluctuate. And here we see this is a cancer cell, you see a large amount of fluctuation, a normal cell much less fluctuation, and if you kill the cells, you get no fluctuations at all. And so these here we look like you have oscillations. And fortunately it turns out to be just noise if you want to call it that. And let me show you what the noise that physicists recognizes, and these are the three noises which I easily recognize by physicists. First we call it white noise, which is the frequency spectrum is just flat. Then we have one over F noise, where the frequency goes up to one over F, and then we have Brownian motion, where the frequency goes up to one over F squared. What's very interesting is that one over F noise, where you get a frequency spectrum like that, is music. And if you listen to Beethoven's fifth and analyze it, you'll get this curve. If you listen to Scharzen's Pepper's Beatles music and analyze it, you get this curve. So you see, the way we analyze things in physics is not all that good, because you know, the big difference between Beethoven's fifth and the Beatles music, but the analysis in physics gives you the same answer. So we thought we should listen to our cells, and I'm going to try to, our cell looks somewhat in between here, and they also sound like music. 
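For reference, the three spectra just described can be written in the usual power-spectral-density notation; this is only a restatement of what was said, with alpha as my label for the exponent.

% Power spectral density S(f) of the resistance fluctuations:
S(f) \;\propto\; \frac{1}{f^{\alpha}},
\qquad
\begin{cases}
\alpha = 0 & \text{white noise (flat spectrum)},\\
\alpha = 1 & \text{``1/f'' (flicker) noise -- the spectrum of music},\\
\alpha = 2 & \text{Brownian motion}.
\end{cases}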
And I like to try to play, if I can work it out here, the song of the cancer cells, how they play it out here, and we'll see what that will work or not. And I think you ought to be quiet. Just one thing, and we'll see who. So, that's enough. What we were trying to do, since we know it's a big difference, Beethoven's fifth and the Beatles music, to listen to the cancer cells and listen to the normal cells and see whether we could pick up by the ears the differences. Unfortunately, we haven't been able to do that, but it was fun anyway. Now, as you heard, I'm a physicist, I've gone into biology, and I have to get support from my work. If you don't get money, you cannot do research. And what I've tried to do is go to the National Institute of Health, applying for money, and I never even get into the final stages. They always throw me out very early, for some reason. And you may have thought that having an Nobel Prize will get you through these hurdles, but for the National Institute of Health, forget it. It's not working. So we decided, and actually one person I should thank for this is Dr. Professor Feinand Dagen. He, I talked about this at Lindow before, and he got very interested in this system, and he actually duplicated it in New League. And but he told me yesterday he much rather would buy it from me than making himself. So I have a customer out here, possibly. So my friend and I, we decided to incorporate and make a business. And the way you do that, of course, is to go to a lawyer, and to go to a lawyer, they get the cost, oh, during the business was $562.86 for the lawyer charge. It's very accurate lawyers in the United States. And the most difficult thing, which I didn't recognize, is to come up with a name. Because you're going to have a business, you're going to have a name nobody else have used. And we will haunt, haunt with that, and we came up with then the name Applied Biophysics Incorporated. This is a solid name, right? Applied Biophysics Incorporated. So we are very happy about that. I teach a course in the United States which I call the creativity and innovation. And one of the homework problems the student have is to design a logo for my business. So far, we have not accepted any permanently, but maybe somebody will come up with something. You may also ask, how can I have a business about a professor? But I work for a technological university in United States, and people there are very much interested in businesses. And I use students, for example, here is a student who was written up in a newspaper at RPI about student learning how to do photolithography, which we use to make the small electrodes. So I think that works out very well. Now when we had started this business and spent our own money, the question is then, how do you get the money? You can use your own, you can try to get people to give you money, or you can go to the small business innovation research grants, which of course what we had in mind from the very beginning. And as I said, the National Institute of Health has been very unkind to my grant that when I applied for a small business grant to NIH, they said this was the best thing they had ever seen. So there must be different people during these things, but that's the way it went. So we were very fortunate to get the phase one from them, and we also have a phase two grant from National Institute of Health. And I'm very happy about that. 
And one of these things, when you apply for a grant, this is a big deal, there's a lot of work and stuff like that, and applying for a grant from the SBIR, it's very similar to applying for a scientific grant. Everything is the same. So there's really no difference there. There's one big difference though, because when we applied for the grant, then of course you have to have a budget, and if a budget came to $550,000 and $577,000, whatever. And so they said that was everything was fine, they said, but we got a call and said, the one thing you forgot, we said, what is that? They said, you haven't taken your profit. Add on 8% profit, because you have a business. So the $36,000 is my profit. And I'd like to thank all the taxpayers of the United States who are here today for this contribution. Now one thing I had not figured when we started a business is all the forms you get. There's just no end to it. This is a federal taxes, and then there's of course in the states, we have the state taxes, and we hear from them, I don't know what. So one thing we had to do was to hire an accountant, because there's too much, it's almost impossible to keep track of these things. Actually one form you get from the federal government I thought was quite amusing, they want me to report annually on possible misconduct in science. Now if I were misconducting in science, would I tell them? So I crossed off no in both places, no misconduct was our place. I don't know, I think it's still a little dumb. So what you have to do when you have a business, you have to try to advertise in the best way we can advertise is to write papers. So we write many papers as we can trying to show biologists that we have to convince that this really is a good technique. Whether it is or not of course it's too early to tell yet, but we try to convince them of publishing a lot of papers and also participating in a lot of conferences and trying to promote then our point of view, and we really believe in this very strongly. And this is a conference which was with science organized where we were fortunate to be part of. And one of the big surprises in my life is that after we participated in a conference in science we got a request from nature trying to, so they said you can write a product review in nature. And all my life I have sent articles to nature and they always returned them unopened. So I thought getting a request from nature, this is my chance to get back. But then I said, I know I really couldn't do that because it was too important for the business, so we did go into nature with a product review which has been very helpful. We got a lot of requests from people who want to know what we are up to. That was very helpful. But the product, the nature however powerful it had was nothing compared to having an article is business week. And business week picked us up and after this article appeared we got a large number of interest. Believe it or not I got a call from Chicago and a man on the telephone wanted to invest $100,000 in my business. So I talked to him a little bit and he said he had inherited a lot of money and it just spent, tried to do with it and they decided to invest in business. So I said have you been successful so far? He said absolutely not. And I can see that when you call someone on your telephone and say here is $100,000 would you please take it? 
So there was a big surprise, and another big surprise, which we were very pleased with and which cost us a lot of — not worry, but thinking about it — was that the famous bankers Blair in New York, who make it their business to support starting businesses, came up and they actually wanted to buy us out. This was very flattering in a way, and in another way, if we really wanted to get into business as fast as possible we probably should have done that. But if you do this — I mean, there wasn't any contract written — but if you do this you lose control of the business, and we really have a lot of fun with what we are doing now, so we decided to turn them down, and they are still out there in the wings hoping to invest in my business. So anyway, to bring this to an end, what we have is what we call electric cell-substrate impedance sensing, and what we can do, we can measure a lot of things about cells in tissue culture. We can measure the metabolism, cell movement, micromotion, we can measure the effect of electric fields — and Heinen-Degen wants to measure the effect of magnetic fields, we have a small disagreement there — and so on, and you can use it for in vitro toxicology, cell communication and so on and so forth, and it has a lot of opportunities I think. And finally, since I now am a businessman, in the hope that there are some other customers out there: this is our Model 100, which is for sale for $37,500. It is not ready yet — hopefully sometime in the fall — and if you are interested we can always negotiate about the price. Thank you very much.
|
This is a general comment to Ivar Giaever’s remarkable set of 11 recorded lectures on biophysics 1976-2004. Giaever has so far (2014) participated in no less than 16 Lindau Meetings, starting in 1976, when he received his first invitation to lecture at the Lindau physics meeting. But it wasn’t until the 2008 meeting, after more than 30 years, that he finally disclosed what he actually received the 1973 Nobel Prize in Physics for, the discovery of tunnelling in superconductors. In year 2000, he did not give a lecture, but sat in panel discussion, and in 2012 he gave a critical talk on global warming, which for a long time has been on top of the list of most viewed Mediatheque videos. But at all the other 13 meetings, he lectured on his activities in biophysics and how these led him into starting a high tech business in the US. It is fascinating to listen to the 11 existing sound recordings, starting with 1976 and following through all the way to 2004. Giaever is smart enough to having realized that the most important part of the audience, the young scientists, change from year to year, so some parts of the lectures (including jokes) appear over and over again. But as time goes, he makes progress in his biophysics research and this leads to important developments and inventions. The starting point in all lectures is the possibility to study biological phenomena in the laboratory using methods from physics. With his background in electrical engineering, it is not surprising that he in particular has used techniques from optics and from the measurement of very small electromagnetic fields. The first two lectures mainly concern proteins on surfaces, but already in the last ten minutes of the second talk, Giaever describes his ideas about working with cells on surfaces. The rest of the talks all concern his studies of the properties of living cells on surfaces. The cells are grown and kept in what is called a Petri dish, a cylindrical shallow glass or plastic container. By inserting a very small electrode made of a suitable metal (e.g., gold) at the bottom of the dish and another above, electronic characteristics of a single cell can be measured. This can be both static and time-dependent properties. A question that has been at the centre of Giaever’s interest has been to develop an objective method to measure the difference between cancer cells and normal cells. Such a method would be an important contribution, since the usual method to distinguish cancer cells from normal cells is by observing their growth pattern in an optical microscope, a highly subjective method where mistakes can be made and have been made. Another question, which Giaever has addressed, concerns what kind of surfaces cancer cells stick to. This can be important to know, because many cancers spread from the original tumour and cancer cells wander to other places in the body and form new growths in places where they stick (metastasis). When he began his activities in biophysics, Giaever worked at General Electric, but after leaving this company in 1988, he accepted a position as Professor at the Rensselear Polytechnic Institute. Together with a colleague he also started a company to develop and market a sensor for cells in tissue cultures. This apparatus is now being produced and marketed (www.biophysics.com). Some of Giaever’s lectures focus on the problems encountered when trying to start a small highly technological enterprise. His account and reflections are interesting and in parts very amusing. 
Some in the young audience certainly could profit from following in his footsteps, in particular from following this advice: If you don’t get funded for your research, start a profitable business to make your own funding! Anders Bárány
|
10.5446/55033 (DOI)
|
I want to thank obviously the organizers for the chance to lecture here, although I wish, you know — the original plan, when this was set up a couple of years ago, was that I was going to come with my family, and it was going to be this, like, two weeks in France and all that. Obviously, this is slightly less than what I was hoping for. So let me begin by giving some motivation about what I want to talk about. For motivation, let's just start off with the case of a smooth curve of genus g. And then I can look at its Hilbert scheme of points, of n points, which since the curve is smooth is just the same as taking the symmetric power of the curve. And so there's a very nice formula — going back, probably in this version it's older, but you know, on the level of, say, homology it goes back to an old paper of Macdonald's — which says that if I take the Euler characteristics of these Hilbert schemes and I sum over n, well, this just has a very nice formula, where I just take one minus q to the two g minus two. I can then kind of, you know, make it more complicated in kind of a dumb way by writing it as one minus q to the two g over one minus q, squared. And now the numerator of the right hand side itself has kind of a geometric meaning. If I take the Jacobian of my curve and I take its homology, we know what that is. It's just the exterior algebra on H one of my curve. And so if I take the Poincare polynomial here of the Jacobian — I'll just introduce a minus sign — that's exactly the numerator of this expression. And so now what I have is some kind of, you know, slightly silly identity where on the left hand side I'm kind of summing up over all n these Euler characteristics of the Hilbert schemes. On the right hand side, I just have a single space, just the Jacobian. But now I'm doing something a little bit more refined: I'm taking the actual homology instead of just the Euler characteristic. And the q kind of has a different meaning depending on which side I'm on. On the left hand side, the q is just indexing which moduli space I'm working with. On the right hand side, the q is this kind of homological variable; it's helping me keep track of my homological degree on the Jacobian. And so, I mean, in some sense, one of the things I want to try to explain in these lectures is kind of an ansatz due to many people — so there's going to be, I'll do the attributions later on when I actually get to it. But, you know, the kind of iteration that I'll be talking about is kind of joint with Toda, you know, with Y. Toda, which basically, you know, proposes a way of extending this to kind of much more singular curves — so, for instance, you know, curves in the kind of setting of Calabi-Yau threefolds. And one of the things that I kind of want to get to hopefully is that, you know, okay, so generally it's a conjecture and, you know, you can believe it or not, but in the cases where we can prove it, it already kind of gives you examples where something like this holds for extremely singular curves. And then the technique of proof is kind of nice because in some sense, what the technique of proof really does is it reduces it to the smooth case, where it is originally just Macdonald's formula. So there'll be some kind of chain of logic where kind of the final step will just be applying the original identity. Okay. So let's see, so let me give kind of, you know — I'll start talking about this properly in maybe the third lecture.
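As an aside added here for illustration (a sketch, not part of the lecture): the identity quoted above — the sum of χ(C^[n]) q^n equals (1−q)^(2g−2), rewritten as (1−q)^(2g) divided by (1−q)², with the numerator the signed Poincaré polynomial of the Jacobian — can be checked symbolically in a few lines.

```python
# A small symbolic check of the Macdonald identity for a smooth genus-g curve:
#   sum_n chi(Sym^n C) q^n = (1 - q)^(2g - 2) = (1 - q)^(2g) / (1 - q)^2,
# where (1 - q)^(2g) = sum_k binom(2g, k) (-q)^k is the Poincare polynomial of
# the Jacobian (an exterior algebra on H^1) evaluated at -q.
import sympy as sp

q = sp.symbols('q')
for g in range(1, 5):
    lhs = (1 - q)**(2*g - 2)
    jac = sum(sp.binomial(2*g, k) * (-q)**k for k in range(2*g + 1))
    assert sp.expand(lhs * (1 - q)**2 - jac) == 0
    # The q^n coefficients are the Euler characteristics chi(Sym^n C):
    print(f"g = {g}:", [sp.expand(lhs).coeff(q, n) for n in range(5)])
```

For g = 2, for instance, this prints 1, −2, 1, 0, 0 — the n = 1 term being the Euler characteristic 2 − 2g = −2 of the curve itself.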
So let me just say now kind of what I'm hoping to cover in these five lectures, which is: in the first two lectures I want to give some kind of overview about Donaldson-Thomas theory, some version of which you saw in Richard's lectures last week, but I'm going to focus really, again, on the setting of Calabi-Yau threefolds. And in particular, the kind of perspective that you get in the Calabi-Yau three setting is that instead of doing things intersection-theoretically, there's kind of an alternate approach where you work in, you know, the kind of constructible world — so constructible functions, and that'll be today, and then eventually tomorrow constructible sheaves. And then kind of in the remaining lectures, I want to talk about, you know, this kind of picture that I just sketched out, which is this notion of an approach to thinking about what are called Gopakumar-Vafa invariants — you should just think of them as the analog of the right hand side of this Macdonald formula, some analog of what to put in the numerator in general — where we'll be using the kind of technology developed in the first couple of lectures to pursue that. And then in kind of the last lecture, I want to talk about some, you know, related conjectures to this story, in particular a conjecture of Toda from the last couple of years, which is related to these kinds of topics — related to kind of defining the right hand side of the Macdonald formula. Are there any questions before I get started properly? So again, if there are things that show up in the chat, I'm not really probably going to be able to see them so easily, I think. So hopefully Andre can read them aloud in his dulcet tones. Okay, so today, I want to start talking really in some generality about numerical Donaldson-Thomas theory. And so again, the setting for today — I'll be focusing on the geometric setting — is the case where we start off with some Calabi-Yau threefold. It does not necessarily have to be projective, so just some, you know, algebraic threefold, smooth algebraic threefold, with a nowhere vanishing holomorphic three-form. And the kind of, you know, basic moduli space that you associate here is going to be some moduli space, you know, M of X, which is going to be, you know, some moduli of stable coherent sheaves, a moduli space of stable coherent sheaves, or, you know, more generally complexes of sheaves, on X with fixed discrete invariants — Chern classes — indexed by some vector v in the cohomology. And so the, you know, kind of classic version of this story is where you just take — you know, maybe you fix a polarization, and then the moduli space you look at is just, you know, let's say, Gieseker-stable sheaves on X. To avoid, you know, kind of stacky issues or issues with the obstructions, you often will kind of fix a trivialization — you know, fix an isomorphism of the determinant of E. Here I'm giving some kind of examples. For curve counting purposes, it's usually better to work with some kind of variation on these spaces. So, you know, the old version that the subject kind of started off with was to work with the Hilbert scheme of curves, or rather the Hilbert scheme of one-dimensional subschemes, on X. So here maybe I'm going to fix an element of H2, and then I also fix an Euler characteristic.
And so this would parameterize, you know, subschemes, one-dimensional subschemes, where the support — the kind of one-dimensional piece — is in class beta, and then the Euler characteristic of the structure sheaf is n. And then one example that, you know, we'll see if I have time to talk about, is the special case when beta is zero, and then you're just considering the Hilbert scheme of points on X. Now, in these examples, the way you kind of put it into this framework of moduli spaces of stable sheaves is that instead of thinking about the subscheme and then the kind of surjection from the structure sheaf, you just remember the ideal sheaf. So you think of this: you look at the ideal sheaf of your subscheme, which is a rank one sheaf on X with a trivialization of the determinant. And this turns out to be equivalent to looking at the Hilbert scheme. So the way you put it in this framework of moduli spaces of sheaves is by forgetting about your subscheme and instead remembering just the ideal sheaf that cuts it out. The version that's cleaner — and maybe actually Rahul already has spoken about this in his lectures, or if not he will soon, I'm sure — is a variation, again for curve counting purposes, of the Hilbert scheme construction where you work with what are called stable pairs. So this theory was developed by Pandharipande and Thomas. And so here the moduli space — again, you kind of fix a curve class, you fix an integer — and then the data here in this moduli space, there's two pieces of data here. First is a sheaf E whose one-dimensional support is pure — so pure meaning it has no kind of zero-dimensional subsheaves. And then the second piece of data is just a section of the sheaf. And then the stability condition is just the statement that the cokernel of this section is zero-dimensional. So what does an object of this space look like? So the simplest kind of example to think about if you haven't seen this before — oh, I should say what the discrete invariants are. So again, the support of the sheaf, just the cycle-theoretic support, is going to be in class beta, and then the Euler characteristic of E is n. So the way to think about what this space looks like, the first approximation is: let's say here is X, and then imagine the support of E is, let's say, a smooth curve inside of X. And so you could ask, what are all the stable pairs with fixed support like this? Well, the simplest is I just take the surjection from O of X onto O of the curve. So that's an example of a stable pair. But what I could also do is, you know, if I give you some line bundle L on this curve, I could take E to be its pushforward, which is now a coherent sheaf on X with one-dimensional pure support. And then any nonzero section of L will then produce a stable pair on X. So given a nonzero section here, it defines for me a section of E, and the cokernel of the corresponding section of E will exactly just be, you know, the zero locus of the section on the curve. So the way you can think about this is: you have some line bundle on this curve, now I have some section, and so the same curve will contribute to, you know, many different stable pairs. Just, you know, you take any line bundle with a section, or equivalently any collection of points on my curve, and there will be a corresponding stable pair on X. So the contribution of C to, you know, all of these stable pairs ends up looking a lot like just, you know, taking the different symmetric powers of my curve.
And so in particular, this will be kind of, you know, the left hand side of this kind of Macdonald identity in general. And so again, how do we want to put this in this kind of setting of DT theory? So the setting of DT theory that I wrote is that we're going to want to consider, you know, moduli spaces of sheaves or complexes of sheaves on X. And so here I have this kind of, you know, two-term complex; I just think of the corresponding object in the derived category. So I take this two-term complex, I think of this as an element in the derived category of coherent sheaves on X. And this is how I'm going to think about this moduli space — as a moduli space of certain kinds of, you know, two-term complexes. Again, one thing that's kind of, you know, instructive to do is — you know, if X is projective then this space of pairs is also projective. And so kind of a fun thing to try to do is just to understand what certain limits look like in this space. So for instance, maybe as an exercise — this is a local question, so it doesn't really matter what the ambient space, what the ambient threefold is — imagine I have, you know, two lines that are kind of colliding, two skew lines that are kind of about to collide in some limit. And then we, you know, know from, you know, Hartshorne or something that the limit in the kind of Hilbert scheme of one-dimensional subschemes is you get two intersecting, now coplanar, lines and a little fat point there. But this kind of limit isn't allowed in the stable pair space, because the support of the sheaf — because the sheaf isn't pure. And so instead what you can try to work out is what the limit of this kind of thing is in the stable pairs moduli space, where the support of the sheaf is still going to be these two lines, but now what used to be a kind of fat point gets replaced with some kind of nonzero cokernel. So these are kind of, you know, the examples of the kinds of moduli spaces, geometric moduli spaces, that we can look at. But actually everything I'll be talking about today, it doesn't really have to be a geometric setting. So the kind of main example of non-geometric examples to look at comes from looking at, you know, representations of, you know, quivers with potential. So everything I'll be saying today and tomorrow kind of makes sense in that setting; in particular there's, you know, a lot of the material in Markus Reineke's lectures that I think will be relevant. Okay, so these are the kind of, you know, examples I want to look at. And so, as always, we're interested in some kind of virtual structure on these moduli spaces. And so, in the context of Richard's talks last week, the kind of initial piece of data that you want to understand is something about the deformation theory. So you have a deformation-obstruction theory for understanding these moduli problems, which, because I'm just working with moduli spaces of sheaves or maybe complexes of sheaves, are given by Ext groups. So the deformation space, the tangent space, is given by, you know, the self-Ext one of whatever your sheaf or complex is — if I fix the determinant, then we usually do some kind of traceless thing. And then the obstruction space is then given by Ext two. And so again, my understanding is Richard kind of, you know, talked in more detail about how these show up for moduli of sheaves. And so already the first nice thing happens in the Calabi-Yau three case.
So this is — of course, so far everything is completely general — which is, you get to apply Serre duality. If I take the dual of the obstruction space, I get Ext one with this twist by the canonical bundle of X. But because I'm in the Calabi-Yau three setting, this is just trivial. So I get exactly the deformation space. And so in particular it means that the virtual dimension of my moduli space, which is just the difference of these dimensions, is zero. And so if X is proper, or at least if my moduli space is proper, that's really all I need. I have this, well, I have this virtual class, which is a zero cycle, and under the properness hypothesis, I can take its degree, which will give me a number. I'll call this kind of the virtual number associated to my moduli space, and it's just some integer. And the procedure for doing this — again, this is something that, you know, Richard sketched out — is, you know, the way you produce this virtual class from all this data is basically by, you know, using some techniques from intersection theory. So that's kind of the world where these constructions live most naturally. Okay, so what I want to explain first, then, is — again in the Calabi-Yau setting — you know, another way of thinking about what these numbers are. So this is this notion of what we now call the Behrend function. So the setting here is that, let's say I have some, you know, I have some moduli space M, and I've equipped it with this kind of perfect obstruction theory, meaning I have some, you know, two-term complex E that calculates the deformations and the obstructions to my moduli problem. We say that E is symmetric if I have a quasi-isomorphism between E and its shifted dual. So you should think of, you know, E as being kind of supported in degrees negative one and zero; and then E dual is going to be supported in degrees zero and one, and then I shift it back so it's again supported in degrees negative one and zero. And, you know, an isomorphism like this with the symmetry condition — so such that this isomorphism itself has some kind of self-duality property. The easiest way to get data like this is, for instance: let's say F is a vector bundle with, you know, a symmetric bilinear form alpha; then, you know, you can produce an example of, you know, a complex with this kind of symmetry just by taking a map from F dual to F, and then using this to produce a map like this, which exactly has this kind of symmetry. And so the baby example of a, you know, moduli space with a two-term perfect obstruction theory with this kind of symmetry is where your obstruction theory is kind of dumb — so let's say M is smooth, and then your obstruction theory basically consists of the map from the tangent bundle of M to the cotangent bundle of M which is just the zero map. So this is like: I have an obstruction space, but those obstructions are all unrealized because the space itself in fact happens to be smooth. And in this case, if you calculate what the virtual class is — so it should be zero-dimensional, it's a zero-dimensional virtual class because of the definition of virtual dimension — it's just going to give me the Euler class of the cotangent bundle, which is my obstruction bundle. So if I take its degree, I just get, up to a sign, the topological Euler characteristic of M. And so the first observation — everything here, I should say, everything I'll be saying is due to Behrend.
Except for the name — he didn't of course name it after himself, but at some point it caught on. So in the Calabi-Yau three setting, for all the examples that I said before, again just the same Serre duality calculation I did before tells you something a little stronger: it tells you that the obstruction theories are always symmetric. So let me give the kind of key local example of one of these symmetric obstruction theories, which I'll use again next time, which is: imagine I have some ambient smooth space V, which is just affine space, and I have some function f on it. And then the kind of, you know, space that I'm looking at, my actual moduli space, is just the zero locus of all the partial derivatives — just the critical locus of this function f. Because, you know, it's a space cut out by a bunch of equations, in particular it has a nice two-term obstruction theory, and then you can just write down what the obstruction theory is in this case. And it's basically determined by taking the Hessian matrix — this kind of symmetric matrix given by taking the second partial derivatives. And this defines exactly a symmetric obstruction theory. And this will be kind of the main example for us. So this kind of baby case is the case where the function is zero, and it's kind of dumb, but in general it's more interesting. So this is the framework for us: we have a moduli space, we have this kind of two-term obstruction theory, and it has the symmetry property. So, a definition: a constructible function — if I give you, you know, any kind of complex scheme — is a function nu from the complex points to the integers such that nu is constructible, meaning the set of points where the function has some value a — so nu inverse of a — is a constructible set. So, you know, right — there's going to be some open set where it has some value zero, and then maybe there's some locally closed set where it has value one, and then maybe some other stratum where it has value negative one. And then, given one of these constructible functions, I can kind of use some, you know, version of integrating it. And the version of integrating it is: what I'm going to do is I'm going to sum — I'm going to look at all the kind of strata where the function has some value, and I'm just going to add up the Euler characteristics of the strata weighted by the value of the function there. And then if I just had the constant function one, I would just be getting the topological Euler characteristic of M, but of course in general I'll get something else. And in particular I can look at, you know, for instance just the, you know, abelian group of constructible functions, Z-valued constructible functions on M. And, you know, one way of thinking about this is, if I just look at characteristic functions, this gives a basis indexed by irreducible subvarieties. So you only assume finitely many nonempty fibers? Yeah, yes, that's right — I'm sorry, M is going to be, you know, just a finite type thing, so there's only finitely many. And so the key theorem that kind of kicks off, for me at least, the whole direction of the subject is that if I give you M with a symmetric obstruction theory — in particular any moduli space of, you know, sheaves or whatever on a Calabi-Yau threefold — there exists, you know, associated to
this data a constructible function nu on M such that, if M is proper, the virtual number of my moduli space — meaning in the sense of taking the degree of the virtual class — is the same as what you get by integrating this constructible function. And so what it means is that, you know, this intersection-theoretic quantity — it means that you can kind of study it using ideas from kind of constructible geometry or microlocal geometry. And I'm going to say a couple of, you know, remarks about this — I'm going to say something about why this is true in a second, but let me just kind of say what's so interesting about this. It allows you to do a couple of things that you couldn't really make sense of intersection-theoretically. So for instance, you know, this virtual cycle — it's really a cycle class. If I give you some subset Z of M, it doesn't really make sense to talk about, you know, what is the contribution of Z to the virtual class, because you can't really, you know, localize — there's not a clean way of localizing this kind of zero cycle class along some stratification of M. On the other hand, if I give you a constructible function, it's very easy to do it, because I can just restrict my constructible function to Z and I can, you know, integrate it there. Second, the right hand side makes sense even if M is not proper. On the left hand side — unless you're in some kind of equivariant setting like in Richard's lectures — if I have a non-compact moduli space, it doesn't make sense to take the degree of a zero cycle class on it, but you can always just integrate this constructible function. And the left hand side of course, because it's defined intersection-theoretically, is deformation invariant — at least in the proper situation, I mean, which is the only time it makes sense. And something like the right hand side — if I just take, for instance, the actual topological Euler characteristic, that, as M varies in, you know, a flat family, is certainly not going to be a deformation invariant. And so this kind of specific choice of constructible function is kind of correcting for the failure of the Euler characteristic to be deformation invariant. This is really not at all obvious from the point of view of the definition. And so this ends up being an extremely useful theorem — let me just say, I maybe won't break this down, but one way it gets used a lot is when you study how these invariants change under change of stability and wall crossing and so on, which is: when you kind of cross some kind of wall and your stability condition changes, your moduli space usually changes, maybe by some kind of flip or flop or something like that. So understanding how the zero cycle class transforms might be kind of delicate. But for this kind of weighted Euler characteristic, you know, if there's some open part where the two moduli spaces are just the same, then you can just throw it out, because the contribution to this kind of, you know, integral is going to be the same, and you can just focus on the kind of actual strata where the stability is changing. So this ends up being kind of an extremely powerful tool for those kinds of analyses. So let me sketch the proof of this result. It goes into how this kind of virtual class is defined, but again, I believe Richard covered that in his first couple of lectures.
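Before the proof sketch continues, a brief aside added here for illustration (not part of the lecture): the weighted Euler characteristic on the right hand side is easy to compute once you have a stratification on which the constructible function is constant. The strata data in the sketch below are hypothetical inputs; the only grounded check is the smooth baby case mentioned earlier, where the function is the constant (−1)^dim M and the integral is (−1)^dim M times the topological Euler characteristic of M.

```python
# A minimal sketch: integrate a constructible function nu against the Euler
# characteristic, given strata on which nu is constant:
#   integral of nu  =  sum over strata of  nu(stratum) * chi(stratum)

def weighted_euler_characteristic(strata):
    """strata: iterable of (value_of_nu, euler_characteristic_of_stratum)."""
    return sum(value * chi for value, chi in strata)

# Smooth baby case: M = P^1 (chi = 2), dim M = 1, nu the constant (-1)^1,
# a single stratum; the weighted Euler characteristic is (-1)^1 * 2 = -2.
print(weighted_euler_characteristic([(-1, 2)]))
```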
The idea is that, you know, if you have M embedded in some kind of, you know, smooth space V, let's say, then the way you get this virtual cycle is that you have, you know, M sitting inside V, and then there's some kind of vector bundle F over V. And then there's some kind of cone with multiplicities sitting inside of F — this is a kind of conical cycle, not a cycle class, an honest-to-God cycle inside of this vector bundle. And then when I intersect it with the zero section, I get exactly this virtual class. And I think that's a good picture to keep in mind just in general. But what Kai showed in his paper is: if you now add the condition that the obstruction theory is symmetric, then in this case you can actually refine this picture so that this vector bundle F is actually the total space of the cotangent bundle, with the conical cycle sitting inside of the cotangent bundle. This cotangent bundle has a natural symplectic form, and when I say this cone is Lagrangian, I mean that, you know, the smooth locus of every irreducible component of this cycle is Lagrangian in the usual sense: the symplectic form restricts to zero and it is of the middle dimension. So why is that so special? Well, there's a natural isomorphism between, on the one hand, constructible functions on V, and this free abelian group of conical Lagrangian cycles, which is known as the characteristic cycle map: it takes a constructible function here and sends it to what's called the characteristic cycle of this function, which can be defined in some sense via some kind of Morse-theory-type construction. The fact that there is an isomorphism like this shouldn't be kind of super surprising: each of these spaces has a basis that's indexed by these irreducible subvarieties. So I already talked about how irreducible subvarieties, just by taking the characteristic function, define a basis here. Similarly, if I give you an irreducible subvariety, I can take the smooth part and I can take its conormal bundle, which is Lagrangian inside of here, and I can take its closure. And so that gives me a natural basis here — but that identification is not what's used to define this isomorphism, it's a little bit more subtle than that. But what's great about this construction is that, on each side, there's an evaluation map to the integers. On the left hand side, when I take the constructible function I can just integrate it. And on the right hand side, if I give you a conical cycle, I can take the degree of its intersection with the zero section. And the way this characteristic cycle construction goes is that, you know, this diagram commutes. And this is just a very general statement about, you know, constructible functions on V and Lagrangian cycles in the cotangent bundle, that you can kind of set up an isomorphism which makes this diagram commute. I'm not going to — I won't actually, you know, if I had more time I would actually sketch a proof of this kind of index theorem. There are a lot of proofs; the one I like the most is in a paper of Schmid and Vilonen, where basically they pass to the real analytic world and then you reduce to the case of understanding like a tiny ball. And this is great: you see, the right hand side is exactly what we use to define the virtual class, the degree of the virtual class, and the left hand side is the kind of thing that, you know, Behrend's theorem is about.
So to produce this kind of Behrend function in the statement of this theorem, I'm going to take the conical Lagrangian cycle that's associated to my obstruction theory and just move it over to the left. So this is what we now call the Behrend function: it's just whatever constructible function maps to this obstruction cone under this characteristic cycle map. So this is how you kind of go from the intersection theory world over to this kind of constructible world. So then the question is, what do we know about this Behrend function, how do we — this is kind of a somewhat abstract statement — how do we kind of compute it, you know, any examples? And in general it's quite hard. So, you know, I would say, if I give you a kind of random moduli problem and I give you some random point in the moduli space, you know, it's not so easy to kind of compute this thing. But in some cases we have some kind of statement. So for instance, the easiest case is when M is smooth. You can again kind of put the stupid — this is the baby example where I just have the zero map; the zero map defines for me a symmetric obstruction theory. And then the Behrend function in this case is just constant, negative one to the dimension. And then Behrend's theorem says exactly what I wrote before: if I integrate the Behrend function I'm getting negative one to the dimension times the topological Euler characteristic of M, which is the degree of this kind of virtual cycle that we associated before. So what about this local example I did up here? So here I kind of wrote down this key local example, where I take the critical locus of a function. And so in this case, if I give you some point P on M, again the value is something pretty nice: it's related to what's called the Milnor number, the reduced Milnor number of my function at P. So it's some kind of notion in singularity theory; let me just state what it is. So given a function and some point in the critical locus, I can take the Milnor fiber, which is just: I take a ball, you know, a closed ball of some tiny radius around my point P, and then I intersect it with a fiber — so let's assume that f of P is zero, it makes my life easier — and then I just intersect it with a nearby fiber of my function. So here epsilon is much less than delta, much less than one. And so then the Behrend function in this case is: I'm just taking, again up to a sign, one minus the Euler characteristic of this Milnor fiber. So this isn't super explicit, but it's again something familiar from singularity theory. And I am not assuming the singularity is isolated — this definition still makes sense in general, it just makes it a little harder to think about. Oh, sorry, yeah, there was a mess right here. So let me give a more complicated example where you get to kind of see this. So let's say — I'm going to take the following kind of Calabi-Yau threefold: I'm going to take a (3,3) hypersurface inside of P two cross P two, which — this is, you know, a Calabi-Yau threefold complete intersection — if I project onto the P two, I get an elliptically fibered Calabi-Yau threefold. So I'm going to pick the defining equations such that one of the singular fibers of this fibration, sitting inside of P two, is just given by, you know, x squared times y. So one of the fibers is a reducible, non-reduced cubic inside of P two. This fiber looks like, you know, maybe two C one plus C two.
And so I'm going to look at the following, you know, moduli space of sheaves: I'm going to look at sheaves which are one-dimensional sheaves where the support is C one. So I take the non-reduced component and I just take the underlying reduced curve, which is just a P one. So I'm just going to set my discrete invariants to be whatever the Chern character of the structure sheaf of this curve is — so the support of the sheaf is C one and the Euler characteristic is one. And so I can look at the corresponding moduli space of sheaves on X, and set-theoretically it's just a point; this is the only object in it. And this is a calculation — this is from Richard, you know, 30 years ago or something — that this moduli space scheme-theoretically is not reduced. So you see what this is: first of all, I mean, it's cut out by the equations u squared, v squared, and that's the same as looking at the critical locus of the function u cubed plus v cubed, because, you know, up to a factor of three, these are the partial derivatives. So then, you know, this is a pretty explicit function, you can work out what this Milnor number calculation gives you. So the value of the Behrend function at this unique point ends up being negative one squared times one minus negative three — this negative three is exactly what the Milnor fiber gives in this case — so the value is four. And, you know, this thing is zero-dimensional, so the virtual dimension equals the actual dimension. So the virtual class in this case, if I just calculate it, is just the length of this zero-dimensional scheme, and this is four, so it's as expected. All right, so this is the main thing. So what I'd like to do — I guess I'll start this now and then I'll continue this tomorrow — is to sketch, you know, give some indication about how this theorem gets used, how it ends up being a really useful theorem for calculating these numbers. And so, okay, what I'll maybe do is I'll just do one example now — I'll start it now and I'll kind of finish it tomorrow. So let me state this theorem. And so this is going to be a theorem about these, you know, stable pairs — this was this space of stable pairs. And then we can define a generating function where I fix beta, and then define what I'll call the PT series, where I just sum the virtual numbers of these spaces — which again kind of should be reminiscent of the kind of generating function that I started this talk with, where I took the sum of Euler characteristics of all the Hilbert schemes of a fixed curve. And this is just the kind of general version for an arbitrary Calabi-Yau threefold. And so the theorem — so again, the X here was my Calabi-Yau threefold — is that this generating function is the Laurent expansion of a rational function, symmetric with respect to q goes to q inverse. And you can see it built out of things that look like, you know, q to the one minus r times one plus q to the two r minus two — you can express this generating function in terms of those. And so, okay, what I want to do is I want to kind of sketch the proof of this in the special case when beta is irreducible — the proof due to Pandharipande and Thomas. And what is kind of nice about this — I'll do that tomorrow. But let me just say why this is a nice result, which is that, you know, this rationality is something that we expect for any threefold, but right now we can only prove it in limited settings. This is expected for all threefolds.
We can prove it, you know, for things like, you know, complete intersections and so on. But in terms of a really general statement — the only case where we can prove it really in some kind of generality, without knowing something really specific about the geometry of X, is in this Calabi-Yau case. Well, okay, it's not the only case known, but, you know, in general we don't know what the actual value of the series is. What's special about the Calabi-Yau case is precisely that we get to use this kind of structural approach to prove this kind of theorem. So I'll maybe say, like, you know, a few lines about this argument. Yeah, let me — In the examples you wrote down, the Behrend function — is it worked out by going through this sketch you outlined, or could the function be found differently? You mean the examples of what the Behrend function is? Yeah, so right, so in the examples I wrote down, you can, you know, calculate this number — it's something you can calculate with techniques for calculating it. So, like, for instance, you know, it turns out that finding the Milnor fiber, the Milnor number, for, like, u cubed plus v cubed — that is something you can do more or less by hand. And then you just work it out and you get this, you know, negative three popping out. And in general, there is kind of a procedure for calculating it. You know, if you have, like, a function and you have, like, a lot of time, then there is, like, an algorithm for calculating what the Milnor number is. But, you know, usually the geometries we're interested in get larger and larger dimensional, and so getting your hands on it isn't really feasible in practice. So here's an example of something that, you know, we like: if you do, like, the Hilbert scheme of points on C three, the function that you would want to find the Milnor numbers for is in some sense pretty explicit, but it involves, you know, three n by n matrices, and so if you wanted to actually, you know, do that for any given point in the Hilbert scheme, this is actually quite difficult. So if I take an Artinian scheme, a fat point — is it true that the Behrend function is bounded above by the length of this fat point? Sorry, say that again. Take a fat point, so a Spec of an Artinian ring, so I have only one closed point, like in the example by Richard. So is it true that the length of this point is an upper bound for the Behrend function? Yeah, I mean, again, up to a — I mean, I think so. I mean, if it's, you know, coming from one of these — maybe it's just equal to it even in general. It's definitely equal to it if, you know, it's coming from a critical locus — that's just what the theorem says. So it certainly can be smaller; I was wondering if it can be bigger as well — probably not. No, no, I — yeah, I think that's right. Thank you. Yeah, thanks for the question. Thanks.
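To make the Q&A concrete, here is a sketch added for illustration (with the usual caveats: this handles only the isolated-singularity local model, and the degree bound and variable names are assumptions) of how one can compute the Milnor number for the function f = u³ + v³ discussed above — as the dimension of the Jacobian ring C[u,v]/(∂f/∂u, ∂f/∂v) — and from it the Behrend function value 4.

```python
# Milnor number of f = u^3 + v^3 via a Groebner basis of the Jacobian ideal:
# count the standard monomials (those divisible by no leading monomial).
import sympy as sp
from itertools import product

u, v = sp.symbols('u v')
f = u**3 + v**3
G = sp.groebner([sp.diff(f, u), sp.diff(f, v)], u, v, order='grevlex')
lead = [sp.Poly(g, u, v).monoms(order='grevlex')[0] for g in G.exprs]

def reducible(mono):
    return any(all(m >= l for m, l in zip(mono, lm)) for lm in lead)

# A per-variable bound of 10 is ample for this small example.
mu = sum(1 for mono in product(range(10), repeat=2) if not reducible(mono))
chi_fiber = 1 - mu                       # Milnor fiber: a bouquet of mu circles
print(mu, (-1)**2 * (1 - chi_fiber))     # prints: 4 4, matching the lecture
```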
|
In the first part of the course, I will give an overview of Donaldson-Thomas theory for Calabi-Yau threefold geometries, and its cohomological refinement. In the second part, I will explain a conjectural ansatz (from joint work with Y. Toda) for defining Gopakumar-Vafa invariants via moduli of one-dimensional sheaves, emphasizing some examples where we can understand how they relate to curve-counting via stable pairs. If time permits, I will discuss some recent work on χ-independence phenomena in this setting (joint with J. Shen).
|
10.5446/55126 (DOI)
|
Good afternoon. I just wanted to tell you the story of how the fullerenes were discovered, because it was certainly the most fun time I ever had in life and I like to share it. In the early 1980s, my colleague Richard Smalley invented a machine to study clusters of atoms of very refractory elements. The machine's concept was very simple. You took refractory material, impacted the surface of it with pulsed laser light; this would vaporize the material and the laser would atomize it, and this plume of atoms would then be entrained in a stream of helium gas, mixed up with the helium, cooled off, and the atoms that you had initially vaporized would come back together and form clusters. You could add various reagents to the stream of gas so that you could see what would react with the surfaces of these clusters. Then downstream, the gas expanded into a vacuum, into a supersonic jet, was cooled to a few degrees above absolute zero, skimmed into a molecular beam and interrogated with a mass spectrometer. One got essentially a distribution of cluster sizes; that is, typically you'd have a big hump where the maximum would correspond to the most abundant clusters. Now may I have the first slide please? This is a picture of Rick Smalley atop this machine. He's about two meters above the floor. So this was, for physical chemists at least, big science. You could get inside the main chamber of the machine with no difficulty. What he's doing at the moment is introducing a region where air could be excluded so that one could ionize clusters with a fluorine laser. May I have the next slide please? Now this is what Harry Kroto was interested in. What happened was that there was a meeting in Austin, Texas in March of 1984, and Harry was there, and I was there, and I suggested to Harry that he come visit Rice. He came to Rice and he fell in love with our machine, with Rick's machine. And the reason he fell in love with the machine were these compounds he was interested in. His colleague David Walton would make compounds like this for him. He would investigate the rotational spectrum in a microwave spectrometer, determine the rest frequencies, and then go to an observatory, a radio astronomy observatory, and try to observe the same rotational transitions in various interstellar clouds. And the amazing thing was that he found them, because these things are fairly difficult to make in the laboratory. Polyacetylenes are notoriously tricky to work with. It's said that most polyacetylene chemists are missing a few fingers as a result of their endeavors. Now the reason that Harry became enthralled with this machine was that the question is, if this is so hard to make in the laboratory, how does it get made in the interstellar medium? And Harry had the idea, which he'll tell you about perhaps himself in a minute, that material was expelled from the surface of a carbon star. The carbon atoms would get together, make these chains, pick up a hydrogen at one end and a nitrogen at the other, and create the species that he was observing. So Rick and I and our colleague Frank Tittel were engaged in a program of investigation of semiconductor clusters in March of 1984, and we thought we were going to revolutionize the computer business and we were not too interested in getting off on this sidetrack. And it wasn't until August of 1985 that we finally decided there's a break in the action. We'll call Harry and ask him to come over and do his experiments. Next slide, please.
And there was a group at Exxon — Rohlfing, Cox and Kaldor — who had already examined carbon clusters in an apparatus identical with the one that we were working with, and had proved that they could make chain-like species in this region that Harry was interested in, clusters of this size. So we called Harry back and said, looks like this experiment has been done, but if you'd like for us to do a few things, we'll do it and send you the data. And Harry's response is, I'm taking the next plane over there. I want to do this myself. So let me tell you a bit about this. This is the kind of data that this machine produced. What we have at the bottom is the number of carbon atoms. What we have going up this direction is a scale that tells you the relative number of clusters of a given size, so that in this particular diagram, the carbon 11 cluster has the greatest abundance. Now, this is not at all what a typical cluster diagram looks like. Typically, you would just have one single hump. There would be some, as I said, some cluster of maximum concentration or maximum amount. And this was very peculiar, not at all like this. In this region, you have only odd-numbered clusters. In this region, there seems to be some sort of forbidden zone or gap. And in this region, you have only even-numbered clusters. And it's very difficult to come up with an explanation for how you can have only even-numbered clusters. This has never been seen in any other cluster distribution. This also shows various magic numbers: 11, 15, and 19 are magic numbers, and 60, 70, perhaps, are also magic numbers. So, Harry came over. He worked with graduate students Jim Heath and Sean O'Brien to do the experiments to show that these carbon-chain compounds could be made by vaporizing carbon and mixing it with something like ammonia to provide the hydrogen atom for one end and the nitrogen atom for the other. And these experiments worked. You certainly could make these chains, and so the hypothesis seemed quite viable that that's the way these materials got made in the interstellar medium. However, in the course of doing these experiments, the whole distribution was looked at repeatedly under all sorts of conditions. And the relative intensity of the peak corresponding to 60 carbon atoms changed a lot depending on what the experimental conditions were. Sometimes it was quite prominent, perhaps maybe eight or nine times higher than its nearest neighbor. And so we reached the stage on a Friday afternoon where Harry was going back, I believe, on Monday, and we had to wrap things up. And we said, all right, we have a paper out of finding these chains. But we really ought to think about why this 60 peak fluctuates so much. I remember very clearly that we were sitting in Rick's office, as a group of five of us, and we agreed this ought to be done. And then the three professors looked at the two graduate students and said, why don't you work this weekend and see how intense you can make this carbon 60 peak? So now the next slide, please. So what happened is that on Monday morning, Jim Heath walked in with this spectrum. It's the same sort of thing except we were looking at only the region from 42 to 86 carbon atoms. There were, again, only even-numbered clusters. And Jim Heath had found conditions where this peak was at least 30 times more prominent than its neighbors. So clearly you need to come up with an explanation for this. You can't ignore a result like this.
It was a result sort of like this one that made us believe we needed to investigate this further. The main difference between these panels is that as you go from the bottom towards the top, there's been more chance for chemistry to take place in the expansion region, inside the nozzle. And so what we knew about the chemical conditions implied that carbon 60, and to a lesser extent carbon 70, was a survivor of chemical attack by carbon atoms. One wanted, therefore, to come up with some unique structure for the 60-carbon-atom peak that would reflect this kind of inertness. Next slide, please. So here are the kinds of things one might work with. We've already seen the chains. The chains typically have a dangling bond at each end, and so this would be a site for chemical reaction, just as the divalent carbon here would be a site for chemical reaction. You could get around the chain having a dangling bond at the end, if you have a 60-carbon-atom chain, by making it into a hula hoop. But then there would be no reason to think that a 62-carbon-atom hula hoop would be any different in reactivity from a 60-carbon-atom hula hoop. The other alternative was to base the structure somehow or other on the structure of graphite. Graphite, after all, is the most stable form of carbon, these hexagons with the carbon atoms at the vertices. And you could imagine that somehow or other, if you made up a piece of chicken wire like this, of hexagons like this, you could fold it around, curve it around, and have a dangling bond on one side react with a dangling bond on the other side. And perhaps there would be some way of avoiding dangling bonds. Next slide. And there's precedent for this. The structures that Buckminster Fuller was so fond of making look like chicken wire at first glance, look like hexagons, and they're curved around and look like they could ultimately close. And in fact, next slide, some of them, like the Expo dome at Montreal, are virtually closed globes. Well, there's a little bit of a problem, and that is that Buckminster Fuller didn't tell you how to do this. The Rice library has perhaps 100 books on the works of Buckminster Fuller, and if you look through those books, it's relatively difficult to find any kind of picture that tells you what to do. So there was a famous luncheon at a Mexican restaurant, which Harry can perhaps tell you about in more detail, because I wasn't there; all I know is what I've been told. We were discussing: how can we make a closed form of this cage? We'd been talking about that. And Harry said, well, a few years back I made a thing out of cardboard to study the constellations. It was about so big, roughly a globe shape, but it was a polyhedron. It was made up of hexagons and, I think, pentagons. And it might have had 60 vertices. And what happens in these discussions is that everybody says, hmm, that's very interesting, and changes the subject. And that's what happened on this occasion. But people don't forget these things. This was around one o'clock in the afternoon when this came up, and it began to gnaw on Harry during the afternoon. And by a little after dinner time, I think around 6:30 or 7, he really wanted to see this star-dome thing he had made, and he wanted to count the vertices on it to see if it had 60 vertices. And so he came to me and said, I want to call my wife, Laura, who's here today, and get her to get this thing out and count the vertices.
And I said, Harry, it's 2:30 in the morning in Brighton. And then I asked him the killer question: Harry, do you know where it is? Well, it's been a while since I've seen it. And so I said, it can wait till morning. What the heck, why wake her up in the middle of the night to find this thing and count the vertices? Many wives would not appreciate this. So Harry agreed with me it could wait till morning. But it couldn't wait till morning, because Rick hadn't forgotten about the idea that this thing was made of hexagons and maybe pentagons. So he went home and cut a bunch of hexagons out of paper, and he cut some pentagons with the same length edges out of paper, and he tried to put them together to make some closed structure with 60 vertices. And from what he tells me, once he started working with the pentagons, it turned out to be trivial: you start with one pentagon and you put five hexagons around it and you already have a bowl-shaped object. The next morning (he lived at that time far away from Rice) he called me and said, I'm on my way to work; get everybody together in my office, I found the solution. Next slide, please. And he came in and he threw this object on the table in the office. And of course none of us bothered to count the vertices; we knew it had 60 vertices or he wouldn't have come in claiming that this was the solution. But I'm always one not to give up too easily, so I said, well, we really ought to see if the bonding of carbon works out. And so we pasted on these little pieces of paper that had double bonds on them, to see whether, starting out by giving each carbon four bonds on one side, it would still work out when we got around to the other side, with no carbons having either three bonds or five bonds. And it worked out, and so I said, oh, I believe this must be the right structure. So we called the chairman of the math department at Rice, since we didn't know what this was, and said: we've got this object that's got 20 hexagons and 12 pentagons, 60 vertices; what is it? He said, well, let me look it up and I'm sure I can find the answer. And in about five minutes he called back, and I happened to be the one that picked up the phone, and he said: what you've got there is a soccer ball. I was somewhat taken aback by this comment, and I tried to somehow shift my ground. I said, well, what's its technical name? You know, like I knew it was a soccer ball all along. And he said, well, it's a truncated icosahedron, and you guys haven't discovered anything new; we mathematicians have known about this for quite a while. So anyway, we got a computer model made. Next slide, please. This molecule, this is what it is with the bonding of course, the carbon atoms at the vertices. This is one particular Kekulé structure; that is, you could move the double bonds around in many different ways, actually 12,500 different ways, but there's only one structure where the double bonds are only in the hexagons and no double bonds are in the pentagons, and that is by far the dominant structure. So this particular material, it was discovered some time later, reacts like a polyolefin and not like an aromatic compound. It looks like benzene, but it really isn't benzene.
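(A quick check of the counts quoted to the math chairman above; the arithmetic is mine, under the assumption that each carbon is trivalent, so every vertex meets three faces and every edge two.)
\[
F = 12 + 20 = 32, \qquad
E = \tfrac{1}{2}\,(5\cdot 12 + 6\cdot 20) = 90, \qquad
V = \tfrac{1}{3}\,(5\cdot 12 + 6\cdot 20) = 60,
\]
\[
V - E + F = 60 - 90 + 32 = 2,
\]
so 12 pentagons and 20 hexagons do close up into a polyhedron with 60 vertices, as Euler's formula requires.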
So we wrote a paper for Nature, a letter to Nature, and we claimed that this new material that no one had ever thought of before would be wonderful. It would do all sorts of things: it would be the carrier of the diffuse interstellar bands, it would be a wonderful lubricant. I don't think we claimed it would cure the common cold, but we quivered on the edge of doing that. And we sent the paper off to Nature and we were really happy. Next slide, please. And we had our team photo made. This is Sean O'Brien and this is Jim Heath, and you've met the other characters in this play. This is our mystery woman; every good story needs a mystery woman, and we still don't know who she is. Anyway, we were very happy about this. We finally found, next slide, please, how Mr. Fuller made domes: he put pentagons in them, by golly. So this is a picture of Mr. Fuller revealing a secret. Now, as I said, we thought we were the first people to ever think of this. Next slide: we weren't. Eiji Osawa in Japan had thought of this molecule in 1971, by the simple expedient of taking a close look at a soccer ball that his son was playing with. And apparently the idea of this compound was really quite well known to a large number of chemists. For example, the Russians had done a theoretical calculation on it. Next slide, please. And the synthetic organic chemist Orville Chapman, who is at UCLA, had looked around for a suitable target for his considerable synthetic organic skills, and asked himself, supposedly, this question: if God would give me the grace to make one molecule, what would that molecule be? And he answered his own question with soccer-ball C60. This was around 1980. He went further than that: he wrote a proposal to the National Science Foundation in the United States, it was funded, and he set to work with several graduate students to make soccer-ball C60. Unfortunately, he was not able to make it. No organic chemist has synthesized this molecule by the traditional methods of organic chemistry, at least so far. Now, in this period of time when we were discovering that, hey, this isn't such a revolutionary new idea after all, things were very interesting, because Harry had gone back to England and we were in the United States. We had sent out a lot of reprints, or preprints, of our paper, and we began to get information back, slightly different information for us in Houston and for Harry in Sussex, but pointing in the same direction. What we got was a preprint of a paper by Tony Haymet, who had, once again, done this theoretical calculation on soccer-ball C60. This was actually the third time it had been published, but no one knew that. But he had in his paper a lot of thoughts that have not been sufficiently appreciated. One thing that he showed was that he considered an alternative closed-cage structure for carbon 60 and concluded that it wouldn't be a good structure, because it had five-membered rings that were bonded together, and he thought this would be a high-energy, possibly chemically reactive site. The other thing that he knew was that Euler had, in about 1764, explained the rules for how you make polyhedra.
And when these rules were applied to a system that contained only six-membered rings and five-membered rings, what they said was that as long as you had 24 or more, an even number of atoms, you could make a closed-cage solution, which would have exactly 12 pentagons in it, and the rest would be hexagons. Now, what Harry was getting, next slide, please, was that somebody pointed out this beautiful little article by David Jones, published in 1966 under the name Daedalus. David Jones had a column for the New Scientist where he essentially wrote up the crazy ideas he'd had about chemistry, and he published under the name of Daedalus because, I guess, he didn't want people to think he was crazy for having crazy ideas. He wrote: Daedalus has conceived a hollow molecule, a closed spherical shell of a sheet polymer like graphite, whose molecules are flat sheets of benzene hexagons. He proposed to modify the high-temperature graphite process by introducing suitable impurities into the sheets to warp them, reasoning that they would ultimately close on themselves. And almost immediately people told him the suitable impurities ought to be five-membered rings, and that you need exactly 12 of them. And essentially what we discovered was that you didn't really have to do much to modify the high-temperature graphite process. All you had to do was let carbon atoms condense from a high temperature, and you would spontaneously make this closed-cage compound. Next slide, please. And in some of his subsequent work he had pictures of these beautiful radiolaria, animals which have skeletons made primarily of hexagons and pentagons, although if you look carefully you can see some heptagons in there. This picture comes out of a book by D'Arcy Thompson, On Growth and Form. D'Arcy Thompson had considered the relationship between geometry and symmetry and the structure of organisms. For example, we have bilateral symmetry, a starfish has five-fold symmetry, and these little animals have sort of spherical symmetry. So this turns out to be, in some ways, related to biology. Next slide, please. So this led us back to this distribution and the thought that perhaps the reason there are only even clusters here is that all of these clusters are closed-cage compounds: they've already been subjected to considerable chemical attack, and only the ones that were already closed cages survive. Next slide, please. Well, that makes it a little hard, because if these are all closed cages, why are they reacting away? And there is something unique about carbon 60: it's the smallest cage compound that has no adjacent pentagons in it. And so, almost simultaneously, Harry and, quite independently, the group at Galveston, Thomas Schmalz, Doug Klein, and Bill Seitz, reasoned that maybe the five-membered rings that are adjacent to each other are particularly susceptible to chemical attack, and maybe what's going on here is that C70 is the next smallest closed-cage fullerene that has no adjacent pentagons. Now, it turns out that to prove this is a formidable challenge, and in fact the group at Galveston, who are quite talented mathematicians, finally proved it in about 1993. C60 is the only one for which you can make a non-adjacent-pentagon structure easily. Harry tried to make some in this region and never could succeed, and was forced, essentially, to guess. And it is true that C70 is the next one that has no adjacent pentagons. Next slide, please. So what's happening? Here's a C70.
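(Looking back at the Euler counting quoted a moment ago: a minimal sketch, under my assumption of a closed trivalent cage built only of pentagons and hexagons, of why there are always exactly 12 pentagons. With p pentagons and h hexagons,)
\[
V = \tfrac{5p+6h}{3}, \qquad E = \tfrac{5p+6h}{2}, \qquad F = p + h,
\]
\[
V - E + F = 2 \;\Longrightarrow\; -\tfrac{5p+6h}{6} + p + h = 2 \;\Longrightarrow\; p = 12 .
\]
The number of hexagons is left free, and the atom count V = 20 + 2h is automatically even, which fits the observation in the spectra that only even-numbered clusters appear in this range.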
What's happening is that if you have a pair of pentagons that are adjacent, these two particular carbon atoms that bridge the two pentagons are particularly susceptible to chemical attack, and no fullerene has ever been isolated that had adjacent pentagons. Next slide, please. Now, there are many different isomers. Once you get up in the neighborhood of C60, there are 1,812 closed-cage forms of C60, and all but one of the isomers, the exception being the soccer-ball isomer, have adjacent pentagons in them. This is just one of them. Well, we had a lot of fun in 1985. Almost all the ideas that we had were on the table by the end of 1985. We spent a couple of years defending ourselves and trying to do new experiments to test the fullerene hypothesis. By about 1988 or '89 we were running out of gas. You'd go to give a talk, the organic chemists would say, let's see your vial of the substance, and we'd say, we don't have it, we just have a few molecules in a molecular beam. And it was clear there was no Nobel Prize in what we had done, and we couldn't figure out what to do next. Next slide, please. And these two guys came to our rescue. This is Wolfgang Krätschmer from Heidelberg, and this is Don Huffman from the University of Arizona in Tucson. They are physicists who had been interested in carbon particles in space for a long time, and they had a machine for making carbon, essentially for making carbon soot and then looking at it. This machine consisted of a couple of graphite rods that you ran a current through; this heated up the graphite and vaporized the carbon, and they had a little disc that they put above it to collect the soot on. They usually had an inert atmosphere in here, and they had some peculiar soot and wondered what was going on with it. And after kidding each other about it for years, they finally decided, well, maybe it really is C60, maybe we ought to look for it. And sure enough, they discovered that if you had about half an atmosphere of argon in this bell jar and collected the soot, that soot was about 5% a mixture of C60 and C70. And so, this was essentially August or September of 1990, people around the world started vaporizing carbon rods and collecting the soot from inside the container. You notice it doesn't land just here; it lands everywhere on the inside of the container. And so most of the research groups looked kind of like this. Next slide, please. It was nasty work, scraping the soot out and shaking it up and dissolving the C60 and the C70 out of it. Next slide, please. Once the chemists knew how they could get this material out, they separated it, and actually Harry's group was one of the first groups to separate the material. This is C60 in a toluene solution. This is a thin film of C60. This is pure C70 in a toluene solution, and C84 in a toluene solution. There are, I don't know, perhaps eight or nine different pure fullerenes that have been isolated. And there was a tremendous amount of excitement and a very large number of papers that came out in the year right after 1990 as people worked on this. So where do we stand today? May I have the next slide, please. Well, first of all, there's this material, which has come out since about 1992.
This is a fullerene that's been elongated and capped with pentagons only at one end. If you do your vaporization of carbon and add a little bit of metallic iron, cobalt, or nickel, particularly nickel, to the system, the catalyst takes over and converts all the fullerene production into the production of these carbon nanotubes, buckytubes. And a lot of the current excitement is: can we do something with these buckytubes? For example, if you could take millions or billions of them and put them together into a cable, all parallel to each other, you would have the strongest cable imaginable, perhaps a hundred times stronger than steel at perhaps a quarter of the weight for the same cross-section. Unfortunately, no one knows how to do that. But there is an effort to make something out of these materials; the material is also electrically conducting, so you would have an electrically conducting, extremely strong cable. So just recently people have wondered whatever became of buckyball. Next slide, please. This (I've got my slides out of order, I apologize) is what has come out of this in terms of the morphology of carbon. We started out with diamond, a three-dimensional network of carbon atoms, so basically a three-dimensional material. Another form of carbon is graphite, which is basically a two-dimensional material, these sheets of six-membered rings. Then the nanotubes are one-dimensional materials, and the buckyballs would correspond to a zero-dimensional material. So one of the things that's come out of this is that we can think of carbon as realizing, in one form or another, virtually all the morphologies that are possible in three-dimensional space. Next slide, please. So this is what I thought was coming up. In the May 4, 1998 Wall Street Journal, this question was asked by Susan Warren, and the obvious answer is that no commercial product has been made out of buckyballs. Her reasoning was that it costs too much, $11,000 a pound for C60, which puts it outside the scope of something that would be useful; even the highest-value materials, pharmaceuticals, have to cost less than $2,500 a pound. I don't think this is actually the reason. It is true that it's hard to scale up the manufacture of C60 because of the fundamentally batch nature of the process, digging in there and getting the soot out. But no one has come up with the killer application of great commercial value, and therefore there's been no reason to drive this price down. Now, next slide, please. That's not because people aren't trying. There are active areas of research on producing fullerenes, particularly the endohedral metallofullerenes, fullerenes with metals inside. There's a lot of work on the organic chemistry of fullerenes, efforts to find applications in biology and medicine, efforts to find applications in optics and electronic devices, but no one, as far as I know, has come up with that practical application that we're all thinking of. So, in the words of Rick Smalley, we all wonder whether the kid will ever get a job. Thank you.
|
Robert F. Curl Jr. was born in Alice, Texas in 1933. Quite remarkably, he stayed in Texas for almost his entire research career. After completing his PhD in Berkeley, California, he accepted an assistant professorship at Rice University in Texas in 1958 and remained there until his retirement, dealing with various problems from the field of physical chemistry. Still - and quite obviously - Curl's scientific impulses reached far beyond Texan borders. When he received the 1996 Nobel Prize in Chemistry together with Richard E. Smalley (who also worked at Rice) and Sir Harold Kroto (at the time at the University of Sussex, UK), this was a true example of national and international scientific collaboration. In the present lecture, delivered in Lindau two years after the award, Curl gives a detailed, historical account of this collaboration, which led to the discovery of the Nobel Prize-winning, football-shaped C60 molecule, also known as the buckminsterfullerene or buckyball. In the 1980s, Curl and Smalley were studying metal clusters with an apparatus Smalley had developed in his laboratory. Using high-energy lasers, this apparatus could convert metals (or other materials) into a plasma. The latter was then allowed to expand into a vacuum, where chemical reactions took place. The products of these reactions could eventually be detected with an attached mass spectrometer. This laser-supersonic cluster beam apparatus attracted the attention of Harold Kroto, who was, at the time, studying the formation of carbon chains in space using microwave spectroscopy. Kroto believed that he could simulate the conditions in space using the equipment in Smalley's lab (indeed, Curl mentions in his talk that "Kroto fell in love with this machine"). Curl established the contact between the two scientists, and Kroto came to Smalley's laboratory in September 1985. Only 11 days after he arrived, the three scientists submitted a letter to the journal Nature reporting the discovery of a football-shaped C60 molecule, which they produced by vaporizing graphite using Smalley's apparatus. This letter was the first of three publications that would lead to the Nobel Prize, rendering Kroto's 11-day visit to Rice probably the most efficient and rewarding scientific collaboration ever. However, Curl also mentions some other contributors to the C60 story who were not rewarded by the Royal Swedish Academy of Sciences. In his autobiography [1], Curl states that "Jim and Sean were equal participants in the scientific discussions that directed the course of this work and actually did most of the experiments." In his talk, he further mentions that the C60 molecule had been predicted theoretically by others long before its experimental detection. In concluding, Curl outlines some of the developments that were triggered by C60 research. If the transition metal nickel is added to the graphite being vaporized, for example, carbon nanotubes ("buckytubes") are obtained. In contrast to the fullerenes, which have remained largely devoid of practical applications, nanotubes are seen as a promising candidate in various areas of materials science and are already being used in turbines, sports gear and scientific instruments, to name a few. In 2006, in the frame of the last of the three talks Curl has given in Lindau so far, he would discuss these and other new developments in the field of carbon-based materials. David Siegel [1] http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1996/curl.html
|
10.5446/55034 (DOI)
|
My plan is to be a little looser today and maybe not even use the full hour. I'll pick up where I ended things last time, with a couple of questions, and maybe talk a little bit about some other directions this circle of ideas goes in. So we'll see how long it takes. Let me just say where I left things off yesterday. I was giving some examples of what I was calling these Gopakumar-Vafa invariants, although to really be worthy of the name they have to satisfy the conjecture; in any case, some proposed definitions of these things, which really involve moduli of one-dimensional sheaves, together with a certain perverse sheaf that lives on that moduli space. Then, by studying how this behaves with respect to the map to the Chow variety, you cook up some numbers. And the main conjecture is how these relate to the more traditional curve-counting theories on X, the one I was interested in being the stable pairs theory. Yesterday I spent most of the lecture on what I would call the most general piece of evidence we have, which is the case where you have, basically, an integral curve (it needs to be a bit stronger, the pushforward needs to be integral, but I'm going to be a little sloppy about it here) inside the total space of the canonical bundle of a surface. The statement is non-trivial in the sense that it requires understanding both some perverse filtration and this kind of mysterious sheaf on the moduli space. But nevertheless, using just some formal properties of these constructions, you're able to prove it, essentially by reducing it to a much simpler case, the case of locally planar curves. So let me just mention one question I have. In some sense that two-step reduction is one of the main things I wanted to explain, because that kind of technique shows up in a lot of different contexts, and it's something I personally have found very useful in the last few years. Let me mention two pieces of speculation in this direction; I hesitate to write anything down because these are kind of half-cooked ideas. One question I have is the following. The way I explained this procedure in my lecture yesterday, I used this kind of perverse continuation idea, where you show that some perverse sheaves have full support, and so if you want to prove something about them, you can prove it over the generic point, where they're just local systems, all the curves are smooth, and you're in a very classical situation. So I used this kind of support theorem in the first step. One thing that's happened (that work of Ngo is pretty old) is that in the intervening years there's been a major advance in how we think about a lot of those older questions, which is the work of Groechenig,
Wyss, and Ziegler, where they basically give a much more flexible way of proving many of the results. Maybe not quite as strong as what Ngo does, but for all intents and purposes, for all applications, their techniques are just as good, and they're much more flexible; they have much less stringent hypotheses on the families in question. This is extremely beautiful work, and he alluded to it a little bit in his talk earlier this week. So one question that I have is: how does that interact with this circle of ideas? You can imagine that there's a more systematic way of thinking about some of the stuff I was doing yesterday, where you use their approach instead of this more heavily perverse-sheaf approach. I'm leaving it open like that because I don't really have a good feel for what the answer should look like, although I think it is reasonable to expect that there is some kind of fruitful interaction there. Now, the other question I have, which again is not even half-baked, is the following. The way this argument worked, you imagine gamma is some very complicated space curve singularity, and then you push it down onto the surface, where it becomes planar, and then you use the theory there. When you study reduced planar singularities, that is, locally planar singularities, the moduli spaces that show up, for instance the compactified Jacobian of a curve with locally planar singularities, are very closely related to what are called affine Springer fibers in type A, which came up in Eugene's talk last week and also in Frank Chen's talk yesterday. So one question I've always wondered about is: is there a role for these space curve singularities? Is there some analogous role for them to play in geometric representation theory, where you wouldn't take the cohomology of these kinds of spaces, you would take the cohomology with values in this sheaf that we've added on top of it; or maybe I'll write it like this, the contribution of this curve gamma to this moduli space. In this case I don't really even have an idea of what you could ask for, but I think this is a direction that is worth exploring. Actually, Dori Bejleri at some point had some speculation along these lines, although I don't know how precise it was. Okay. So what I'd like to talk about next: this is in some sense the most general evidence we have for this connection, and obviously still a pretty restricted situation, but there is some generality to it in which the singularities of gamma are very bad. And what I'd like to do first in today's lecture is talk about some non-reduced examples. You see, the relationship between these two theories in the integral case is the simplest: basically you look at the PT theory in this class gamma, you look at these Gopakumar-Vafa invariants of this class gamma, and they just match up, with some denominator thrown in. But if you have a non-reduced or a reducible cycle, then there are going to be all these corrections that come from effective sub-cycles.
And so in some sense the relation is much more complicated in that setting, and I think it's important to have examples where it's true there. We don't have so many, but there are two main sorts of examples where you can check it. So the relation now expresses the contribution of the stable pairs invariants for this class gamma in terms of the n_{g, gamma'} for effective sub-cycles gamma', and this is in particular where that kind of exponential really shows up. The first source of examples where it works out is by taking flops. A flop is a situation where you have a Calabi-Yau threefold and a birational equivalence with another Calabi-Yau threefold, which I'll call X', where you have some contraction of X and a different resolution of the contraction, where the exceptional loci are curves and X and X' are isomorphic in codimension one. So basically there are going to be some curves here that get contracted, and then you blow them up again and get some new curves in X'. In birational geometry this is a well-studied situation. In particular, one thing you get is that you can identify H2 on these two threefolds, although effective curves may or may not go to effective curves on the other side. And then there's a theorem of Bridgeland that gives you a derived equivalence between the derived categories of coherent sheaves on these two threefolds. What that means is that you can take coherent sheaves, like our one-dimensional sheaves, or a moduli space of stable pairs, apply this derived equivalence, and get a family of, now, not necessarily sheaves on the other side, but maybe more complicated complexes. So assume you have an effective one-cycle on X. You have to think a little bit about what this means, but it makes sense to talk about its pushforward to the other side. It'll be a cycle in the class corresponding to this identification between H2 of X and H2 of X', but you can actually refine that to get an honest-to-God one-cycle, which I'll call gamma'. Assume this is also effective; not always true, but it often is. And if you start off with an irreducible curve that isn't contained in the exceptional locus of X, you'll get some effective one-cycle on X', but typically it'll now be reducible and have multiplicities and so on. And so the theorem, which Yukinobu proved, is that if you calculate the contributions of the Gopakumar-Vafa invariants on each side, they just match up on the nose. When I say local, I mean I'm just taking the contribution at this one point of the Chow variety, the local invariants I defined yesterday. I'm going to put a little asterisk here: as always, to define these invariants there is a choice of orientation to make, so the assumption is that you can take compatible orientations on both sides, probably the natural Calabi-Yau orientations.
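(My attempt to write the statement just made in symbols; the notation n^loc for the local contribution at a point of the Chow variety is my own guess at what is on the board.) For a flop phi: X --> X' with induced identification H_2(X) = H_2(X') and strict transform gamma |-> gamma' of effective one-cycles, Bridgeland's equivalence Phi: D^b(X) = D^b(X') moves one-dimensional sheaves (and, up to corrections, stable pairs) across the flop, and the matching asserted above reads
\[
n^{\mathrm{loc}}_{g,\gamma}(X) \;=\; n^{\mathrm{loc}}_{g,\gamma'}(X') \qquad \text{for all } g,
\]
granted compatible orientation data on the two sides.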
So again, what he's doing is taking a one-dimensional sheaf, moving it over to the other side, and then analyzing what you get there. And similarly, this is an older result, again of Yukinobu and also of John Calabrese: the stable pairs invariants on X and X' can be related. It's a slightly more complicated relationship, but again there's some very explicit relationship between stable pairs on one side and stable pairs on the other side. And the technique of proof is always the same: you have a sheaf, or now a complex of sheaves, you apply this derived equivalence to get something on the other side, and you see how far away that is from a stable pair over there. I won't write down this relation explicitly, just because it's a little complicated. But the upshot is that, subject to these conjectures, if you know the correct relationship on one side of your flop, it implies it for the other as well: you can just follow what is supposed to be the expected behavior of these Gopakumar-Vafa invariants under this more complicated relation. And what's nice is that this flopped one-cycle gamma' can look very different from gamma. So this gives examples: gamma could be one of the examples for which we've already proven the theorem in the integral case, and then gamma' will look different, with other singularities, lots of components, multiplicities; non-reduced, non-planar. So this is the first main class of examples of non-reduced one-cycles where things work out. The second example is actually classical, but it still gives, I think, an interesting check, and it has to do with Higgs bundles. In fact, most of what I want to say for the rest of today's lecture is really about this case of Higgs bundles. This is a more classical moduli space, so let me just go over the definition in case you haven't seen it before. The starting point for Higgs bundles is a smooth projective curve. The moduli space of Higgs bundles on this curve of rank r, with chi set equal to d, parametrizes two pieces of data: a vector bundle E, locally free of rank r, with Euler characteristic fixed to be d, and phi, a twisted endomorphism of E, that is, a map from E to E twisted by the canonical bundle of my curve. So fiberwise it looks like an endomorphism of the fiber, but globally it's twisted by this canonical bundle. And okay, there's some stability condition to get this to work. So this is a smooth moduli space; it's a smooth quasi-projective variety. You could also consider more general kinds of twists. One of the better-understood ones is when I further twist by an effective divisor on my curve: all the same definitions, except now my twisted endomorphism goes to a twist by the canonical bundle with a little bit extra. This twist makes a surprising amount of difference in the theory. Both of these spaces, the regular Higgs space and this twisted variant, carry what are called Hitchin maps. These are maps to affine spaces. In the case of the original Higgs space, the way this is traditionally thought of is that you have this twisted endomorphism, and you just take the twisted characteristic polynomial.
So at every point of my curve I can take the characteristic polynomial of the endomorphism of the fiber; that gives me its coefficients, and as I keep track of how they behave with respect to the twist, I end up with an element of, excuse me, this is basically taking the characteristic polynomial of the Higgs field. And if I do it with poles, I can do the same thing: instead of K, I use K twisted by D, so I just take global sections of K(D) to the i. In either of these situations, these are proper maps, and generically the fibers of this map are basically Jacobians of some curve that covers C. So this is very similar to the situation I had yesterday: generically this looks like a family of abelian varieties, but then the fibers get very badly singular. It's similar to what I was doing yesterday when I was considering versal families of planar curves. So what does this have to do with the Calabi-Yau story? Well, these have been studied forever; they're classical moduli problems. The idea is that you can embed these problems into the kind of Calabi-Yau moduli problems we've been looking at, in kind of a dumb way; it's kind of amazing to me that this ends up being a useful way of thinking about it. So first, if I start off with a curve, I can associate to it a non-compact surface, which is just the total space of the canonical bundle, or the total space of the canonical bundle twisted by D. You can think of the first as a kind of non-compact K3 surface, since its canonical bundle is trivial, and the second as a kind of non-compact Fano surface. And if you think about what it means to give a one-dimensional sheaf on this surface whose support is proper over the curve, well, pushing it forward, it's the same as giving a vector bundle on the curve with one of these twisted endomorphisms. So one-dimensional sheaves on these surfaces really correspond to these moduli spaces of Higgs bundles. And what is this Hitchin map? The moduli of one-dimensional sheaves on the surface ends up just being the same as the corresponding Higgs space, and the Hitchin map ends up just being the map to the linear system on the surface. So the Chow variety of the surface, which really just remembers the support of this one-dimensional sheaf, ends up being exactly this base of the Hitchin map. But actually I don't want to work with a surface, I want to work with a Calabi-Yau threefold. So I'll do the same thing I've been doing all along to turn a surface into a Calabi-Yau threefold: I take the total space of the canonical bundle of the surface. This is going to be my X, which in both cases is just the total space of a rank two bundle on my curve. And now if I take my curve class beta on this non-compact Calabi-Yau threefold to be r times the zero section, then this moduli space I've been interested in, one-dimensional sheaves with Euler characteristic one, just ends up being the twisted Higgs moduli space with chi equal to one on my curve, when D is greater than zero.
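(A minimal transcription of the definitions just given, in notation of my own choosing.) For a smooth projective curve C and an effective divisor D >= 0,
\[
\mathcal{M}^{D}(r,d) \;=\; \big\{\, (E,\phi) \;:\; E \text{ a rank-}r \text{ bundle on } C,\ \chi(E)=d,\ \phi\colon E \to E\otimes K_C(D) \,\big\}^{\mathrm{stable}},
\]
and the Hitchin map records the coefficients of the twisted characteristic polynomial of phi:
\[
h\colon \mathcal{M}^{D}(r,d) \longrightarrow B \;=\; \bigoplus_{i=1}^{r} H^0\!\big(C,\, K_C(D)^{\otimes i}\big),
\qquad (E,\phi) \longmapsto \big(\operatorname{tr}\phi,\ \operatorname{tr}\wedge^{2}\phi,\ \dots,\ \det\phi\big).
\]
The case D = 0 is the classical Higgs moduli space; h is proper, with Jacobians of spectral curves as its generic fibers.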
And in the case without the poles, I just get the regular Higgs space on my curve, but crossed with a copy of A^1. That's because in the K3-like case, when I take the total space of the canonical bundle of the surface, that's just S times A^1, so there's always this trivial A^1 factor. So this is my moduli space on my Calabi-Yau threefold, and up to this factor of A^1 it exactly recovers the classical moduli spaces that people have studied. And then, as I was saying earlier, the thing we've been looking at is this moduli space of one-dimensional sheaves on the Calabi-Yau threefold, mapped to the Chow variety of the Calabi-Yau threefold, and in this case that just ends up being exactly the Hitchin map, maybe with this extra copy of A^1 floating around. What's nice about this geometry, from the perspective of what I'm interested in, is that it includes one-cycles that are non-reduced, reducible, and so on. The support of the sheaf, depending on which point of the Chow variety I'm looking at, can be non-reduced; the most extreme example is when the Higgs field is nilpotent, and then the corresponding support is literally r times the zero section, non-reduced with multiplicity r. And what are these Gopakumar-Vafa invariants in this case? Well, here M_beta is smooth, because it's just this Higgs moduli space, and then, after the shift, what I'm interested in is the perverse cohomology sheaves of the pushforward to the Chow variety, namely the Hitchin base, and I take their Euler characteristics as sheaves on this base. Concretely, one way of thinking about that is that I take the perverse filtration, a grading even, on the cohomology of M_beta, which is just this Higgs moduli space, and then, because I'm taking Euler characteristics, I forget the cohomological degree and just remember the perverse degree: I remember which perverse cohomology sheaf a class comes from, but not necessarily what its cohomological degree was. And what's nice is that this has been studied a lot. For D equal to zero, meaning the case of traditional Higgs bundles, this object is exactly the subject of what's called the P = W conjecture in non-abelian Hodge theory. This is a big subject, and it's super interesting, but I'll just give the part that's relevant for us. So again, D equals zero: Higgs bundles where I twist by the canonical bundle and nothing more. What this says is that if I look at a moduli space of Higgs bundles, there is a certain diffeomorphism, some kind of C-infinity diffeomorphism, with the character variety of the curve, which involves maps from the topological fundamental group of the curve with a puncture to GL_r, modulo conjugation, with the property that the loop around the puncture gets sent to a certain root of unity times the identity. This is what's called the twisted character variety. This is a diffeomorphism, but of course it's not an algebraic map, because the right-hand side is an affine variety. It's some kind of exotic construction, but since you have a diffeomorphism, you can identify the singular cohomology.
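(Stepping back, the local-threefold dictionary set up above, summarized in symbols I am supplying myself.) With S the surface and X the local threefold,
\[
S = \operatorname{Tot}\big(K_C(D)\big), \qquad X = \operatorname{Tot}\big(K_S\big), \qquad \beta = r\,[C],
\]
pushing forward along S -> C identifies one-dimensional sheaves with Higgs data, so that
\[
M_{\beta,\chi=1}(X) \;\cong\;
\begin{cases}
\mathcal{M}^{D}(r,1), & D > 0,\\[2pt]
\mathcal{M}^{0}(r,1)\times \mathbb{A}^{1}, & D = 0,
\end{cases}
\]
and the map from M_{beta,1}(X) to the Chow variety of X becomes the Hitchin map h (times the extra A^1 factor when D = 0).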
So on the one hand, the cohomology of the Higgs space gets identified canonically with the cohomology of this character variety; that's a known thing. The conjecture, which is due to de Cataldo, Hausel, and Migliorini, is that on the left-hand side I have this extra information, the perverse filtration coming from this Chow variety, which is just the Hitchin base, while on the right-hand side, since this is an affine variety, the cohomology has a non-trivial weight filtration, and these are also identified. I have to scale the grading by two or something like that, but I'm going to skip that. So why is that relevant for us? Well, this is the thing we're interested in: if I forget the cohomological degree and just remember the perverse degree, that's exactly what we want for these Gopakumar-Vafa invariants. If I believe this conjecture, then I can compute it instead by looking at the cohomology of this character variety, where I only remember the weights. And this is something you can actually compute. This is due to Hausel, Letellier, and Rodriguez-Villegas (sorry, I'm just going to use initials for them): if you're only interested in the weight polynomial of the character variety, you can do it. It's something you compute by point counting. It's a cool story, because basically you replace this group GL_r with GL_r over a finite field, and it's something you can compute using the representation theory of GL_r over a finite field. And then what was observed later on by Chuang, Diaconescu, and Pan is that, when you go through this entire procedure, this matches on the nose what you would expect from the stable pairs theory of X, X in this case being this local curve, the total space of a rank two bundle over the curve, which on the numerical level is not hard to compute. So the upshot here is that basically any time you have the P = W conjecture, it implies this kind of Gromov-Witten-PT relation, sorry, the Gopakumar-Vafa-PT relation, in this case. And so in particular we know this conjecture, for instance, when the rank is 2; this was done in the original paper where the conjecture was formulated. We also know it when the genus is 2 and the rank is arbitrary, by recent work of de Cataldo, myself, and Junliang Shen. So in particular, in these cases everything here is ironclad, and you really do get provable cases. What is special about genus 2? Sorry, say that again? What feature of genus 2 do you use in the proof? I'm sorry, can you try that one more time? You mean, why can we only prove it in genus 2? Yes. Oh yeah, well, that's kind of a complicated story. The way the proof goes in this paper is that we prove the theorem by embedding a genus 2 curve inside its Jacobian and then studying one-dimensional sheaves on this abelian surface. And what's special about genus 2 is that the cohomology of the Hitchin space is surjected onto by the cohomology of this compact hyperkähler variety.
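(For reference, the shape of the P = W statement discussed above, in symbols I am supplying.) Writing M_Dol for the Higgs moduli space with the perverse filtration P induced by the Hitchin map, and M_B for the twisted character variety with its weight filtration W, non-abelian Hodge theory gives an identification H^*(M_Dol) = H^*(M_B), and the conjecture asserts
\[
P_k\, H^{m}\big(\mathcal{M}_{\mathrm{Dol}}\big) \;=\; W_{2k}\, H^{m}\big(\mathcal{M}_{B}\big) \qquad \text{for all } k, m,
\]
so the perverse Euler characteristics entering the Gopakumar-Vafa invariants can be read off from the weight polynomial of M_B.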
There's a theorem you get for higher genus too, but you only get it on some piece of the cohomology of the Hitchin space. So there's a version of this theorem that works for arbitrary genus, but it's not strong enough for this application: you only get a certain subalgebra of the cohomology for which the statement holds. And what's going on in the rank 2 case is that, even though the genus is arbitrary, you actually have a complete presentation of the cohomology of these Hitchin spaces, so you can write down all the generators and relations, see where they go, and match them up with the pieces of the filtration you want. So these two proofs are very different: the first one uses a lot more information about the cohomology, while the other uses the fact that there's a compact geometry that governs the story without any loss of information. What are the next cases to think about? My feeling about this whole relation is that it's still at the stage where every example sheds some light on what's going on. For me, the most accessible one, really low-hanging fruit that we didn't pursue, though not for strong reasons, is this: when we studied local surfaces, meaning the total space of the canonical bundle of a surface, we restricted ourselves to the situation where the pushforward of the one-cycle was integral. But I think with a little bit of elbow grease it should be possible to handle more: reduced but reducible should be okay. The statement is more complicated, because now you have corrections coming from the irreducible components, but the situation is sufficiently nice; indeed this theorem of myself, Yukinobu, and Junliang Shen has already been extended in the locally planar case to the setting with the right corrections. More interesting in the local surface case is the non-reduced situation. In fact, the simplest example that I think is still open is one where the moduli spaces are still smooth: for instance, if I take the local threefold of a del Pezzo surface, the moduli space is smooth, and I think understanding what's going on there is very interesting, and accessible as well, although maybe not as easy. What's interesting in this case is that a stable one-dimensional sheaf on this threefold is scheme-theoretically supported on the surface, but on the stable pairs side, the one-dimensional sheaf I'm taking sections of will be thickened off of the surface, and that leads to some contributions; you can see that in the conjecture. An example that Jim Bryan mentioned to me, something he's interested in, I should mention: Jim Bryan's example is still in the local curve geometry, like what I was doing in the Hitchin case, but without the twist that makes the spaces all smooth.
So for instance, I could take a curve together with a theta characteristic, a choice of square root of the canonical bundle, and he's been interested in understanding what's going on in that kind of geometry. And then the last specific example that I think would shed a bit more light on what's going on is the following. Again it's a local curve geometry, but I'm not going to assume that the normal bundle is split. So let's take N, a generic rank two bundle with determinant the canonical bundle of the curve, and then I just take the total space of this rank two bundle. This is a little bit like what you would get in the situation I mentioned on Wednesday: one way of thinking about these conjectures is to imagine an ideal situation where all the curves in your Calabi-Yau threefold are smooth and isolated and rigid in some sense, and to try to imagine what each would contribute. This is an example of what you would get if you thought about it that way. But what's interesting about it, and you can see it in this case, is the following. Consider the simplest case that I already don't know how to do, which is beta equal to two times the class of the curve. Set-theoretically, if I look at the moduli space here, what I get is just rank two stable bundles on the curve; in particular, none of the sheaves that show up are thickened in any way. And so you might think, well, this is great: we've known forever the cohomology of the moduli of rank two stable bundles, so I should just be able to compute the invariants and see what happens. But what happens is that this is not a scheme-theoretic equality. M_beta actually has some non-reduced structure, which is a little mysterious, in the sense that you can see the locus on which the non-reduced structure is supported, but I don't have a good feel for what it looks like. Why you have this non-reduced structure is something you can see as follows: if F is a rank two stable bundle, the tangent space of the moduli space of stable bundles on the curve is something like Ext^1 on the curve of F with F. But there's a long exact sequence, and there's a non-trivial cokernel here; generically it's zero, but non-generically there's some extra piece, which is something like, I want to say, Hom of F into F tensor N. This group can be non-zero, and in particular the tangent space of M_beta can be strictly larger than this one. So there's some kind of genuinely non-reduced structure on this moduli space that affects the cohomology.
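(The deformation-theoretic mechanism just described, spelled out in the standard way; I am guessing this is the comparison the speaker has in mind.) For the zero-section embedding i: C -> X = Tot(N) and F a vector bundle on C, the low-degree part of the usual spectral sequence for Ext of pushforwards gives
\[
0 \longrightarrow \operatorname{Ext}^1_C(F,F) \longrightarrow \operatorname{Ext}^1_X(i_*F,\, i_*F) \longrightarrow \operatorname{Hom}_C(F,\, F\otimes N) \longrightarrow \operatorname{Ext}^2_C(F,F) = 0,
\]
where the middle term is the tangent space to M_beta at i_*F and the last term vanishes because C is a curve. For generic N and stable F the group H^0(End(F) otimes N) is zero, but on special loci it is not, and that jump is the source of the non-reduced structure.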
All right. So in terms of studying this connection, these are, I think, the main examples to think about next. But let me conclude with another direction, which I was actually planning to spend maybe half an hour on, but I'll just say a few words about it. The other direction in this circle of ideas has to do with what are called chi-independence questions. In some version this is conjectured in my work with Yukinobu, and then more systematically in the paper he wrote on the semistable case. It has to do with a choice I made at the very beginning, when we defined these invariants and this approach to Gopakumar-Vafa invariants. I started off with some moduli space of one-dimensional sheaves; which one did I take? I took sheaves whose support was in class beta, and then I fixed the Euler characteristic to be one. And so you could ask: what was special about chi equal to one? The answer, in some version, is just laziness: when chi is equal to one, you don't have to pick a polarization to define stability. But if you're willing to make that kind of choice, then you could have picked any value for chi and you would still get a moduli space. The simplest case is when you pick some integer k and assume that, in an appropriate sense, it's relatively prime to the curve class, so you don't have to worry about semistables. So now you get a moduli space depending on k and on sigma, where sigma is some stability condition you have to choose as well. And just as before, you can define some invariants, now depending on this integer k, the value of chi, and also on the stability condition. The first conjecture, which we made in our paper just to deal with this objection, was that in fact none of this matters: whatever choices you make here, the invariants are independent both of the stability condition and of this choice of chi. And in fact the independence of the stability condition was proven later on by Yukinobu, I think under some assumption that these orientation issues work out as you want. You could even ask for something stronger; we never formally made this conjecture, but: you have this map from your moduli space to the Chow variety, and the Chow variety doesn't require any choice of stability and doesn't depend on what k is, it only depends on beta. And then you can just ask for the pushforward of this DT sheaf, which depends on sigma and k, to be independent of k and sigma. This is a little surprising; depending on your point of view, it's either surprising or not. Why is it not surprising that something like this might be true? If I specialize all the way to Euler characteristics, if I just take the genus zero statement, that n_{0,beta} defined with sigma and k is independent of those choices, that was actually conjectured a long time ago; it's in fact equivalent to this strong rationality conjecture that I stated, something people have been interested in for some time, and basically it boils down to the independence of these numbers of all these choices. So this is just a souped-up version of that, and whatever reason is making the numerical statement true is maybe also making the sheaf-theoretic statement true. So from the point of view of this DT perspective, we did not view this as a big leap. But it is a little more surprising from the classical point of view. Well, surprising is maybe not the word, but it's a little striking.
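(Before the classical comparison, here is the independence statement just formulated, written out; the notation is mine, with phi denoting the DT, i.e. vanishing-cycle, sheaf on the moduli space.) For the support map pi from the moduli space to the Chow variety, the strong form of the conjecture asks that
\[
R\pi_{*}\,\phi_{M^{\sigma}_{\beta,\chi}} \;\in\; D^{b}_{c}\big(\operatorname{Chow}_{\beta}(X)\big)
\]
be independent of the Euler characteristic chi and of the stability condition sigma (up to the usual orientation issues); taking perverse cohomology sheaves and then Euler characteristics recovers the numerical chi-independence of the n_{g, beta}.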
It's unexpected in the following sense. Say I take a smooth curve of genus g, and then I take the moduli space of stable bundles on this curve of rank r and degree d, and again, let's assume I'm still in the coprime case. You could ask how the cohomology of this thing depends on the choice of d. Well, I can always twist by a degree one line bundle, and that relates one moduli space to the moduli space where the degree is shifted by r. But what Harder and Narasimhan showed is that, other than this move, and other than taking duals, in general these cohomologies are different. For instance, I think already when the rank is five and d is one and d prime is two, these spaces are different; these cohomologies are different. And that's a lot like what I'm asking here: I have a moduli space of things supported on a one-dimensional scheme, and so you could ask what's special about the Calabi-Yau threefold situation that isn't happening in the case of a curve. I don't really know a great answer to that. But one thing that does get explained from this perspective is the following. This is a very classical question about stable bundles on a curve, and there you don't get any kind of chi-independence statement. However, if I look at Higgs bundles, or twisted Higgs bundles, then it is in fact true that the cohomologies of these Higgs moduli spaces, with or without the twist, are independent of the degree. And this has been proven a couple of different ways. I think maybe the first proof was basically by just calculating, finding a formula for these Poincare polynomials. So this is essentially done, I think, by Schiffmann, and in the twisted case maybe Mozgovoy, Schiffmann, and O'Gorman is what I wrote down; I think that's the final paper that proved this equality. And so you might ask: why is it that Higgs bundles have this kind of chi-independence, while plain stable bundles on a curve do not? From my perspective, the explanation is just what I said before: these Higgs moduli spaces are secretly a kind of Calabi-Yau threefold moduli problem, while stable bundles on a curve are not; there's no embedding of the curve into a Calabi-Yau threefold such that the moduli space of one-dimensional sheaves is just the stable bundles. But then, to push it forward, you could ask for a little bit more. Here I just focused on the case when beta and k were coprime, because then I don't have to worry about semistable sheaves, and there's some stack and some coarse space. You could ask if there's a way of pushing beyond that, and the answer is yes, there is. So now there exist strictly semistable sheaves; you'll have some stack of semistable sheaves with some coarse moduli space, which again maps to the Chow variety. And okay, maybe I won't get into the intricacies of the construction here, but there is basically a way of modifying everything I said to handle this. This was developed by Yukinobu: he produces a perverse sheaf on this coarse moduli space, which is now sometimes called the BPS sheaf, and then you can run the same story. And so already in these kinds of classical moduli spaces, if you specialize, you get some interesting statements.
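In symbols, the chi-independence statement for the Higgs moduli spaces that I have in mind here is, roughly,

\[
H^{*}\big(M^{\mathrm{Higgs}}_{r,d}(C)\big) \;\cong\; H^{*}\big(M^{\mathrm{Higgs}}_{r,d'}(C)\big)
\qquad \text{whenever } \gcd(r,d) = \gcd(r,d') = 1,
\]

and similarly in the twisted case; this is in contrast with the moduli of stable bundles themselves, where, as just discussed, the cohomology genuinely depends on the degree beyond the obvious identifications.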
So here is something that was proven recently by myself and Junliang Shen: take these twisted Higgs bundles, but now in the non-coprime case, so this is some singular coarse moduli space. Well, it turns out that whatever this mystery sheaf is (I haven't told you how to define it), it ends up just being intersection cohomology; this is something that was proven by Meinhardt. And so this is what you get, and this is actually a theorem that you can prove: the intersection cohomology of this singular twisted moduli space is independent of d, and in particular just equals the ordinary cohomology in the coprime case. And this is the kind of classical statement that you could have formulated in the 70s or something like that, and the only reason I can really see to expect something like this to be true is by thinking of it as a specialization of these kinds of considerations. But the analogous statement when I remove the twist, if I just consider regular Higgs bundles, is still open. There you don't just get intersection cohomology; the BPS sheaf is going to be something a little bit more mysterious, and already in that case we don't know how to prove the statement. All right. Okay. So I ended up using the full hour, which was not my intention. But let me stop here and thank you for your attention. Thank you. Thank you. Any questions? Yeah. Yeah. So why does the trick that you used with Junliang Shen to prove the Hausel-Thaddeus conjecture not work for, for example, this kind of independence statement? Oh yeah, yeah. That's interesting. Right. So that trick, which I meant to actually talk about a little bit, let me just say a couple of words about it. The same technique that I used in yesterday's lecture can also be used to study questions about Higgs bundles. Higgs bundles form a smooth moduli space, so you wouldn't think it would be so helpful. But the idea is that you can embed them inside these twisted Higgs bundles as a critical locus. This allows you to take statements that are easier to prove in the twisted case and turn them into statements for the untwisted case; it's exactly the same philosophy as in yesterday's lecture. And so what happens for these kinds of questions? Well, in order for this to work, it's important to work with stable Higgs bundles. The analogous statement is not true for semistable Higgs bundles: the critical locus is bigger. So you get something that's true, but it's the cohomology of some bigger moduli space, so it's not actually the thing that you're interested in. And so then what you have to do is somehow relate that bigger space with the smaller moduli space that you actually care about. Does that answer your question? Yes, thank you. Any other questions? Well, let's thank the speaker again. Thank you very much.
|
In the first part of the course, I will give an overview of Donaldson-Thomas theory for Calabi-Yau threefold geometries, and its cohomological refinement. In the second part, I will explain a conjectural ansatz (from joint work with Y. Toda) for defining Gopakumar-Vafa invariants via moduli of one-dimensional sheaves, emphasizing some examples where we can understand how they relate to curve-counting via stable pairs. If time permits, I will discuss some recent work on χ-independence phenomena in this setting (joint with J. Shen).
|
10.5446/55162 (DOI)
|
Ladies and gentlemen, I apologize for not being able to give my lecture in German. However, I do think that you will be able to understand my English better than you would be able to understand my German, even if you don't speak English. So I shan't try to speak German. I might say a word about the relation between my talk this morning and the talk of my friend and colleague, Professor Debreu, yesterday morning. There was no pre-planning of this. As a matter of fact, there was no pre-planning that we would happen to be the two members of the economics profession to represent that profession at this meeting. But since we are, I think you will see something of the range of approaches to the problems of economic behavior. Professor Debreu yesterday discussed theories of economic equilibrium, very general and powerful abstract theories. I will be concerned this morning primarily with departures from equilibrium, with what happens to the system when it is not operating at equilibrium. Professor Debreu discussed the construction of a pure deductive theory that proceeds from a very few strong assumptions. I will primarily be talking about the problems of verifying economic theories empirically. Professor Debreu's work represents a culmination of a long period of work on the formalization of economic theory, what is sometimes described as the theory of subjective expected utility (maybe I should say a word about that term), a development of formalization that has been taking place over the past 40 years, or, in a more general sense, if we don't just think of the mathematization of it, has been continuing all the way since Adam Smith. On the other hand, I will be discussing some trends in economic research which today are just becoming barely visible in economics. Perhaps we can even find some economists who haven't heard that this is happening. The question I will raise is to what extent these new developments will characterize the next 40 years of economics. Of course, it is the students in the audience here who will eventually discover the answer to that, who will be around to see what did happen and what the end of the story is. I can only tell you the beginning of the story this morning. My remarks will be largely methodological, and you may regard them as a case study in a particular scientific discipline, a case study on the relations between theory and empirical work in a field of study which has had a very powerful development, as I indicated, of formal deductive theory, but where in the past it has proved extremely difficult to mount a successful program of empirical research, where by a successful program I mean a program which really subjects the theory to very strong empirical tests. Now I know that we economists here are much outnumbered by chemists, but I mean all of these words in exactly the same sense that chemists will understand them. We're talking about science; we're talking about trying to understand a particular set of phenomena, that is, the behavior of human beings when they are engaged in activities which we call economic, and understanding that behavior according to the same canons of methodology, the same rules of the game, that we would expect in any scientific endeavor. And my remarks may enable you, I hope they will, to understand why economists appear very often to disagree about public policy.
When our governments sometimes turn to economists for advice, which they do occasionally (they follow the advice even more occasionally), it appears that they can get a wide variety of advice from economists. And that might lead some to the conclusion that economics is sort of whatever anybody thinks it is, and I hope that my remarks will show that that's not really the case. Alfred Marshall, the great English economist, said that economics is a part of the study of man. And if that's true, of course, we'd expect economics to have a lot to do with psychology, because psychology certainly is a study of man. And yet, if you look at the universities in our country and certainly in the United States, the two disciplines have very little to do with each other. In the United States, I don't think I know of a single Department of Economics that requires a single psychology course of one of its PhD candidates, or, I regret to say, vice versa. So we can live in very good ignorance of each other's work. Now, there's a very good and deep and important reason for this relative indifference, this mutual indifference. And that is that the two fields, while they are both concerned with human rationality (psychology, of course, is also concerned with non-rationality and even with irrationality), have very different views of human rationality insofar as they are concerned with it. And sometimes we apply to the kind of rationality in economics the term substantive rationality. That is, economics is concerned not with human frailty; economics is concerned with what is the action, what is the behavior, that the situation calls for. If you want to study mountaineering in Switzerland or here, then if it's good mountaineering, mostly you study mountains. But if you want to understand why some people have mishaps on mountains, why sometimes they fall over the cliff, then you have to study people; you have to study the limits on their adaptation. And so as against the substantive rationality of economics, we often talk about procedural rationality, the processes that human beings go through in order to try to behave rationally in certain situations. And psychology primarily concerns itself with procedural rationality, whereas historically economics has concerned itself, as Professor Debreu discussed yesterday, with what I would call substantive rationality, what the situation calls for if you are, for example, to maximize utility. The theory of substantive rationality, common to almost all economists, is a theory which starts out with some very important assumptions: that each person has a consistent set of preferences, often described as something called a utility function; that everyone is faced with a fixed and given set of possible strategies (in yesterday's talk, those possible strategies were possible combinations of commodities or exchanges of commodities); and that a person can calculate the consequences of picking one of these strategies for the resulting utility, or at least can make a probabilistic computation if there is uncertainty in the situation, the uncertainty being expressed by probability distributions of outcomes. Now, under these circumstances, what does the rational actor do? The rational actor simply selects that alternative (well, simply is a tricky word there) which maximizes utility in the face of the predicted consequences.
Now, the procedurally rational person of psychological theory, in the first place, in most forms of the theory, is not maximizing anything, but trying to reach a satisfactory level of achievement, trying to reach some goals, which may or may not be maxima of something. Using very incomplete information, using very limited computational capabilities, with or without computers, computers are very nice, but they're not quite equal yet to the complexity of the world that we live in, that a very large part of the task is not to select among given strategies, but a very large part of the human decision task and problem solving task, is to develop the strategies themselves, to find out what the alternatives are that we may choose among, to invent new alternatives, and then of course, to evaluate them in the light of the knowledge that's available, choose a strategy which is likely to reach one's goals with some reasonable efficiency. Now, there's some real advantages in taking the classical economics approach of subjective expected utility, a substantive rationality, and I think some of those advantages were illustrated yesterday in the power of the theorems that you can derive about efficient use of resources. It's a very economical theory, a very powerful theory that leads you to very strong conclusions about how resources will be employed when people are behaving rationally, or if you want to turn to a normative side of how you would have to employ them to reach efficient solutions on the Pareto efficiency boundary that was described yesterday. Now, the theory, the classical theory also has some disadvantages, and I suppose one of the big disadvantages from the standpoint of scientists, whether chemists or what, is that the theory is possibly false. That's quite a disadvantage for a theory, empirically false, and that if it is, it may lead economists to incorrect conclusions about the economic system. Now, I believe, not all economists believe, I'm not representing the profession here, that this is not only a hypothetical possibility, but an unfortunate reality, that there are serious discrepancies between the actual economic behavior we see in the world and what this theory tells us about the world. I mentioned earlier that if economists are asked about, for advice about economic policy, you frequently get different kinds of advice from different economists. But you have to be very careful in interpreting that, because as I shall show in a moment, in fact, there's a very large area of agreement among almost all economists ranging along the political spectrum. I'll leave the Marxist out for the moment, although many Marxists would fall under this definition. But certainly in the political spectrum, from the people we call Keynesians, I guess I'm on the left here, you're right, my left, the Keynesians, over to the monetarists, we can put Milton Friedman and Bob Lucas, people of this sort, over here. They give very different advice, and yet I'd like to point out that really the difference between them are minute, but critical. The subjective expected utility theory in most of its forms has as one of its consequences, one of its implications, that all markets are cleared, that you're not left around with piles of unused things, so to speak, after the market has done its thing, at least in a static model, you are not, that all markets are cleared at the market prices, that if they're not cleared, something happens to the prices to re-equilibrate the markets. 
And a particular consequence of that is, of course, that the labor market is cleared, so that at equilibrium, there is no possibility of unemployment. All people who offer their services at the going wage get employment in the world of subjective expected maximization, utility maximization theory, in equilibrium. Now some of us have looked at the world and have decided that, in fact, this isn't quite true, that there is something called unemployment. Not all economists quite agree with that if you add the adjective voluntary. Milton Friedman, at least a good deal at the time, seems to say that there is no voluntary unemployment. They're just people unemployed at the current wage because that isn't a wage they're prepared to accept. Most of us, however, think there is a phenomenon of unemployment, and some of us even think something ought to be done about it. Now let's look as an example of how economics theory deals with this when the equilibrium theory makes such unemployment impossible, how unemployment gets reintroduced into the theory, and to show you the commonality of thinking of economists, I'd like to take Keynes over here and let's say Lucas over here and see how they get it. When they get it by following the assumptions of the substantive rationality, the perfect rationality theory, accept that they are willing to introduce perhaps one or two departures from those assumptions. People are rational in terms of that theory, but not quite rational. And the variety of economic theories of the business cycle and unemployment derives not from the rationality part of the theory, it derives from what assumptions are made as to exactly where the departure from rationality takes place. Take Keynes, for example. If you read the general theory, his famous book on stagnation, if you read that general theory, his reasoning is very orthodox. All economists will recognize Keynes as reasoning like an economist with maximization, rationality, and all that. Except, well there are a couple of excepts, I'll just mention one. Except that labor isn't quite rational. Labor and labor unions get confused according to Keynes about the difference between a money wage and the real wage. They don't notice the changes in the price level. And so they bargain in terms of the money wage when they should bargain in terms of the real wage, and I won't go through the derivation, but you get unemployment, unemployment out of this. The rest of Keynes is pretty orthodox reason. Well how does Lucas get it? Lucas is a neoclassical economist, father of the so-called rational, almost father of the so-called rational expectations theory, who does believe that people are rational indeed, but he also has articles and books about the business cycle. Where does his cycle come from? How do you depart from this equilibrium? Well in his case, businessmen are occasionally a little stupid, wrong word, a little irrational. Businessmen sometimes mistake changes in the general price level for changes in prices in their industry. They also suffer what is called the money illusion. So Keynes, the great liberal, thinks labor is stupid. Lucas, the great conservative, thinks that businessmen are stupid. And out of this, these little departures from perfect rationality, you get quite different consequences and quite different policy proposals as to what to do about it if anything. You see the real action in these theories. That is, the real action in trying to fit particular sets of facts in the society, the unemployment facts in this case. 
The real action doesn't come out of the assumptions of rationality. The real action comes out of where you assume people are unable to behave perfectly rationally, where they depart from this: the nature of human bounded rationality, the nature of the limits on human information and human knowledge and on human computational ability. That's where the action is coming from in the theory. If that is the source of the action, then clearly we need some empirical data, empirical data about how and where in fact people depart in their behavior from the assumptions of perfect rationality. Now, we really wouldn't have to go to the bother of doing that kind of detailed empirical investigation if in fact the classical theory, this theory of rationality and the resulting equilibrium, had made a large number of correct predictions. Even if it didn't explain unemployment, we could say, well, it's a good zero order or first order approximation to the world and we can do a lot with it; maybe sometime we ought to get around to putting some footnotes on it, some qualifications, but we can do a lot with that theory. But the question is, has the theory ever made any striking predictions about the world? And the answer is that in fact very little of the theory has been empirically tested even at the level of markets, and the tests that are made have been highly indecisive, due to the large role played by these auxiliary assumptions about the departures from the theory, boundary conditions if you like, in the models; and so I don't think we can be reassured by the state of verification of the theory. I have time to give you maybe just an example or two. It's a well-known empirical fact that if you take business firms in a particular industry in a country, or even all the business firms in a country, like the so-called Fortune 500, the 500 largest business firms in the United States, and you arrange them according to size (sales volume, employment, the exact criterion used doesn't matter), then you find a very curious regularity. If you plot the rank-size distribution on log-log paper, which is always a great thing to do if you want something smooth, you find that they form almost a straight line. Now you go back to classical theory and ask why this is so, and you encounter a thundering silence from the classical theory. Without going into details, I might say that the classical theory either predicts that all firms in an industry should be the same size, because that's the optimal size, the size at which you have the least cost of production, or it predicts that different firms have different resource opportunities, different access to resources, and therefore the size distribution can be anything, depending on the distribution of resources. Well, the real world isn't that way. Again and again you encounter these linear distributions on a log scale, and you begin to ask why. And in fact stochastic models that show this phenomenon to be a steady state of a stochastic process are easy to construct; they have been constructed and looked at. Alas, those stochastic models don't make any assumptions about human rationality. Those stochastic models proceed from assumptions like this: that firms are striving to grow, and that their opportunities for growth are on average roughly proportional to how big they are already.
If you're a big firm you have access to large investment funds from the banks, you have access to a large marketing organization; if you're a little firm you have access to small resources, small marketing organizations. If you assume that all firms on average have the same expected percentage rate of growth, plus one or two other boundary conditions, then it turns out that the steady state distribution of firm sizes is exactly what's observed empirically. So here's a case where classical theory is helpless, where one can build stochastic models of what's actually going on, and there's an immense gap between the two. I could give you other examples of the same thing, but that one will give you the flavor of what some of the difficulties are. Now I should not imply that economics or economists have been indifferent to the problem of empirical testing of economic theory, in spite of the seductive beauty of this deductive structure that economics has. In fact, alongside the very rapid growth and development of mathematics in economics during the past 40 years has been an equally rapid and impressive growth in econometric methods, in methods of fitting quantitative data. And the theory of data fitting, of econometrics, today is indeed extremely sophisticated, an extremely important part of mathematical statistics. In spite of that, I think there's a good deal of disappointment in the profession today as to what has been accomplished by the development of these very sophisticated techniques of econometric analysis of data. And the source of those difficulties is very obvious: the data are bad. Economists mainly have to rely on aggregated data about the economy, usually collected for public purposes like taxation or census purposes, and available on a very gross scale at infrequent intervals, not exactly the kind of data you would choose for tracking a dynamic system. And what we have tried to do in economics, in order to extract the information from these extremely noisy data, is to improve our statistical methods. Historically, that has not usually been what the sciences have done with noisy data. In chemistry, when you don't have the kind of data you need to understand phenomena, one of the things you try to do is to invent new instruments. We saw some beautiful examples of that yesterday: the use, no longer new, of crystallographic techniques to discover molecular structure, and more recently the use of NMR techniques to discover molecular structure, and so on. I guess physical scientists are sometimes very patient. If they don't have the right data, they work hard to see if they can change their methods of doing research until they get the right data. Perhaps economists have been a little impatient in trying to develop tools for extracting information from noise, and I think that the effort has only been very modestly successful. Now, that, of course, doesn't mean that we should consider abandoning empirical research in economics, but that we have to look at other ways of doing it.
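To illustrate the kind of stochastic process referred to a couple of paragraphs back, here is a minimal simulation sketch. It is only a schematic reconstruction of a proportional-growth ("Gibrat/Yule-type") model, not the precise model from the literature; the entry rate alpha, the function name, and the parameter values are all assumptions made for illustration.

```python
import random

def simulate_firm_sizes(steps=200_000, alpha=0.05, seed=0):
    """Proportional-growth sketch: each new unit of business either founds a
    new firm (probability alpha) or goes to an existing firm chosen with
    probability proportional to its current size (probability 1 - alpha)."""
    rng = random.Random(seed)
    sizes = [1]   # current size of each firm; start with one firm of size 1
    owners = [0]  # one entry per unit of size, recording which firm holds it
    for _ in range(steps):
        if rng.random() < alpha:
            sizes.append(1)                # a new firm enters with one unit
            owners.append(len(sizes) - 1)
        else:
            i = rng.choice(owners)         # picks a firm proportionally to size
            sizes[i] += 1
            owners.append(i)
    return sorted(sizes, reverse=True)

if __name__ == "__main__":
    sizes = simulate_firm_sizes()
    # On log-log axes, the (rank, size) pairs lie close to a straight line,
    # the regularity described in the lecture.
    for rank in (1, 10, 100, 1000):
        if rank <= len(sizes):
            print(rank, sizes[rank - 1])
```

The point, as in the lecture, is that nothing in this sketch appeals to maximizing behavior; proportional growth plus entry of new firms is enough to produce a near-linear rank-size pattern on a log scale.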
And my own belief, and the belief of a small but growing band of economists in the United States and also in Europe, is that the place to look is not primarily at improving these very aggregate statistics that describe whole economies or attempt to describe whole economies or attempt to describe whole industries, but that we have to go back to the source of economic action, that is, the decision making and problem solving of individual human beings and study behavior at the individual level and perhaps up to the organizational level, because, of course, many important economic decisions are made in a context of a business organization where many people are involved in the decision making process. At that level, by looking at actual human behavior, we can find out something about how human beings with their very limited abilities, our, I shouldn't say they are, our very limited abilities to calculate, to use all of the information about the world that's around us and the lack of information as well, how we human beings decide what problems to focus our attention on, because we're not attending to everything at once. Even nations, when they worry about the environment, forget about energy, and when they worry about energy, about the environment, many other problems of that kind, how we human beings simplify reality, how we represent problems, how we carve out a model of a situation that we can cope with, how we go about discovering possible lines of action, how we go about discovering new products, for example, in the economic sphere, and then how we compare them and how we reach our conclusion, how we evaluate consequences. This is the kind of empirical work that is probably required if we are to resolve these very critical uncertainties in the application of economic theory. And fortunately, at this particular moment of history, there is becoming available a large body of empirical work, especially work carried out since the Second World War, in psychology, in computer science that is artificial intelligence, which provides the foundations, and maybe a little more than the foundations, for a very detailed theory about how human beings actually go about solving problems. This theory doesn't have anything like the mathematical elegance or the neatness of the general equilibrium theory that we heard sketched out yesterday. But it does have the advantage of being very firmly based on empirical observations, primarily to date observations of human beings solving problems and making decisions under laboratory conditions, but problems of real difficulty. The chairman described me as working at present on theoretical physics, not quite so. I am working at present on the way in which physicists go about solving problems. It's a little different than working on theoretical physics. How physicists go about solving problems? And we can observe some of our faculty members and our graduate students are nice enough to come into the laboratory where we give them sometimes quite difficult problems, and of course they successfully solve them, but not until we've gathered a good deal of interesting data about how they go about it. There are many ways in which we can make good empirical observations on human data, depending on how microscopic or macroscopic those observations need to do. We can conduct surveys. A lot of problems in doing that, but today we have a considerable body of knowledge about what kinds of questions you can ask people. Can you ask them about their intentions? 
Can you ask them about their preferences? Can you ask them about their expectations? We know you usually can't ask them about their motives, for example, but you can ask them about some of those other things. Today we are learning how to carry out human experiments in simple market situations, bringing together a group of people under a certain circumstances in which they can trade something, give them an economic incentive, probably not a million marks, but a few marks or dollars, give them an incentive for behaving as rationally as they can, and study those markets. Very interesting research going on now, what is called experimental economics. In organizations especially, we can actually observe people behaving. Sometimes we can even ask them to think aloud while they are making decisions or solving problems. And today we have a good body of theory as to what you can and can't learn from a thinking aloud protocol. What part of the human thinking process can be verbalized and what part is going on there and what you usually call the unconscious. We have a good understanding of that boundary, and so we know how to use pretty well thinking aloud behavior. And with these tools we can go out, actually, not only to the laboratory, we can go out into organizations. You usually have to get permission from a few people, but in fact business organizations are often quite willing to allow their decision processes to be studied, and there are an increasing number of people who are undertaking that difficult task. Now what about theory? We don't just want a lot of grubby facts, although some of us, if we have to make the choice, might like grubby facts rather than only theory. But we'd like to have, of course, in any reasonable science, we'd like to have both. The great accomplishment of chemistry in my lifetime was to go from a situation where at least some people trying to learn chemistry, maybe not chemistry, but other people trying to take courses in chemistry, felt somehow or other weighed down by the facts in relation. We wish there was a little more theory. Today it appears that, of course, the balance has been very much redressed in chemistry. So here too we want to have a theory. Now some parts of economics, of course, can be handled very well with real numbers, and we saw what some of those parts are yesterday. It's very natural to talk about quantities of commodities and to talk about prices, and those are represented very well by real numbers. But when we begin to look at the details of human decision-making processes, a great deal of what goes on, the human beings, at least, do not think they are expressing or thinking about in terms of real numbers. They are thinking in terms of much more general symbol systems, including sometimes visual images of all sorts. And unless our theory can represent those things, it's not going to do very well. At this point the computer came along and did us a very great favor. Because, as I think all of us realize today, at least since the advent of word processors, a computer is not limited. It's not condemned forever to be nothing but a number crunching machine. Computer is a very general symbol processing machine. And many of us think today that the processes which go on in the computer, first, are processes which are adequate to enable a program computer to exhibit intelligence. 
And secondly, a somewhat smaller number of us, but a substantial number, believe that the processes which we human beings use to think, when we're trying to be intelligent, are also the processes that we see exhibited in computers. It would have to be another lecture, maybe someday I can come back to Lindau and give that other lecture, although I don't know which Nobel field it lies in, not economics quite, about why computers can be used to model human thinking. But whatever the reasons, there has been a very vigorous activity in recent years to use computers in just that way, to use them as a way of building, so to speak, a set of difference equations. After all, a computer simply is a system which, as a function of where it is right now, what its present state is, executes an action and then is in a new state, executes another action. A computer program is simply a difference equation. We've learned how to use these peculiar kinds of difference equations to build theories of how human beings solve problems and go about making decisions. Now, I don't want to talk about the computer models, I don't have time to do that, but let me give you a little flavor of the kind of theory of human decision making and problem solving that we have today. Now, don't expect from me Newton's three laws of motion, or even a set of elegant axioms like those that Professor Debreu gave us yesterday morning. The theory that's emerging looks a good deal more like molecular biology than it looks like classical physics. And by molecular biology here, I mean a theory the heart of which, well, it has some general principles, but the heart of which is a whole set of very complicated mechanisms and sequences of processes. If you want to understand the urea cycle as Krebs did, it's not enough just to say that ammonia comes in here and urea comes out there. The scientific problem was what happens in between, what is the succession of processes, and Krebs had an answer to that when he had discovered the role of ornithine as a catalyst, the role of arginine as an intermediate product, and so on. Now, in outline, the theory here consists of a model of the processes, and that, of course, is a very common form that theory takes in molecular biology. The kind of theory we are building and have built about human problem solving is a model of the processes, and it has many of the same characteristics. What are those characteristics? First, that most human problem solving and decision making involves what we call heuristic search. There are a vast number of possibilities. Some people say in a chess game one is confronted with 10 to the 120th possibilities. Well, chess is very small compared with the game of life. Maybe that number 10 to the 120th is wrong, maybe it's only 10 to the 60th, but I don't think that slight difference will trouble us at all. For human beings to solve problems when we are confronted with such immense spaces of possibilities, or even much smaller spaces, we are only capable of searching very selectively, using rules of thumb to guide that search, rules of thumb which today we call heuristics. So the first thing we've been learning about is the nature of heuristic search. How do you build a system, a computer, which doesn't just spin its wheels as fast as it can to find answers, but searches in a very careful fashion, not necessarily guided by an exact theory, but guided by rules of thumb?
We know a great deal today about how to build such systems and how human beings use heuristics in their search. We know, for example, that one of the principle mechanisms here is what is called means-ends analysis. That is to say, we often solve problems by saying, I'm here. I need to get here. What's the difference? A difference in location. How do we reduce differences in location? Well, we go on a bicycle or we take an automobile. We apply the operator. Now we look. Still haven't reached the goal. We go through the same analysis, apply it again. Compare goal with present situation. Find in memory. Find a difference. Find in memory an operator. Relevant to reducing that difference. Apply the operator. Do this recursively. We see that again and again as a central mechanism of human problems. Second major mechanism, recognition and expert behavior. All of us who are good in what we do, we say, we're asked, how did you do that? Intuition. In other words, I don't know. We make some profound statements about unconscious. Or if we're very immodest, we talk about creativity. But the honest answer is, I don't know. Well, why should we? Why should it be conscious? Today, however, I think we do know what goes on. We know that most of the intuitions, the sudden answers to questions which are often right, not always, we know that most of this has to do with a phenomenon of recognition. That in the field in which we're expert, we can recognize 50,000, maybe twice that number. We only know this number within a factor of two. I think we know it better than an order of magnitude. 50 or 100,000 patterns in that field. All of you chemists here are familiar with 50 or 100,000 patterns in the field of chemistry. Molecules, particular configurations within a molecule, particular radicals, things of this sort, reactions. That's what you've been doing all your lives, or many years of your lives, acquiring those 50,000 patterns. And of course, when you see one of those patterns in a situation, then you're reminded of something. That pattern is the index to the vast amount of knowledge you've stored in your head. And once that index evokes the information, you are able to respond intelligently to many situations. This recognition ability, this source of intuition, has been studied very intensively in the game of chess, human chess players. And I think the evidence, which I don't have time to give this morning, the evidence that this is a major foundation for expert behavior is very solid today. So, and this is addressed to the students here, so especially the rest of us are beyond redemption. But for the students, if you want world-class performance, I think we all aspire to world-class performance, if you want to be a world-class performer, then you've got to acquire those 50,000 chunks. Better make it 100,000 so you have a little surplus. 50 or 100,000 chunks. And we also have good evidence that that requires not less than 10 years of arduous application to the domain, including child prodigies. You say, what about Mozart? Well, Mozart was composing music at age four. It was not world-class music. Mozart was not composing world-class music until he was almost an adult. Most generous evaluation would say age 18. Four from 18 is 14. Mozart was a slow learner. We find no instances of world-class performance in less than 10 years. Well, that's just an aside on the question of how one acquires expertise. 
And the final characteristic I want to mention is that human beings in general do not optimize, do not maximize, not because they wouldn't like to, not because they wouldn't prefer more to less, but because in most complex situations we don't know how. What we do know how to do is to set a goal which, on the basis of past experience, we think is reasonably attainable, a so-called aspiration level, and then try to find courses of action which will achieve that goal. This procedure is sometimes called satisficing, an old Northumbrian word which you may or may not find in your dictionary; it's in some dictionaries. We find also that people are in fact not very good at handling uncertain information, at handling probabilities, and we're beginning to find out some of the ways in which we distort probabilistic information. Just one example, from the work of Kahneman and Tversky. Suppose people are asked whether they will accept or refuse a dangerous medical treatment for a disease that will otherwise soon be fatal, but the treatment may also be fatal. Many more people will say that they would accept the treatment if they are told that it is successful in 80% of the cases than if they are told that it is fatal in 20% of the cases. 20 plus 80 add up to 100. It makes a lot of difference whether a person is told that what he is being asked to do will be successful 80% of the time, or whether he or she is told that it will fail 20% of the time. There are many other examples of this kind of uncertainty bias that we know about today. So the theory reveals the bounded rationality of human beings. It reveals how we use very limited information and computational capacities to deal with an immense environment. And most of this theory has been built outside of the field of economics; but it is what we need in order to explain the phenomena and the processes that actually determine the course of economic events, and thereby to form a sound basis for the choice of economic policies. Thank you. Thank you.
|
“Economics is a part of the study of man.” Endorsing this assumption of the British economist Alfred Marshall (1842-1924), which he already had quoted at the beginning of his Nobel Memorial Lecture in 1978, Herbert Simon wonders why economists and psychologists „live in very good ignorance of each other’s work“. An important reason for the mutual indifference of the two disciplines probably is their very different view on human rationality, Simon argues. Classical economics is concerned with substantive rationality and asks for the action that a specific situation calls for to maximize utility. Psychology on the other hand deals with procedural rationality and is primarily interested in „the processes that human beings go through in order to try to behave rationally in certain situations”. A procedural rational person, according to Herbert Simon, is not maximizing anything, but trying to reach a satisfactory level of achievement, using incomplete information and limited computational capabilities. With this distinction of two concepts of human rationality, Simon draws on his own pioneering research into the decision-making processes in organizations and companies, whose first results he had published in his groundbreaking book “Administrative Behavior” in 1947. There he contradicted the classic economic theory, which regarded a firm as one omniscient entrepreneur who applies almost pure reason to maximize profits. He emphasized that a firm in fact is composed of a number of cooperating decision-makers, whose capacities for rational action are limited, and who have to be content with satisfactory rather than with optimal results. “Individual companies, therefore, strive not to maximize profits but to find acceptable solutions to acute problems.”[1] Governments are also frequently looking for acceptable solutions to economic problems. When they turn to economists, however, they can get a wide variety of advise. Nevertheless, says Simon, there is a wide agreement among all economists from the Keynesians to the monetarists that the classic concept of substantive rationality is very well suited to determine the efficient use of resources. Yet one of the implications of the classic theory is that in equilibrium “all markets are cleared and you are not left around with piles of unused things”. This clearance should also occur in the labor market, but obviously it doesn’t, because unemployment is a current and common reality. Comparing explanations of Keynes and Lucas, Simon subsequently describes how differently unemployment can get reintroduced into classical concepts - by allowing for several departures from their original assumptions regarding rationality! “Out of these departures from rationality, you get quite different policy proposals”, he analyzes. “The real action to fix unemployment does not come out of the assumption of rationality, the real action comes from where you assume people are not able to behave rational.” Instead of choosing different routes of departure from the classic concept, economists would be better off, if they applied a concept of “bounded rationality” from the beginning, Simon emphasizes. This calls for innovative econometric methods and for empirical data, which help depict economic reality better than the traditionally used aggregated data that have been collected for public purposes. “We have to go back to the source of economic action, that is the decision-making and problem solving of individual human beings”. 
Only if psychologists and economists collaborated more closely could a sound basis for the choice of economic policies be found. Joachim Pietzsch [1] Cf.
|
10.5446/55035 (DOI)
|
Okay, so I want to pick up where I left off yesterday. Just as a reminder of what we were doing yesterday: the setup was, in some generality, that we started off with some Calabi-Yau threefold and then associated to it some moduli of objects, sheaves, complexes, whatever, on the Calabi-Yau threefold. And then we produced essentially a constructible function on this moduli space, a Z-valued constructible function, which has the property that, if you're in the proper situation and you have these kinds of virtual invariants in the sense of intersection theory, this constructible function recovers that information: you have a notion of integrating a constructible function, where you add up the Euler characteristics of the strata weighted by the values, and this recovers the virtual class invariant that was defined in Richard's lectures. And so what I want to talk about first today is to give an indication of why this is such a powerful tool, and it will be something that I'm going to want anyway for later on in this course. The kind of moduli space I'm going to take is what's called the stable pairs moduli space, or sometimes the PT (Pandharipande-Thomas) moduli space. The data here is two pieces of data, and this is a moduli space that's meant to reflect some kind of curve counting invariant. First you have a sheaf E, which is one-dimensional and pure, so it has no zero-dimensional subsheaf; and then you have a section of the sheaf with the stability property, which I forgot to write, that the cokernel is zero-dimensional. So this has one of these symmetric obstruction theories from yesterday, and so we can take its virtual invariants and sum them up in a generating function. It's not hard to see that this ends up being a Laurent series, in the sense that for n sufficiently negative these moduli spaces are empty. And the theorem is that this generating function is, first of all, the expansion of a rational function in q, and it has this kind of q goes to one over q symmetry. This was proven in this generality by Toda and by Bridgeland. And so what I want to do first is sketch the proof in the simpler case when beta, the curve class, which is the support of the one-dimensional sheaf, is irreducible. That means, if you like, that beta is not of the form beta one plus beta two with the beta i effective, so all the curves that appear with support in class beta are in fact integral curves. The proof in this case was actually one of the first; it was originally done by Pandharipande and Thomas. And the reason I want to give this proof is really just to indicate the power of this ability to work constructibly instead of working intersection-theoretically: this property of being a rational function is supposed to be true when you do curve counting on any threefold using the stable pairs moduli space, but we don't really know how to prove it in that generality. Okay. And so I don't want to get too much into the details; I'll just state where the constructibility ends up being useful, and in particular how you see both this rationality and this symmetry.
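In symbols (the notation here is mine, just to fix it), the statement is about the generating function

\[
\mathsf{Z}_{\beta}(q) \;=\; \sum_{n \in \mathbb{Z}} P_{n,\beta}\, q^{\,n},
\qquad
P_{n,\beta} := \deg\,[P_n(X,\beta)]^{\mathrm{vir}},
\]

namely that Z_beta(q) is the Laurent expansion of a rational function in q satisfying Z_beta(1/q) = Z_beta(q).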
And so the idea is that proving this kind of rationality and symmetry really comes down to a statement about the coefficients. For instance, if you had a Laurent polynomial, the symmetry would just say that the q to the n coefficient is the same as the q to the negative n coefficient. The statement here is a little weaker than that, because you can have a rational function with poles. Instead, what you end up showing is the following; let's call the virtual degree for the given moduli space PT_{beta, n}. If I compare the q to the n coefficient with the q to the minus n coefficient, it's enough to show that the difference is basically of this form: some constant times negative one to the n minus one. This is just some statement about power series; if the constant were zero, then you would get an honest-to-God Laurent polynomial. And the idea is that this statement is something you can check strata by strata on the moduli space. More precisely, if I look at the pairs moduli space for n and the pairs moduli space for negative n, I have a forgetful map where I just forget the section, and this goes to M_{beta, n}, which is some space of one-dimensional sheaves with these discrete invariants; similarly I have a forgetful map here. And then I have a natural isomorphism between these two base spaces, which sends a sheaf E to what I'll call E dual, which is this sheaf here. So this is again some pure one-dimensional sheaf with the same support, and the way you should think about it is that if, for instance, the support were a smooth curve and E were the pushforward of a line bundle, then this dual would be the pushforward of the dual line bundle tensored with the canonical bundle of the curve. And so now, to prove this identity where I want to compare the invariant for n with the invariant for negative n, I can use the fact that the invariant is now defined constructibly: I can study this difference fiber by fiber. So let's call this map pi sub n and this map pi sub negative n, using the identification of the two bases. It turns out that if I fix a sheaf, the fiber in one projection is the projectivization of the global sections of the sheaf; you see, the support is irreducible, so this business of the cokernel being zero-dimensional just means that the section is nonzero. And similarly, if I look at the fiber with respect to the other projection, it's given by the projectivization of the H0 of the dual, which, if you mess around with duality, ends up being naturally identified with H1 of the original sheaf. So the idea is that both of these fibers are always projective spaces, but the dimension of the projective space jumps around. Since I'm working constructibly, it doesn't matter: I can just argue as if I had a projective bundle. And then when I want to compare the difference, well, I'm just taking an Euler characteristic, and so the difference in the Euler characteristics is what I get by integrating my Behrend function over each of these fibers. Now there's one thing I need, which is that I need to know something about the value of the Behrend function here. And the key fact that they prove is that basically the Behrend function on the PT space is just pulled back from the Behrend function on the base.
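A hedged way to record the duality and the fiber descriptions just used (I am writing the dual with the normalization I remember from the Pandharipande-Thomas paper, so the precise twist should be double-checked):

\[
E^{D} := \mathcal{E}xt^{2}_{\mathcal{O}_X}(E,\, \omega_X),
\qquad
\pi_n^{-1}\big([E]\big) \cong \mathbb{P}\big(H^0(X,E)\big),
\qquad
\pi_{-n}^{-1}\big([E^{D}]\big) \cong \mathbb{P}\big(H^0(X,E^{D})\big) \cong \mathbb{P}\big(H^1(X,E)^{\vee}\big),
\]

so that the fiberwise difference of topological Euler characteristics is h^0(E) minus h^1(E), which is chi(E) = n.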
So it turns out that here I'm going to use the fact that the Behrend function is constant on fibers: it's the pullback of the Behrend function downstairs. And so, as a result, once I sort out the signs, I get exactly the statement I want. The difference between the Euler characteristic of this projective space and the Euler characteristic of that projective space is exactly h0 minus h1, which is an Euler characteristic; and then when I integrate over the base, when I add up over all the strata of M, I get exactly this kind of identity. And so really this is why this is such a useful idea: you just focus fiber by fiber, and you can then turn that into an argument about the global invariants. So let me just make a couple of remarks about this. First, it was maybe not clear from what I said, but I was using the fact that the curve class is irreducible in a bunch of different ways; that's why this argument is so clean. In general, you have to be much more clever, and you need the much more complicated Hall algebra technology of the kind that is, hopefully, the subject of Veronica's lectures. Second... yes, there's a question about the proof: is it obvious that the Behrend function pulls back? No, it's not. This is something that, it's not hard actually, but it requires some proof; it's like a one page argument. And what's going on in the argument, there are a few different ways of thinking about it, but ultimately it's something that they prove using the fact that the Behrend function behaves well with respect to smooth maps and so on, and the fact that the Behrend function can in some sense be detected at the level of the schemes, without keeping track of all the obstruction theory and stuff like that. But this really requires an argument. Okay. So maybe the second remark is that this rationality is much more general. This is what I said before: it should always hold, once you put in some homology classes to cut the virtual dimension down to zero, but we can't really access it in that kind of generality right now, although there's some recently announced work of Dominic Joyce that might change that situation. The third remark has to do with this question about the Behrend function. Here the Behrend function was really along for the ride: what was important was that I could study this question constructibly, and then I needed to check something about the Behrend function, basically that the Behrend function is constant on fibers. And so in particular, this entire argument would work if you replaced the Behrend function with something that was constant; it would work if you replaced the function that you're weighting everything by with something constant, or maybe a sign or something like that. And so in particular this rationality of the series, which is actually how it was originally approached, is also a statement about the actual topological Euler characteristics, not just the virtual invariants.
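As I recall the Pandharipande-Thomas bookkeeping (the signs in particular should be taken with a grain of salt), the identity one ends up with, after integrating the fiberwise difference against the pulled-back Behrend function, is of the shape

\[
P_{n,\beta} - P_{-n,\beta} \;=\; (-1)^{\,n-1}\, n\, c_{\beta},
\qquad
c_{\beta} \;=\; \int_{M} \nu \, d\chi,
\]

with c_beta independent of n for irreducible beta. A difference of coefficients of this shape is exactly what is produced by the rational function c_beta q/(1+q)^2, which is visibly invariant under q goes to 1/q; adding the remaining symmetric Laurent-polynomial part then gives the rationality and the functional equation.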
So tomorrow I will state a stronger rationality conjecture — a more constrained version of what these rational functions are — which, unlike this weaker statement, only holds in the virtual setting, and which is still open even in the Calabi–Yau case. I'll talk about this tomorrow. Actually, Andrei, could you do me a favor and repost the link to the shared board, if you're online? [A short exchange about whether the shared link is working; it is.] Okay. So, what I'd like to talk about for the rest of today: let's go back to the general setting where we have X, we have some moduli space M of sheaves on X, and we have this constructible function on it. What I want to explain is how to promote this story to a kind of cohomology theory associated to my moduli space, with the property that the virtual invariant will just be the alternating sum of the Betti numbers. What we'll actually do is construct an object, which I'll call the DT sheaf. This will end up being a perverse sheaf on M, but to a first approximation you should think of it as a constructible sheaf, or a complex of constructible sheaves. It will have the property that if I take the stalk-wise Euler characteristic at a point, I recover the value of the Behrend function at that point; and if I take the global cohomology and then the Euler characteristic, that's like integrating the Behrend function, which in particular gives me the virtual number. I just want to make one point, which is always a little confusing: on the one hand, when we do these moduli problems we are doing moduli of coherent sheaves on X; but this object here is a constructible sheaf — I'm thinking of the analytic topology and so on. Okay. So what is the idea behind the construction? It has to do with something I said yesterday, which was one of the examples where I said what the Behrend function was: I had a smooth variety, a single function on it, and the moduli space I was looking at was the critical locus of that function, which is just the zero locus of its differential. In that setting, the Behrend function at a point was something like (−1) to the dimension, times one minus the topological Euler characteristic of the Milnor fiber at that point.
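For reference, here is the local formula being invoked, in the normalization I believe is standard (Behrend's); the sign convention is the only thing I would double-check:
\[
M = \operatorname{Crit}(f) \subset V,
\qquad
\nu_{M}(p) \;=\; (-1)^{\dim V}\Big(1 - \chi_{\mathrm{top}}\big(MF_{f}(p)\big)\Big),
\]
where MF_f(p) denotes the Milnor fiber of f at the point p.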
And so the idea is that I want to promote this from a number to some kind of cohomology: instead of just taking the Euler characteristic of the Milnor fiber, I can take the cohomology of the Milnor fiber, and then let the point p vary over M to get a sheaf, or a complex of sheaves. There is a standard way of doing this, using the notion of what are called vanishing cycles. In a word, if I think of this as a family of varieties over A^1, vanishing cycles measure the difference in cohomology between the singular fiber and the smooth general fiber. So let me set that up. I have V over A^1 — that's my function — and let's say I'm interested in the fiber over zero, which I'm going to assume is singular. Take the complement of zero and its universal cover, and form the corresponding Cartesian diagram; call this inclusion i, and call the map all the way from the pullback of the universal cover back to my original variety j-tilde. I can then do the following: I produce a complex of sheaves on the central fiber V_0, where I take the constant sheaf on V, pull it back and push it forward along j-tilde, and then restrict to the central fiber. This is what's called the nearby cycles, ψ_f Q. The way you should think about it is that the stalk of this at a point computes the cohomology of the Milnor fiber at that point. But if I want the analog of the "one minus" in the formula — that is, reduced cohomology — then by adjunction I get a map in the derived category of constructible sheaves, from the restriction of the constant sheaf to the nearby cycles, and I just take the cone, which I'll call φ_f Q. Again, up to shifts and so on, this is really the analog of taking the reduced Milnor fiber to get the Behrend function, and what it measures is the difference in topology between the nearby fiber and the singular fiber over zero. Once I throw in a shift, these two operations — and instead of taking the constant sheaf I could have taken any sheaf on V — define functors from sheaves on V to sheaves on V_0. And in fact the surprising thing is that they preserve the abelian category of perverse sheaves: I have ψ_f, which goes from perverse sheaves on V to perverse sheaves on V_0, and also φ_f — and this is the one I'm going to be interested in. Okay, there's a lot going on here, so what I really want to encourage you to do is just to think of the original definition, where I took the reduced Euler characteristic and replaced it instead with reduced cohomology, and then imagine doing that in families to get an actual complex of sheaves. So in particular, the specific object we'll be interested in is where I feed in the constant sheaf on V — well, the constant sheaf on V isn't quite perverse, you have to shift it.
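Written out, the construction sketched above is the usual one; I'm being cavalier about the perverse shift, which differs by [−1] depending on the convention:
\[
\psi_{f}\,\mathcal{F} \;=\; i^{*}\,R\tilde{j}_{*}\,\tilde{j}^{*}\mathcal{F},
\qquad
\varphi_{f}\,\mathcal{F} \;=\; \operatorname{Cone}\big(\, i^{*}\mathcal{F} \longrightarrow \psi_{f}\mathcal{F} \,\big),
\]
where i : V_0 → V is the inclusion of the central fiber and j-tilde is the map from the pullback of the universal cover of A^1 minus the origin. Both functors, suitably shifted, send perverse sheaves on V to perverse sheaves on V_0.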
So instead I'll take this object — the vanishing cycles of the shifted constant sheaf — which is a perverse sheaf on the zero fiber, supported on the critical locus. I'll just call it φ_M, and it has the property that its stalk-wise Euler characteristic is exactly the Behrend function. And so the goal, which I'm going to sketch, is basically to take these kinds of objects and glue them together. But to do that, I have to argue why this is even a good local model. There are actually a few ways of thinking about this. Let's go back to our situation where I have X and the corresponding moduli space M of X. What I'd like to argue is that, at least locally on M, my space looks like the critical locus of a function. This is easier to think about if you're willing to work analytically, or formally locally. For instance, in the gauge theory world, where say M of X is a moduli space of vector bundles, the way you can model what your moduli space looks like is to take a C-infinity bundle E and ask for a holomorphic structure on it — so you're looking for an integrable d-bar operator on sections of E. If you write any such (0,1) operator as some base operator plus a correction, where the correction is a (0,1)-form valued in endomorphisms, then it turns out you can write down the integrability condition purely as a critical locus condition: you write down what's called the holomorphic Chern–Simons functional, which is some explicit integral over X. X is Calabi–Yau, so you have a holomorphic 3-form, and you cook up a (0,3)-form out of the correction to pair with it. As far as I know, this was first written down in this setting in Richard's thesis. Anyway, it's a construction whose critical points pick out exactly the condition you want, and it's modeled on an analogous functional for real 3-manifolds. But this is not the only way of thinking about it. The approach I like best uses deformation theory, and it works much more generally. If you give me a point in my moduli space — some object on X — then if I look at the formal completion of the moduli space at that point, well, any time you have a scheme and take the formal completion, you can embed it inside the formal completion of the tangent space at that point, which in this case is just Ext^1. And you can then write down explicitly a formal power series on this tangent space whose critical locus cuts it out. The idea is that the Ext algebra of my object carries a product — the Yoneda product — but it actually carries higher operations, basically Massey products, induced by the fact that it is the cohomology of a differential graded algebra. Once you write them down, these higher operations go from symmetric powers of Ext^1 to Ext^2, and you can write down a formal function on Ext^1 which is just the sum — I might screw up the denominators, but I think this is right — where the pairing I'm using is exactly the Serre duality pairing. And what you then show is that this formal completion is just literally the critical locus of this formal power series. Okay.
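Since I flagged that the denominators are easy to get wrong, let me record the formula I believe is intended — the standard cyclic potential of the A-infinity structure, in one common normalization (others differ in how the cyclic symmetrization is counted):
\[
W(x) \;=\; \sum_{n\ge 2} \frac{1}{n+1}\,\big\langle\, m_{n}(x,\dots,x),\, x \,\big\rangle,
\qquad x \in \operatorname{Ext}^{1}(E,E),
\]
where m_n : Ext^1(E,E)^{⊗n} → Ext^2(E,E) are the higher products and ⟨·,·⟩ : Ext^2 ⊗ Ext^1 → C is the Serre duality pairing; the claim is then that the formal completion of M at [E] is Crit(W) as a formal scheme.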
So there are a few ways of thinking about this, and I should say that in both of them the Calabi–Yau condition showed up in a pivotal way: here it showed up because I was taking a holomorphic 3-form, and here because I was using the fact that I have a natural pairing between Ext^1 and Ext^2 which, in a precise sense, is compatible with these operations. Okay. Do you mind giving us a bit more detail about what just happened? Because I'm a bit confused as to what F is here. Right — yeah, maybe I shouldn't have gone into this. What I was trying to do was to give some idea of why the moduli spaces that are showing up are actually critical loci, at least locally. If you're comfortable in the gauge theory world, you can see it for vector bundles just by explicitly writing down this functional: if I take the derivative of the functional with respect to a — this is an infinite-dimensional thing, so you have to deal with that — then I get exactly the integrability condition, which is something like d-bar a plus the bracket of a with itself equals zero. So that falls out exactly here. What's less clear is why the formal version also works, but I just wanted to indicate that there is a way of making sense of this purely algebraically, without thinking about gauge theory. So maybe it's best not to get too far into this; I just want to indicate that there is a picture that lets you talk, in much more generality, about how to see this critical structure. We have one more question: is this some sort of A-infinity deformation theory? Yes, that's exactly it — these higher operations mean I'm thinking of this as an A-infinity algebra, and then Serre duality gives it the structure of what's called a cyclic A-infinity algebra. There is then a general theory for understanding your moduli problem in terms of the critical locus of this potential. A reference for this: there are some nice notes of Kontsevich and Soibelman from the 2000s that discuss it; there are other places as well. In general — okay, this is a bit of a tangent — even if I don't have a Calabi–Yau, if I want to understand the equations cutting out this formal completion inside the tangent space, I'm always just looking at the zero locus of these formal functions from Ext^1 to Ext^2; what's special in the Calabi–Yau case is that you can take all of those functions and group them together as the derivatives of this one power series. Okay — well, maybe that was a bad thing to try to emphasize, because these kinds of results are nice, but they work analytically locally or formally locally. What you really need is a much stronger result, which is due to Pantev, Toën, Vaquié and Vezzosi, and then to Chris Brav, Vittorio Bussi and Dominic Joyce, and it says the following: if I have my moduli problem, my moduli space, and I take some point on it — and I'm assuming I'm in the scheme setting, so M of X is a scheme — then there exists a Zariski open chart around my point which can be written as the critical locus of a function on a smooth scheme. These kinds of deformation theory or gauge theory arguments only give you something very small — maybe, if you could show convergence, you'd get an analytic open set where this is true.
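For the gauge-theory model, the functional in question is the holomorphic Chern–Simons functional; the 1/2 and 1/3 normalization varies by author, but the shape is:
\[
CS(a) \;=\; \int_{X} \Omega \wedge \operatorname{tr}\!\Big( \tfrac{1}{2}\, a \wedge \bar\partial_{0} a \;+\; \tfrac{1}{3}\, a \wedge a \wedge a \Big),
\qquad a \in \Omega^{0,1}(X,\operatorname{End}E),
\]
whose Euler–Lagrange equation is the integrability condition
\[
\bar\partial_{0}\, a \;+\; a \wedge a \;=\; 0 ,
\]
i.e. the condition that the operator (d-bar base plus a) defines a holomorphic structure on E, so the moduli space is formally Crit(CS).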
What's striking about this result is that you really get a Zariski open set where it's true, and the proof is really much harder. You see, the idea is that the first paper introduced the notion of what are called (−1)-shifted symplectic structures, which is a notion that properly lives in the world of derived geometry, and then the second group proved what is essentially a Zariski-local structure theorem for these shifted symplectic structures. What I like about this result, by the way — other than the fact that it's just a cool result — is that it really uses derived geometry in a strong way, in the sense that a lot of the other constructions in the subject, like all the stuff about obstruction theories and virtual classes, are in a sense dodges to get around actually working with derived schemes directly. For this result there's no real way around it, and so any construction that builds on it — which is what I'll be doing for the rest of the lecture — ultimately relies on that formalism in an essential way. Let me also say that, for the most part, this theorem tells you that you have these Zariski-local critical charts where you have a critical locus description, but it doesn't really give you a very good idea of how to find them, or how to find the function and so on. In some examples, though, you can see them explicitly, and we've seen some already. For instance, there was an example from yesterday where I had this (3,3) hypersurface inside P^2 × P^2, and I got some explicit moduli space and described it as the critical locus of a function. Another example that I like a lot, where again you can actually write down a global critical locus description, is the Hilbert scheme of points on C^3. You can think of the Hilbert scheme of points on C^3 as giving you three commuting matrices on C^n together with a cyclic vector, modulo conjugation, and the commuting condition imposes some strong singularities. So the ambient smooth space is: take three matrices on C^n and a cyclic vector v in C^n — cyclic just means that if I apply words in X, Y and Z to v, I eventually span all of C^n — and then mod out by the action of GL_n. This is a smooth space, and the way I think of the Hilbert scheme inside it is as the critical locus of an explicit function, which is just the trace of X times the commutator of Y and Z. If you explicitly calculate the critical locus condition — just take the derivatives and set them to zero — it picks out exactly the condition that the matrices commute. This is a really nice example because it's explicit enough that you can imagine actually trying to calculate Milnor fibers and so on; of course, as n gets big, it also gets kind of unwieldy. So I won't go into too much detail here, but what I've said already is that M of X is basically covered by what we call these critical charts — Zariski open sets described as critical loci — and then, not surprisingly, there's also some compatibility condition on overlaps.
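To spell out the Hilbert scheme example — the computation is standard, using cyclic invariance of the trace:
\[
f(X,Y,Z,v) \;=\; \operatorname{Tr}\big( X\,[Y,Z] \big)
\quad\text{on}\quad
\big\{ (X,Y,Z,v)\in \operatorname{End}(\mathbb{C}^{n})^{3}\times\mathbb{C}^{n} \;:\; v \text{ cyclic} \big\} \,\big/\, \mathrm{GL}_{n},
\]
\[
\frac{\partial f}{\partial X} = [Y,Z], \qquad
\frac{\partial f}{\partial Y} = [Z,X], \qquad
\frac{\partial f}{\partial Z} = [X,Y],
\]
so the critical locus is exactly where the three matrices commute, i.e. Crit(f) ≅ Hilb^n(C^3).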
One way of encoding this compatibility is the notion of what's called a d-critical scheme, which was developed by Dominic Joyce basically to avoid having to talk about these shifted symplectic structures all the time. Okay, so what we would like to do is the following. On each of these critical charts I have the perverse sheaf that I've constructed on U: if the chart is given by the data (U, V, f), where U is the actual open set of M of X, I've produced this object, the vanishing cycles of — essentially — the constant sheaf. And you would like to glue these together. But there's a problem with doing this: there's an obstruction to the gluing, which you can already see in the simplest case. Suppose V is some smooth variety. I can present V as a critical locus on itself by just taking the zero function; if I run through the construction, the critical locus is just V itself, and the vanishing cycles object I produce is just the constant sheaf, again with this shift. But there's another way of getting V as a critical locus: take L, some Z/2 local system on V — assume V has nontrivial local systems — take the corresponding two-torsion line bundle, and write down a function on its total space, call it f-tilde, which is basically just fiberwise t going to t squared, using the fact that L squared is trivial. It's not hard to see that the critical locus of this function on the bigger space is again just equal to V, but now if I calculate the vanishing cycles I don't get the constant sheaf anymore: I get the rank-one local system associated to L. So depending on how I described V as a critical locus, I either got the constant sheaf or this Z/2 local system, and this is going to cause a problem when I try to match things on overlaps to produce a global object. To solve this, you actually need a little bit of extra structure, which is something that already came up a little in Richard's talks — it came up essentially for similar reasons. On my moduli space I have this two-term complex, this obstruction theory, which is, if you like, the dual of my complex of deformations and obstructions. What we call the virtual canonical bundle is just the determinant of this thing, which is a line bundle on my moduli space. And the extra structure we need is what's called an orientation: a choice of square root of this line bundle. It turns out that once I pick this choice globally, I can solve the gluing problem. So the theorem, which is due to Brav, Bussi, Dupont, Joyce, and Szendrői, is that if I take my moduli space and choose an orientation — choose the square root — then the gluing problem can be solved, and we get what we'll call the DT sheaf, which is some perverse sheaf that now lives globally on your moduli space. And this choice really makes a difference. So first of all there's a question of why the square root even exists; and then, since choices will differ by two-torsion line bundles on M, how does that affect the answer?
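In symbols — a minimal statement of the structure being chosen, with E^• denoting the (dual) obstruction theory:
\[
K_{M,\mathrm{vir}} \;:=\; \det\big(\mathbb{E}^{\bullet}\big) \in \operatorname{Pic}(M),
\qquad
\text{an orientation is a pair } \big(L,\ \iota : L^{\otimes 2} \xrightarrow{\ \sim\ } K_{M,\mathrm{vir}}\big).
\]
Two orientations differ by a line bundle T with T^{⊗2} ≅ O_M, i.e. a two-torsion element of Pic(M) — essentially the same Z/2 datum that produced the ambiguity in the t ↦ t² example above.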
And the first thing to point out is that it really does affect the answer: different choices will change the DT sheaf by tensoring with this kind of local system, which seems like a mild thing, but when I take cohomology, for instance, it will totally change what the cohomology is. So let me just write down what the orientation is in the critical case. If M is just globally a critical locus, then the obstruction theory — you might remember — looks like this two-term complex coming from the Hessian of f, and if I take the determinant of this, I get the canonical bundle of V restricted to M, squared. So the natural orientation, if you have a global critical locus, is just to take K_V restricted to M — that's the natural choice. Of course, it depends on my description as a global critical locus; but in examples like the Hilbert scheme, where I gave you some nice function that cuts it out, that in particular also gives me a nice orientation. Another example where there's a natural choice — this is one case where there's a natural choice — is when my Calabi–Yau threefold is the total space of the canonical bundle of a surface. Again, this is the kind of geometry that showed up in Richard's talks last week on Vafa–Witten theory. In that case, if I consider sheaves on X that are proper over S, I have a map where I take a sheaf on X and just take its pushforward to S, getting a point in the moduli stack of sheaves on S: E maps to π lower star E. Let's just assume I'm in a situation where this pushforward is coherent and so on. Then there exists a natural orientation on M of X, which you can see by thinking about the obstruction theory on M of X. The determinant of this RHom is basically what's going to calculate my virtual canonical bundle, and I can express this in terms of two complexes that are built on S: on one hand RHom(F, F), and the other piece of the triangle is RHom(F, F tensor K_S), with a shift here. The determinant of the whole thing is the product of the determinants of these two pieces, and if I combine that with Serre duality, I get that the virtual canonical bundle on M of X is naturally the pullback of the virtual canonical bundle on M of S, squared. And so again this gives me a natural choice of square root. But in general, a priori, it's not clear that square roots even exist, let alone that there's a nice choice. So the two theorems that help with this are, first of all, a very soft theorem — if I had more time I would explain the proof — due to Nekrasov and Okounkov, which is that orientations always exist. Their argument works in very great generality — any time you have some kind of moduli of objects in a Calabi–Yau category or something like that — but it doesn't tell you how to pick one. And then there is the more recent theorem, from a couple of years ago, of Dominic Joyce and Markus Upmeier — okay, I'm actually not sure about the precise hypotheses.
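In the critical-chart case the computation referred to is the following sketch (I'm suppressing the distinction between the obstruction theory and its dual, which only affects which side gets dualized):
\[
\mathbb{E}^{\bullet}\big|_{M} \;\simeq\; \big[\, T_{V}\big|_{M} \xrightarrow{\ \operatorname{Hess}(f)\ } \Omega_{V}\big|_{M} \,\big]
\quad\Longrightarrow\quad
K_{M,\mathrm{vir}} \;=\; \det\mathbb{E}^{\bullet} \;\cong\; K_{V}^{\otimes 2}\big|_{M},
\]
so K_V restricted to M is a natural square root, i.e. a natural orientation attached to the chart (U, V, f).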
Maybe it's important for them that it's a projective Calabi–Yau threefold, so this might not be exactly correct — but they provide a kind of canonical choice, and in particular, what's great about what they've done is that it's a choice that's compatible with varying the moduli space under things like extensions and direct sums and so on. But their proof is heavily gauge theoretic, I have to confess. At some point I was meeting with them and they were explaining it to me, and if I understand correctly — maybe this is wrong — what they do is take the Calabi–Yau threefold, cross it with a circle, and then do some analysis on the corresponding real seven-manifolds, with some kind of special holonomy for those things. So, at least from my point of view, I don't really have a good feel for what's going on in that construction. I'm almost out of time, so now I can just state the upshot of all this formalism — this is pretty much all I'll need. We started off with X, we took M of X, we maybe had to choose the square root — maybe the choice has been made for us, or maybe we're in a situation where we have a natural choice — and that produces this DT sheaf, some perverse sheaf that lives on my moduli space. And then if I want, for instance, a cohomology theory associated to my moduli space, I can just take the hypercohomology of this perverse sheaf. So what properties does this satisfy? Well, the first one is the thing I said at the beginning: if I take the stalk-wise Euler characteristic, I get exactly the Behrend function — that's really cooked into how we picked the sheaves to glue. If I instead take the global cohomology and take the Euler characteristic there, well, this is the integral of the Behrend function, which in particular is my virtual invariant. What else? We get some properties that just come from the fact that vanishing cycles have nice properties. For instance, this DT sheaf is closed under taking Verdier duals, which concretely means that its cohomology is naturally dual to its compactly supported cohomology; so if M is proper, you basically get Poincaré duality. What else? This won't be relevant for me, but it's extremely interesting and useful in general: everything here can be decorated with Hodge structures. The formal way of saying it is that this φ lifts to the category of what are called mixed Hodge modules; but concretely, what that means in practice is that when I take global sections, they carry a natural mixed Hodge structure, and this is extremely useful for calculation. And so finally, let me just state a non-property. One of the things I mentioned in class yesterday was that the virtual class, because it is defined by intersection theory, has the property of deformation invariance: if you have a family of X's and you're in the proper situation, then the virtual degree is going to be independent of where you are in the family. And if you translate that into a statement about the Behrend function, it's something very non-obvious, because the actual family of moduli spaces — well, maybe it's proper, but it doesn't have to be flat or anything like that.
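Summarizing the package in formulas, with φ_M my shorthand for the DT sheaf and the normalization of shifts suppressed:
\[
\chi\big( (\phi_{M})_{p} \big) = \nu_{M}(p),
\qquad
\chi\big( \mathbb{H}^{*}(M,\phi_{M}) \big) = \int_{M} \nu_{M}\, d\chi \;=\; \text{the virtual invariant},
\]
\[
\mathbb{D}\,\phi_{M} \;\cong\; \phi_{M}
\quad\Longrightarrow\quad
\mathbb{H}^{k}(M,\phi_{M}) \;\cong\; \mathbb{H}^{-k}_{c}(M,\phi_{M})^{\vee},
\]
and each hypercohomology group carries a natural mixed Hodge structure once φ_M is lifted to mixed Hodge modules.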
So we know that if M is proper, then by virtue of the index formula from yesterday, this weighted Euler characteristic is a deformation invariant as X varies. But that's not going to be true for the cohomological refinement. A simple example of how things can go wrong is the following family — these aren't really moduli spaces, just a family of things that have this kind of critical structure. Take a Riemann surface C of genus g, and take ω to be a holomorphic one-form on it. Then define a family of spaces which is just the zero locus of t times ω. When t is nonzero, I just get a bunch of points corresponding to the zeros of ω; everything is isolated, so the cohomology is all supported in degree zero, and I get Q to the 2g minus 2 in degree zero. At t equals zero I get the entire curve; the vanishing cohomology — the φ — in that case is just the constant sheaf up to a shift, and now when I take the cohomology, it lives in different degrees: I get Q, Q to the 2g, and Q. Of course these have the same Euler characteristics, but they're just different. And so in general that makes this problem a little bit subtle. If I'm interested in, for instance, the quintic threefold, it doesn't really make sense in general to talk about these kinds of cohomological invariants for a moduli space of sheaves on "a quintic threefold" — you have to tell me which quintic threefold; in general there's no reason to expect the answer to be insensitive to it. Okay. All right, let me stop here. Thank you. — Thank you. We have questions; we have one from the chat. I can try asking it myself. The question is whether the orientation data on the stack of compactly supported complexes comes from the global one of Joyce–Upmeier. — Oh, yeah, that's a good question. Can I see this Q&A, maybe? It's in the answer tab — oh, I see, there's a long discussion here. The short answer is — okay, hold on — this is from Bojko, is that correct? I mean, he would know better than me. Let me make sure I understand the question: does the canonical Hilbert scheme orientation data come from the global one of Joyce–Upmeier? I have no idea — again, I think he's in a better position to answer this question than I am. But what I'll say is that, again, theirs is a gauge theory construction. So if you have a torsion sheaf — say you're interested in something on C^3 — then, as I understand their argument, the very first step is that you're going to embed C^3 inside something compact, or maybe do some kind of boundary and framing or something like that, and then you're going to resolve your torsion sheaf by vector bundles. And then you lose a lot of this structure. So I think even for the Hilbert scheme, it's not clear to me that their construction will agree with what I was calling the canonical one — which is maybe not the right choice of words. And a priori it's a different setting, because they're working in a compact setting.
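For the record, the two ends of this family compare as follows — a sketch, ignoring the perverse shift, which moves the t = 0 cohomology into degrees −1, 0, 1 but does not change the comparison:
\[
t \neq 0:\quad Z_{t} = \{\omega = 0\} = \{\,2g-2 \text{ points}\,\},\qquad
H^{*} = \mathbb{Q}^{\,2g-2} \ \text{in degree } 0,\qquad \chi = 2g-2 ;
\]
\[
t = 0:\quad Z_{0} = C,\qquad
\phi \simeq \mathbb{Q}_{C}[\text{shift}],\qquad
H^{*} = \big(\mathbb{Q},\ \mathbb{Q}^{2g},\ \mathbb{Q}\big),\qquad
\chi = \pm(2-2g).
\]
The Euler characteristics agree up to the sign coming from the shift, but the graded pieces do not — which is exactly the failure of deformation invariance at the cohomological level.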
But, for instance, in an analytic neighborhood or something, you can identify these two moduli spaces, and then it's not clear to me that those two orientations agree. But again, I'm not really the person to ask there. — There's one more answer in the Q&A: in the local case there isn't a canonical one; you can prove that there is one depending on a choice of compactification. — Okay. All right, so the question got an answer, I guess. Any other questions? No? Then all the best — we wish you good things too. Thank you.
|
In the first part of the course, I will give an overview of Donaldson-Thomas theory for Calabi-Yau threefold geometries, and its cohomological refinement. In the second part, I will explain a conjectural ansatz (from joint work with Y. Toda) for defining Gopakumar-Vafa invariants via moduli of one-dimensional sheaves, emphasizing some examples where we can understand how they relate to curve-counting via stable pairs. If time permits, I will discuss some recent work on χ-independence phenomena in this setting (joint with J. Shen).
|
10.5446/55128 (DOI)
|
Guten Morgen, liebe Kollegen und Studenten — good morning, dear colleagues and students. I think that exhausts my supply of German, and the rest of this will be in something approximating English. I'm going to talk about infections, particularly virus infections. I didn't expect to also be giving you a practical demonstration, but at the moment I have the flu or something like it, which just goes to show we haven't been very good at dealing with virus infections. There are many that we haven't dealt with. Now, the word immunology comes from the Latin term immunis, which means without tax. That referred to the fact that certain members of the Roman state, former soldiers, were exempt from tax. The tax that the immune system has evolved to remove is the tax that's caused by infection. This has been the selective force that has driven the evolution of the immune system. The idea of being without tax, of course, has considerable appeal to many conservatives, particularly in the United States, but there is no life without tax, as you will see from my performance at the moment. Now, by the late 1960s, many people were thinking that the era of infectious disease was essentially over. This is one of my scientific heroes, Frank Macfarlane Burnet. Mac Burnet won the Nobel Prize for immunology in 1960 with Peter Medawar, for studies of immunological tolerance. In actual fact Burnet was a virologist by training. He did some of the seminal experiments of virology. He was the first person to do quantitative virology — and of course if you don't do quantitation, you don't do anything very much — firstly with bacteriophage and then with mammalian viruses. Then, somewhat late in his career, he focused his attention mainly on the immune system. By the late 1960s, when he wrote his autobiography in 1968, he was convinced that, intellectually at least, the era of infectious disease was over. Now, as we now know, that is not true. He was wrong: we have enormous intellectual and practical challenges in dealing with infectious disease — the AIDS epidemic, even simple respiratory infections like the one I have at the moment. We have not dealt with them terribly well. I bring Burnet into this because he is one of my personal scientific heroes who had some influence on me. He spoke three times at Lindau, I see from consulting the book, and he would have been 100 years old if he had lived. This is his centenary. In Australia we're celebrating his centenary next month, in fact, in Melbourne. Now, the reason that Burnet thought the era of infectious disease was largely over was that we had, by that stage, had enormous success with vaccines. This is what happened when polio vaccine was introduced, the Salk vaccine. You can see the incidence of cases here. It was a terrifying disease; many people were enormously frightened of it, and there were large numbers of cases. You can see that the vaccine pretty much knocked that infection right out. There were some early mistakes and so forth, but we are now almost at the point of eliminating poliomyelitis virus from the world. The Children's Vaccine Initiative has delivered enormous numbers of poliomyelitis vaccine doses throughout the world — something like 10 million doses were given in one day in India alone, in a pulse vaccination program that it is hoped will eliminate polio by 2002. So all of us are aware that there are some infectious diseases that we can get rid of. There are others that have also been eliminated; smallpox is gone.
That's 200 years after the first published report of that vaccine — or written report — by Edward Jenner, the English physician. It doesn't normally take us 200 years to develop vaccines at the moment, fortunately. Polio is on the way out. Many others are very well controlled. Measles, it is thought, could also be eliminated. There are other viruses and other infections for which we actually have good vaccines but still have substantial mortality in the world population. Yellow fever virus, for instance, will kill a number of people in Africa every year. We've had a very good vaccine for that — Max Theiler's vaccine, since 1937 — but the fact of the matter is that it's not distributed widely, because of cost. This, of course, in the developing world is a major factor, the cost of vaccines. We have to be able to deliver cost-effective vaccines, even if they aren't good vaccines. This is a problem for us, especially when we talk about trying to develop AIDS vaccines. Many of the AIDS vaccines currently under discussion are extremely expensive products, and we would have to see how we could actually give those. I believe the funding would be there if we actually had a product that was successful, but at the moment none of us, I think, can guarantee that we can make an AIDS vaccine, though there are something like 300 different vaccines in various stages of testing at the moment. There are also many other infections for which we do not have vaccines, and it seems extremely difficult to make them. When Burnet in 1968 was saying that we'd overcome the problem of infectious disease, he was not, I think, thinking about the problem of infectious disease in the developing countries, where particularly the parasitic infections create an enormous toll. Malaria, for instance, kills something like one child every 15 seconds, I think it is. Tuberculosis is an enormous problem. We've had a so-called tuberculosis vaccine for many years, but it's not a very good vaccine; we need something much better that we can deliver much more broadly. There are others as well. Of course, other vaccines are under test — the vaccine against respiratory syncytial virus in children, for instance, is under test at the moment — and so forth. The world progresses, but I would not like any of you to go away from this with any sense that these problems are solved and there's nothing to do in the future. There are enormous challenges here for young people. There is enormous complexity in dealing with these infectious agents. They evolve very quickly, they change very rapidly, and the problem of dealing with them is at the limits of our conceptual ability when we think about the immune system. There are going to have to be new insights and new ways of doing it. So here, I think, is a challenge for you. The other problem that we face is that, though diseases like measles can be readily controlled by vaccination, many people in the developed countries have never seen these infections, and a number of people — quite vocal minorities — are refusing to have their children vaccinated against the common childhood diseases. I ran into this in Australia after the Nobel Prize. I had a certain amount of public exposure in Australia, and there was a very vocal minority critical of vaccines, believing that this is some sort of plot between the medical profession, the drug companies — who actually don't particularly like vaccines, because there's not that much money in them — and academic medicine.
It is a problem, and there have been outbreaks that have resulted from the spread of these sorts of attitudes. It's been less of a problem in the United States because it's mandated that all children be vaccinated before they go to school. The Australian government has now introduced legislation which brought up the vaccination rates, which were down as low as about 55% for completion of the childhood vaccines, to about 90%. In fact, to cover the population, you need to develop a level of herd immunity that comes in at about an 80% vaccination level; so as long as you can achieve something like 80% vaccination, you can do quite well. But this rejection of vaccines, I think, is part of a general phenomenon that we're seeing in Western culture, and that is the rise of the irrational and the rejection of scientifically based medicine and scientifically based approaches. Rather sad, I think. The immune system is complex — enormously complex, in fact. The two basic elements of it are the receptors, which are highly specific, very varied and show enormous diversity. They're expressed by two types of lymphocytes, or white blood cells. The antibody molecules are expressed on the surface of the B cell and are a secreted product of the B cell lineage, the plasma cells. These are protein molecules that are secreted into the bloodstream; they bind to protein molecules and act very well to deal with isolated protein. The other receptor, which I'll talk about most in this, is the T cell receptor, which is embedded in the surface of effector cells. In the antibody case, the effector is the molecule itself, the antibody molecule. We produce enormous numbers of these molecules; they flood the immune system and they stick around for life. In fact, the B cells or plasma cells that make these molecules, after stimulation, after priming, will stay in the bone marrow for many, many years — certainly for the life of a laboratory mouse. The response of the cells that bear the T cell receptor, on the other hand, is somewhat more limited in availability, because the development of that response depends on the doubling time of those lymphocytes. I'll expand on that as I go through and talk about the nature of cell-mediated immunity. Now, let's talk from now on about virus infections, and we'll deal with relatively simple virus infections, the negative-strand RNA viruses. These are the viruses that commonly cause respiratory tract infection. This is the parainfluenza type 1 virus, which causes croup in small children. They have a nucleic acid core, they have various proteins in them to keep the nucleic acid protected, and they have surface glycoproteins which are involved in such functions as getting the virus into the cell or getting the virus out of the cell and so forth. The surface glycoproteins are where the virus is vulnerable to immune attack by antibody, and this is where the antibody molecules will bind. The cell-mediated system, on the other hand, can draw on any protein that the virus makes within the cell, and the immune response can be directed against a peptide of any protein from the virus; it doesn't have to be on the surface of the virus. This is how antibodies bind. This is an X-ray crystallographic picture from Peter Colman, Graham Laver and Rob Webster's work, showing the hemagglutinin, one of the surface molecules of the influenza virus. They worked out, by a combination of selecting viruses with monoclonal antibodies and by X-ray crystallography, that these are the actual contact sites for the antibody molecule.
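As an aside on where a figure like 80 per cent comes from: in the simplest homogeneous-mixing picture — a back-of-the-envelope relation, with R_0 the basic reproduction number of the infection — the critical vaccination fraction is
\[
p_{c} \;=\; 1 - \frac{1}{R_{0}},
\qquad\text{e.g. } R_{0} = 5 \ \Rightarrow\ p_{c} = 80\%,
\qquad R_{0} \approx 15 \ (\text{measles}) \ \Rightarrow\ p_{c} \approx 93\%.
\]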
The point I'm making here is that the antibody molecules bind, in the main, to the tertiary structure of the protein. That is, it's not generally possible to make a linear peptide or linear component that will actually give you good protection against a virus, because it depends on the proper folding of the molecule that you will get the appropriate binding that will then bind the infectious agent itself. Most vaccines are based on that interaction, the antibody–protein interaction. Most of our successful vaccines work by antibody, and the most successful vaccines are those that deal with infections that have to go through the blood to cause disease or pathology. The reason for that is that it's possible to keep reasonably good antibody levels in the blood. If you're vaccinated against yellow fever virus, you will still have antibodies that are readily detectable in your blood ten years later. Now, that means that to actually get the disease yellow fever, the virus has to get from the mosquito that injects it — it actually replicates in the mosquito also — to the liver, where the damage occurs that causes yellow fever. It's yellow fever because the virus destroys the liver and gives you jaundice. Once the virus gets into the bloodstream, the antibodies can grab it and take it out of the circulation by neutralizing it. Those vaccines work very well. Other examples of this are poliomyelitis and measles. Most vaccines, however, do not give you complete sterilizing immunity. The poliovirus, for instance, still gets in; it still grows in an immune individual — it can still grow in some of the superficial epithelial cells of the mucosa. But to cause disease, it must go through the blood and get to the large spinal motor neurons, which are damaged by the poliovirus and cause the disease we know as poliomyelitis. Now, at worst, that only occurred in about 1 to 2 percent of the people who were infected, but so many people were being infected — so many children were being infected — that we were getting a high incidence of poliomyelitis. That disease can be stopped even though we don't have sterilizing immunity and may still replicate the virus. The same is true of measles: it will replicate in the oropharynx, but it doesn't cause disease unless it can get through the blood to the site where it's going to cause damage. Now, the viruses where we have tremendous problems are the viruses that either change so that they can avoid the neutralizing antibody — and the classical example is the human immunodeficiency virus that causes AIDS — or the viruses that change by some other mechanism, such as reassortment or recombination, as occurs with the influenza viruses. Influenza viruses grow in birds and humans, and they have a packaged, segmented genome, which, if you infect a cell simultaneously with two different influenza viruses, will repackage and give you a new virus. So if you happen to be in a situation where you've got a duck and a human being living closely together, and the human being gets infected with the human influenza virus and also with the duck influenza virus, you can get a new influenza virus out again, which will be part human, part duck, and could cause an enormous epidemic, in fact. This is one of the things that we're absolutely terrified about in epidemiology, because influenza will kill large numbers of people; it is a highly lethal disease.
Part of the problem is that influenza is a superficial mucosal infection, and this is the other area where we have great difficulty in providing sterilizing immunity. Though we can provide sterilizing immunity for something that goes through the blood, it is very difficult to keep enough antibody at a mucosal surface to stop the virus from actually getting in. Antibody is secreted through here by various means, but we need a lot of antibody around to get enough through into the mucosal area. Excuse me, I didn't mean to blow into the microphone quite so much. We'll see what happens with this soon, in fact, because a vaccine that is going to try to prevent mucosal infection is currently under trial: the human papillomavirus vaccine. Papillomavirus is the cause of cervical cancer in women, and the vaccines are currently under test to see whether we can actually stop that virus from transmitting. That would be an enormous advantage, of course, if we could. Now, for the rest of this discussion I'll talk about cell-mediated immunity. Cell-mediated immunity is to do with getting rid of the virus-infected cell. Viruses are obligate intracellular parasites: they can only grow within living cells. One virus particle will get into a cell and you'll get millions of virus particles out. It's essential, if you're going to terminate the infectious process, to get rid of the virus-infected cell — to get rid of the factory that's producing the virus. This is actually the virus I showed you before, the parainfluenza type 1 virus, in an electron micrograph showing the viruses in the cell. We have to destroy that cell if we are to end the infection. The way that we do that is by cells that are particularly programmed to destroy the virus-infected cell going from the vasculature into the tissues. These are highly migratory cells, the T lymphocytes. They migrate through between vascular walls — here you can see some in the blood vessel lumen, migrating through and then emigrating, in this case, into the central nervous system. What they do is induce apoptosis in the virus-infected target cell: they trigger the cell suicide pathway, and the cell in fact self-destructs. As we have come to understand over the last few years, altruistic cell death is a basic feature of cell biology, and this is what the T cell does: it causes the virus-infected cell to suicide. This works through the classical cell death pathway, the same one that's used by the Fas–Fas ligand interaction. In the case of the T cell, the T cell carries large granules in its cytoplasm, which include two sorts of molecules involved in the induction of this process. One is the perforins, which make a channel in the membrane of the target cell at the point of interaction between the T cell and the target. The other is the granzymes, which are serine esterases — various types of esterase — which are then thought to pass through those channels and trigger the cell death pathway. This is thought to operate through the caspases, and it induces the latter part of the same pathway that is induced by the Fas–Fas ligand interaction. This causes rapid cell death: the cells are certainly dead within five or six hours after induction of this pathway. When T cells induce cell death by the Fas–Fas ligand pathway, it seems to take longer, but it's also effective and also works, we believe, in viral immunity.
Now, of course, if you're going to have cells that are going around the body killing other cells, you want to have that under very precise control, because you don't want promiscuous killing; you don't want large-scale tissue destruction. These are very powerful cells: they will kill one cell, then go on and kill another and another and another, so they have very great potential. But they will only kill the cell that has been specifically modified by the process of infection. They will not kill the cell next door, which is not modified. And the way that works, we all now understand, is that when the virus infects — of course, its main motivation, all the virus cares about, is to produce new virus particles and to keep infecting other individuals; that's how it survives in nature, that's how it evolves — when the virus infects and uncoats, it exposes its nucleic acid and it makes new protein. That protein is always made in excess. Some of it is chopped up by the proteasome and transported through the TAP transporters into the endoplasmic reticulum, where it associates with the nascent class I transplantation molecules, the class I MHC glycoproteins, the major histocompatibility complex glycoproteins. These are called major histocompatibility complex glycoproteins because they were discovered by people in Joe Murray's lineage who were working on transplantation. We knew that we had molecules that were involved in transplantation, that were specifically recognized in transplantation; for many years after we understood a great deal about transplantation, we had absolutely no idea why those molecules existed. Why would God design a system whereby a kidney from one person was rejected when it was given to another person? There was no evolutionary reason for that — we don't normally have kidneys transmitting between people in nature. Was it just there to frustrate the transplantation surgeons? Some of the physicians thought that was probably a reasonable motivation, because they think transplantation surgeons are pretty appalling. We know that transplantation surgeons are actually among the few people who can really help us when we get into real trouble. It turns out that what these so-called major histocompatibility complex molecules are for is, in fact, to signal that our own self has been changed, by carrying peptides from a foreign source — in this case from the virus — to the cell surface. So we call it the major histocompatibility complex and we talk about transplantation. If we had discovered it another way, it would have been called the self-surveillance complex or the self-monitoring system. It's just historical that it was discovered in that way and has that terminology. Science is full of that sort of situation, where you have a terminology that constrains you to think in particular ways, that is actually determined historically and has nothing to do with the real biology of the system. For instance, if you take the immune system, there are all sorts of proteins, enormously important ones, that are called interleukins — which says that they were discovered originally in leukocytes and suggests these are something to do with the immune system. When we then turn around and find those interleukins expressed in the brain, the question asked is: why is something that's in the leukocytes getting expressed in the brain? Why are we expressing an immune system molecule in the brain?
Well, it's probably the other way around. Evolutionarily, the brain is a lot older than the immune system; there are organisms that have brains and don't have immune systems. It's probably that the immune system has stolen something from the brain, rather than the other way around. But the way that things are described conditions your perception of them. It's one of the things we have to be careful of in science as we try to think conceptually about complex systems — that we don't get locked into rigid intellectual frameworks, which is a problem for all of us, I think, at times. So this is what the T cell recognizes. It recognizes, very specifically, a short peptide from the virus — an 8-, 9- or 10-mer — presented in the transplantation molecule, which now makes the cell look foreign. These peptides, as I said, can come from any protein of the virus, internal or external. They can come from polymerases and ribonucleotide reductases — it doesn't matter what it is; as long as it's from the virus and it's not self, it has the potential to be recognized. We tend to have immunodominant peptides: in a large virus we may have, say, one or two peptides which are normally recognized in association with a particular transplantation molecule. A lot of people are working that out. Robert Huber told me he thinks, from their studies, that this is working at the level of the proteasome. But there are still a lot of questions to be answered in terms of antigen presentation and processing. Now this is what the thing actually looks like. This is the tip of the class I MHC molecule, and this is the peptide. I think when we saw these pictures, which was 1996 — this is from Ian Wilson's laboratory at the Scripps — we were all surprised at just how little of the actual non-self is exposed on the surface; this is the tip of the MHC molecule. The T cell receptor sees both this and that, so the T cell receptor is seeing both self and non-self, and it interacts also with the self major histocompatibility complex components. And when we get into discussions of thymic differentiation and so forth, we get very much into considerations of this dichotomy. Now, that's one sort of picture of the peptide in the MHC molecule. This is actually another picture: a monument which is at Memphis International Airport. Though I'm Australian, I work at St Jude Children's Research Hospital in Memphis, Tennessee. Memphis, Tennessee does not regard itself as one of the great intellectual centers of the world — it's well known for Elvis and for pork barbecue, but it doesn't have great academic pretensions. So they were enormously pleased when a typical Southern American boy like me was awarded the Nobel Prize, and they decided on erecting a monument, which they did in the departure area of the airport. This is a 12-foot high stainless steel cenotaph — Memphis was the ancient capital of Egypt — and here we see a hologram of the class I MHC molecule carrying peptide. They claim it as the first hologram monument in the world, and I don't know what other actual statues of protein molecules there are in the world; but Max Perutz will know that, and I will ask him afterwards and he can tell me. Fortunately, it doesn't have a picture of me on it, so I can walk through the airport and be absolutely anonymous. But it does have a plaque saying something about me and about Zinkernagel and so forth.
Now, they wanted to attribute that structure to Zinkernagel and to me. It's not our structure. It came along after our discovery. It was actually the structure of Pam Bjorkman, Don Wiley and Jack Strominger at Harvard. And I said that if you attribute this structure to Zinkernagel and to me, that will be plagiarism; not only plagiarism, it will actually be monumental plagiarism. But I said that Don Wiley has some affiliation with Memphis: in actual fact, his parents live in Memphis. So he's now described on this as a native Memphian, Don Wiley. In actual fact, he has never lived in Memphis in his life, but that makes very little difference. Once one gets outside the sort of scientific laboratory world that we normally live in and you get into the world of the media, you find you're in a totally different situation, where truth is extremely relative and doesn't matter too much. With due apologies to the gentlemen of the press, who I'm sure are all from the scientific press and are extremely careful. But I've had a lot of public debate and so forth in Australia after the Nobel Prize, and I discovered that one can have some extraordinary experiences with, particularly, the print media. Now, respiratory infections. For the rest of this I shall talk about some experiments, and just about what happens in a fairly simple respiratory infection. As you can see, respiratory infections are a major problem. They continue to be a major problem. This is Second World War British propaganda, where they're trying to stop people from infecting each other and thus decreasing production. The influenza epidemic - the world's worst virus infection epidemic before the AIDS epidemic - was the 1918-1919 influenza epidemic. It came at the end of the first European war. It contributed considerably, I think, to ending that war, because people were dying in the trenches on both sides. It was called the Spanish influenza. That was because the Spaniards were the only people who would admit to having it: they weren't involved in the war, so the Allies weren't going to admit to having it and the Germans weren't going to admit to having it, so it was blamed on the Spaniards. Now, it actually killed, worldwide, somewhere between 20 million and 40 million people. That's many, many more people than the war killed, and it killed them all over the world. When I say between 20 million and 40 million: it certainly killed 20 million, and it's thought it may have killed another 20 million in India, but the British were controlling it and they didn't bother to count them. So it was an enormous epidemic. Even though we understand how influenza works now - they didn't know what the influenza virus was at that stage - and we know we can make a vaccine against it, it would still take us probably six months to get large quantities of vaccine out into the population, certainly three months, three to six months. This killed people all over the world before there was jet transport, and while there were still quite strong quarantine measures, and acceptance of quarantine measures, which no longer really work. And so we're all actually rather frightened of another influenza epidemic, which is why there's a very strong virus watch program going on in China at the moment.
One of my colleagues, Rob Webster, is setting up activities in Hong Kong, because a lot of these viruses tend to come out of Asia, and we're watching all the time to see if there are emerging influenza viruses, because we think that, apart from AIDS, this could be something that would give enormous problems. There is no way you can protect yourself against a respiratory infection, despite this cartoon. You can't stop yourself getting it. There's no behavioral change that will stop you getting it. The only thing that worked in this epidemic, that stopped a population getting the infection, was in American Samoa, which was controlled by the United States Navy, and they shut up American Samoa: they didn't let anyone in, they didn't let anyone out. It was just like Alcatraz Prison, but nobody died. In Western Samoa, which was controlled by democratic New Zealand, lots of people died. This is the influenza virus, an electron micrograph, and what we're going to talk about with this is mouse experiments. Now, the basis of all immune responses, of course, is clonal expansion and differentiation. There are relatively few lymphocytes which bear the receptors that recognize, say, the influenza peptide, or the influenza protein if they're B cells. What the immune response is, is a process of proliferation. You select from those very few lymphocytes, they proliferate, and then we end up with large numbers of lymphocytes. They proliferate and differentiate: some differentiate to be effector cells, some go on to become memory cells. We talk about a primary response - this is one where the individual has never encountered the infection before, the first time we encounter influenza - or a secondary response: we've encountered this infection before, we've got better, we're okay, and then suddenly we get something like that again. What will happen then is we will have expanded numbers of lymphocytes which can react to that infection. We will get a more rapid response. These are memory T cells, and we're talking about a T cell response. The characteristic of memory T cells is that there are many more of them than there are primary T cells, and they're also partially differentiated - probably their chromatin is changed and so forth - and they turn on much more quickly and they will respond much more quickly. So we will get a more rapid response. But, as I will show you, there are limitations to this. These responses occur in the lymph nodes, which is the part of the body that's specialized for immune responses to develop. It's a sort of nurturing environment that allows all the various components of the immune system to come together to get this response going. The first thing that will happen in any virus infection, if you get a respiratory infection, is that the lymph nodes will swell, and you will have swollen lymph nodes in your neck if you've got a respiratory infection. What is happening there is a non-specific recruitment of white blood cells that are in the blood and the lymph into those particular lymph nodes that are infected. And it's thought that some of the cytokines and so forth that are secreted by virus-infected cells, particularly alpha interferon, are draining into those lymph nodes and causing this selective recruitment to the site where the immune response needs to occur. And so what happens is something like this. This is a cartoon of a respiratory infection in a mouse. We infect the mouse intranasally; that is, we anaesthetize the mouse and give it the virus down the nose, influenza virus.
These viruses grow only in the superficial epithelial cells of the respiratory tract. They only cause productive infection at that site; that is, they only make new virus progeny at that site. The reason for that is that those are the only cells that have an enzyme which cleaves one or other of the surface proteins of the virus; with influenza it's the hemagglutinin protein. So you can get a defective infection in other cell types, but you don't get virus production: you need that enzyme to actually give you a productive infection. So even though these viruses only infect that superficial layer of cells, they're still highly lethal. They don't generalize, they don't go systemic, they don't infect any other site in the body, but they're still highly lethal, simply by destroying the respiratory tract so that we can no longer breathe. Some of the virus will also go into dendritic cells, which are specialized antigen-presenting cells that are particularly apparent in the upper part of the respiratory tract. They're all around the body in various forms, but they're there in large numbers in the respiratory tract. They will carry virus, or processed virus, down to the lymph node, where the immune response will then develop. That will take six or seven days, and then after that we will start to see the T lymphocytes coming out - and the B lymphocytes as well - and going, particularly the T cells, into the virus-infected respiratory tract. They'll come out through the lymph, into the vena cava, then into the bloodstream, and they will escape, more or less on a stochastic basis, into the virus-infected respiratory tract. Here they will become fully functional killer T lymphocytes, which will bump off the virus-infected cells here, and gradually the infectious process will resolve. If we look at these inflammatory sites, which we obtain by bronchoalveolar lavage - you will see the term BAL in some of the slides - we'll find about half the cells that are there are monocyte-macrophages, and of the rest about 60% are CD8 T cells, the killer T cells, and about 30% are CD4 T cells, the helper T cells. After about 10 days the virus will be eliminated, the whole thing gradually resolves, and the individual gets better, though people can actually remain quite unwell for some time after respiratory infections, and that may be due to continuing cytokine release. Now, these are the inflammatory cells we obtain by bronchoalveolar lavage, and as I said, the CD8 T cells actually work in this infection by direct killing. They can work through killing by that perforin mechanism, or killing by the Fas mechanism. With influenza, CD4 T cells - I haven't said much about them - can also control the infection, but they do that by providing help for the antibody response. The essential point about the CD8 T cell is that it must make contact with the virus-infected target cell. I've done a lot of experiments to show that over the years, I've been very emphatic about it, and someone gave me this slide. This is not to discourage discussion with the students later; they're probably bigger and stronger than I am anyway. Now, memory is established, and it's enormously important. It's been known for many, many years, and it was first described by Thucydides - I think he's spelt wrongly there - writing about the Peloponnesian wars: that only those who had recovered from the plague could nurse the sick, because they could not catch the disease a second time. The altered state is specific. This is the basic dogma of immunology.
He was somewhat ahead of any immunologist that I know of. That's Salvador Dali's idea of memory. Of course, he's thinking about neurological memory; you can see the floppy clocks and so forth. He was influenced, I think, by the sort of intellectual discussions that intelligent people used to have at that time. They used to talk about Einstein's relativity theory and so forth; that's the floppy clock. Now they would probably discuss Dallas, or the tennis. Here you can see an immunologist, this sort of floppy, spineless-looking creature. The way we thought about memory until very recently came from techniques which are really rather imperfect. We did not have methods for directly measuring the number of antigen-specific T cells. This was a considerable problem for us. All we could do was take lymphocyte populations, put them in little culture wells, expand microclones of T cells and try to count those clones. We had to proliferate those T cells over six or seven days to get enough T cells to read out in our assays. This required that the lymphocytes go through at least 10 or 12 cycles of division in the tissue culture wells, and we never quite knew whether we were measuring all the T cells. Still, from that protocol we developed this sort of idea of memory, which is basically correct, though some of the numbers may actually be wrong. That is: initially, in the immune response, you get enormous clonal expansion, a great overproduction of T cells, and then some of them stick around more or less forever as memory T cells. They may be less activated as time goes by, they may be less readily re-stimulated, but they are increased in numbers for a long time, certainly for the life of the laboratory mouse. We know much less about long-term memory in humans, which is a problem for us, because humans live about 35 times as long as a laboratory mouse, and I am not at all sure that one year in the life of a laboratory mouse is equivalent to 35 years in the life of a human. This is the influenza virus. As I said, these viruses vary tremendously, due to this reassortment or recombination process, and also because of immunoselection in the presence of antibody. The T cell response can be directed against the internal proteins. What I will talk about is a response that is directed against an internal nucleoprotein peptide, which is actually very conserved between the different influenza viruses. That means we can infect mice sequentially with different influenza viruses which will not be recognized by neutralizing antibody - will not be cross-neutralized - but which share the same T cell response element, because they share the nucleoprotein peptide. We can now study that response extremely quantitatively. This was studied by cytotoxic T cell activity, the killing activity. Here you see the primary response, slower and of smaller magnitude than the secondary response. This is an old slide; it goes back to the late 1970s. We can now study this in a very, very quantitative way indeed, because we can now actually stain the T cell, the T cell receptor that recognizes the peptide-MHC interaction. The way we do that is by the use of tetramers, which were developed by John Altman working in Mark Davis' laboratory in Stanford. The first papers using these things were published in 1996. What he did was take MHC molecules plus peptide and then link them on an avidin core. This gave us something with sufficient avidity, or affinity, to bind to the antigen-specific T cell.
Now we can directly visualize the T cell, something we could not do before: directly visualize the antigen-specific T cell using the flow cytometer. These reagents are labeled with various fluorochromes, and then, as the cells pass through the beam of the flow cytometer, we can actually stain the immune T cells directly and actually measure their numbers. I'll use that just to show you the difference between a primary and a secondary T cell response. This is a primary response to a Hong Kong influenza virus, H3N2: hemagglutinin 3, neuraminidase 2. On this axis we see staining for the tetrameric reagent that recognizes the nucleoprotein peptide, which is immunodominant in this response in an H-2b mouse. Here's CD8. Here's days after infection. These are bronchoalveolar lavage cells, which are washed out of the lung. Three days after infection, we see nothing. Five days, we see nothing. Seven days, we see a few cells. The virus is eliminated between day seven and day 10. Here we see about 12% of the CD8 T cells that we isolate from the bronchoalveolar lavage - that's this top right-hand corner - staining both with CD8 and with the tetrameric reagent specific for the virus. Here you see the difference with a secondary response. These mice were primed eight months previously with this other influenza virus, the H1N1 virus. They share the peptide that's recognized by the CD8 T cell, but they don't share the surface components recognized by neutralizing antibody. Here you see the difference between a primary and a secondary response. Nothing on day three, which is a bit disappointing for people who are trying to make CD8 T cell vaccines. Here we see the T cells on day five, day seven; by this time, 70% of the CD8 T cells in the bronchoalveolar lavage have that specificity. This was quite extraordinary to us. We had absolutely no idea that this was actually the case. We've been able to confirm that by different techniques; I won't go into that. Can we skip the next two slides, please? Not that, go on. That's just another technique showing the same thing. Go on. Here we see what happens. This is the virus growing in the respiratory tract in a primary infection: you see the virus we isolate from the lung, and then it's eliminated. These are the virus-specific CD8 T cells coming up here that are responsible for the elimination of that virus. This is the secondary response. Early on, we get just as much virus growth in the respiratory tract; then the virus is eliminated more quickly. Here we see the much bigger secondary response. We get enormous numbers of lymphocytes in the lymphoid tissue, in the spleen and so forth, in the secondary response. This is the primary response; we can just detect them with our flow cytometer in the primary response. Here's the spleen. This is the secondary response: massive numbers of T cells. This is creating enormous numbers of questions for us as we try to understand the immune system. Here we have a secondary response, with a virus that grows only in the respiratory tract, causing 25% of the CD8 T cells in the spleen to be of a single specificity. As it was, we had enough difficulty understanding how it is that we cover all the bases with immune responses: how do we have enough lymphocytes to really recognize all these various infectious agents, and when we have an immune response, how do we fit them all in? The immune system doesn't basically change in size. It's subject to homeostatic control, just like any other organ in the body.
Here, now, we have 25% of the CD8 T cells in the immune response with a single specificity. This is now rather typical of virus infections. It's 25% immediately after the infection; by 100 days after the infection, we're still at around about 8% of the T cells with that single specificity. It makes it even more difficult for us to understand what is really the most difficult problem in immunology: that is, understanding homeostatic control. It is attracting a lot of conceptual activity, but it's difficult to actually work on, because we're dealing with populations of cells which are dispersed around the body. They're in the lymphoid tissue, they're in the blood, they're in all sorts of different tissue sites. It's really quite a challenge to understand how this whole thing works, especially when we're getting these large numbers of T cells specific for a single entity. That simply summarizes what I've said, and I'll leave the story there. This is Dali's last word on the program. I'm not sure what he's symbolizing there. I think all these arrows and things here are actually Jak and Stat molecules, and they're showing that the signal transduction people are taking over the field anyway, as they are with everything else. This is the last word on Burnet, actually. Burnet, because it's his centenary, is actually traveling around the city of Melbourne, where he worked in the Walter and Eliza Hall Institute, on the side of a tram car. He shares that tram car with Howard Florey, another Australian scientist who was awarded the Nobel Prize, who would have been 100 years old last year if he'd lived that long. Here we have Burnet - I don't know how well Florey and Burnet got on, but they're on the same tram car - and here we have Burnet saying "science to me is the finest sport in the world". Not surprisingly, this doesn't really resonate with the Australian popular culture, which is totally sport-obsessed. Also, most of the scientists in Melbourne don't seem to realize that Burnet is rotating around Melbourne on this tourist tram. I talked there last year and pointed out that this was the case, and none of them had actually seen the tram car, though it's been going around Melbourne for a year. There's just a point: if you're thinking about orienting your career so that you win the Nobel Prize, don't actually believe that you're going to achieve any permanent glory whatsoever. The best that you're likely to achieve, in the popular sense, is to end up on the side of a tram car, and that only very briefly. They've recently taken Faraday off the British 20 pound note, I think it was, and replaced him with a musician. The reason that was given was that Elgar had a much more spectacular moustache, which would be very difficult to counterfeit. Thank you.
|
This year we celebrate the centenary of the eminent Australian scientist Sir Frank MacFarlane Burnet, who was born just two years before the award of the first Nobel Prizes. Burnet spent the first 30 or more years of his professional life studying virus infections, then switched to the related field of immunology. He shared the 1960 Nobel Prize for Physiology or Medicine with the English immunologist Sir Peter Medawar for developing the theory of immunological tolerance, an area that is still a major research focus today. Burnet also did seminal work on the influenza viruses, and formulated the clonal selection hypothesis that remains the central dogma of immunology. However, like all of us, he was not infallible. Writing in his autobiography of 1967, he stated that the era of infectious disease was essentially over, and that we were now simply in a mopping-up operation. Sadly, this has turned out not to be true. Those of us in the infectious disease community are acutely aware of the enormous danger posed by the human immunodeficiency (HIV) viruses that cause AIDS. Though the experience in the advanced countries and, recently, in Thailand, has shown us that appropriate preventive measures can limit the spread of this deadly virus, the disease is spreading with enormous speed in much of the developing world. The acute need for an effective vaccine is well recognized, with the US government alone currently spending some 150 million dollars per year on the enterprise. There are two elements to this. The first is to control the current human disaster. The second is that we are dealing with a pathogen that we have not yet been able to defeat. What would happen if a similar virus came along that spread, for example, by respiratory infection? All of us know that, apart from receiving the current vaccine, there is no way that we can avoid catching the 'flu when it is spreading through our community. Somewhere between 20-40 million people died in the 1918-1919 influenza pandemic. This infection contributed to ending the 1914-1918 war, the deaths occurred world wide, and the spread was not stopped by various behavioral interventions, such as wearing facemasks. The present discussion will focus on how we deal with virus infections. The mammalian immune system has clearly evolved to protect the multiorgan, multicellular higher vertebrates against parasitism and destruction by simpler organisms, such as viruses and bacteria. A variety of mechanisms are used, from the innate response that has roots in early phylogeny to the specific immune system that we manipulate when we give vaccines. The essential components of the specific response are cellular, the T and B lymphocytes that move around the body and produce a variety of effector molecules and functions. Much of the progress over the past 30 years has resulted from isolating these various lymphocytes and analyzing their properties in isolation. More recently, we have begun to develop a clearer understanding of how the various components of immunity work together to defeat invading pathogens. The situation is both challenging and enormously complex. The necessity of dealing effectively with existing, and emerging, infectious diseases is likely to remain at the forefront of preventive medicine for the foreseeable future. The risks are increased by global warming, rapid international travel and the continuing growth of the human population. There are great intellectual and practical challenges.
Much remains to be discovered and illuminated by fresh, young minds dedicated to science and rational inquiry.
|
10.5446/55037 (DOI)
|
All right, thank you. So today I'll talk about the Hilbert scheme of points on the plane and its connections to link homology, mostly conjectural but partially proved. And before link homology I just want to spend some time talking about the Hilbert scheme of C2. Many people here know what it is, but I want to give a slightly different perspective, related very clearly to Joel's lectures. So what is the Hilbert scheme of C2? You have zero-dimensional subschemes of C2. C2 is affine, so the ring of functions on C2 is C[x,y], polynomials in two variables, and any zero-dimensional subscheme is defined by some ideal. So we're looking at ideals inside C[x,y] such that the codimension, the dimension of C[x,y] mod I, is equal to n. And it turns out that this space is smooth of dimension 2n, a theorem of Fogarty, and it is symplectic: there is an algebraic symplectic form on it which is non-degenerate everywhere. And in fact, to connect to Joel's lectures, this is an example of a symplectic resolution. I would say it's my personal favorite example of a symplectic resolution. I think for Joel it was T*P1; for me, whenever I think of a symplectic resolution, I think about this example, really. So why is it a resolution, and what is it a resolution of? You have the Hilbert-Chow map, which sends an ideal to the support of the corresponding zero-dimensional subscheme, the support of the quotient C[x,y] mod I. And this is n points on C2 with multiplicities. So you have a map from the Hilbert scheme of n points on C2 to the symmetric power of C2. Again, these are just unordered n-tuples of points. And this set of unordered n-tuples of points is very singular, the Hilbert scheme is smooth, and in fact it is a resolution of singularities of the symmetric power. And you can say more. For example, this is a Poisson variety, an affine Poisson variety, and then this one is non-affine but is a symplectic variety, and the forms agree. One way to see it is to ask: what is the symmetric power of C2, really? What are the functions on the symmetric power of C2? Well, we have n points with coordinates x1, y1, x2, y2, x3, y3 and so on. So we have functions on those, and these will be just polynomials in x1 through xn, y1 through yn. And then we quotient by Sn, so we take Sn-invariants, where the action of Sn is diagonal: we permute the x's and y's simultaneously. We take the Spec of that, and that is clearly a Poisson variety, because we can just say that the Poisson bracket of xi and yj is delta ij; maybe I have to add a sign. And so this defines the Poisson bracket on C[x1, ..., xn, y1, ..., yn]. It's Sn-invariant, so it defines the Poisson bracket on the Sn-invariant functions. So this is a Poisson variety, and this is the resolution of that, which turns out to be symplectic, I think by a result of Beauville if I remember correctly. An additional piece of data which was very important for Joel's lectures is that this is a conical symplectic resolution and it has an additional torus action. So what are these here? You have two C*-actions on C2, and they lift to C*-actions on the Hilbert scheme. The Hamiltonian torus scales x by q and y by q inverse. (There is an echo. There's a little bit of an echo.) So there is a Hamiltonian torus which scales x by q and y by q inverse. This Hamiltonian torus preserves the standard symplectic form on the plane, and one can prove that it preserves the standard symplectic form on the Hilbert scheme as well.
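To fix notation, the objects just described can be written roughly as follows (the conventions, for example the sign in the bracket, follow the speaker's description and may differ from other references):

\[
\mathrm{Hilb}^n(\mathbb{C}^2) \;=\; \{\, I \subset \mathbb{C}[x,y] \ \text{an ideal} \ : \ \dim_{\mathbb{C}} \mathbb{C}[x,y]/I = n \,\},
\]
\[
\pi \colon \mathrm{Hilb}^n(\mathbb{C}^2) \longrightarrow \mathrm{Sym}^n(\mathbb{C}^2) \;=\; \operatorname{Spec}\,\mathbb{C}[x_1,\dots,x_n,y_1,\dots,y_n]^{S_n}, \qquad I \mapsto \operatorname{supp}\big(\mathbb{C}[x,y]/I\big),
\]
\[
\{x_i, y_j\} \;=\; \delta_{ij} \quad (\text{up to sign}),
\]

so the symmetric power is an affine Poisson variety and the Hilbert-Chow map \(\pi\) is a symplectic resolution.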
So this is what Joel denoted by T in his examples. And then you have this conical structure: you want some dilating C*-direction, and this dilating direction just dilates everything, you send x and y to tx and ty. Sometimes people use q and t in a slightly different sense, and it doesn't matter that much today, but this is the conical torus. It scales everything, or it shrinks everything: as t goes to zero you go to the most singular fiber, the fiber over zero, and it scales the symplectic form by t squared. And another piece of data, which was also important, is that given a Hamiltonian torus you can look at the attracting subvariety of this Hamiltonian torus, or you can look at fixed points first of all. Yes, maybe let's talk about the fixed points first. So the fixed points for the Hamiltonian torus are monomial ideals. You take any partition lambda of n (we draw it in French notation, because we're in France) and we fill every box of this partition with monomials in x and y. In the horizontal direction we have powers of x, in the vertical direction we have powers of y, and so in particular this box will be x to the 6, this box will be x to the 5 times y because it goes up by 1, this will be x squared y squared and so on, and this will be y to the power 4. And then the monomial ideal is generated by all monomials outside of this Young diagram, in this yellow region, and in this example the monomial ideal is generated by x to the 6, x to the 5 y, x squared y squared, x y cubed and y to the 4. And one can check that all fixed points for the Hamiltonian torus action are actually isolated, there are finitely many of those, and they're labeled by partitions of n as I just described. So we are in the setting of Joel's lecture. And the final piece of data, which I skipped over, is the Lagrangian subvariety, this attracting subvariety. So we look at the locus where the limit as q goes to zero exists, and if we just look at C2, so we look at the action q x and q inverse y, the limit of this thing exists only if y is equal to zero, because if y is not equal to zero, this q inverse y will blow up and go to infinity. And so if you have n points and you have this kind of action, you want all the points to be on the line where y is equal to zero. So the attracting subvariety, in the notation from Joel's lecture, will be the Hilb of C2 "plus", so this will be like Y plus. And in more algebro-geometric notation it is the Hilbert scheme of n points on C2 supported on the line C, and this is the line y equals zero. So we're looking at all ideals such that the support of C[x,y] mod I is a subset of {y = 0}. So all my points are supported on the horizontal line. And one can indeed check (I mean, this follows from the general theory, but one can directly check in this case) that this is indeed a Lagrangian subvariety. It has as many components as fixed points: for each fixed point you have an attracting subset, which consists of all the points which flow into that particular fixed point. All of these components have dimension n, and you can study the singularities of the space and lots of other things, which we will maybe review in a second. And I think that's maybe most of what I want to say about the structure of symplectic resolution. So, are there maybe any questions here? I hope that most of you have seen this picture at some point, or read an academic book where all this is explained very, very nicely and clearly.
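In the same notation, the two torus actions and the attracting locus just described are, roughly:

\[
q\cdot(x,y) = (qx,\, q^{-1}y) \quad (\text{Hamiltonian torus}), \qquad t\cdot(x,y) = (tx,\, ty) \quad (\text{conical torus},\ \omega \mapsto t^{2}\omega),
\]
\[
\big(\mathrm{Hilb}^n(\mathbb{C}^2)\big)^{\mathbb{C}^\times_q} \;=\; \{\, I_\lambda \,:\, \lambda \vdash n \,\}, \qquad I_\lambda = \big(x^a y^b : (a,b) \notin \lambda\big), \ \ \text{e.g. } I_\lambda = (x^6,\, x^5y,\, x^2y^2,\, xy^3,\, y^4),
\]
\[
\mathrm{Hilb}^n(\mathbb{C}^2,\mathbb{C}) \;=\; \{\, I \,:\, \operatorname{supp}\big(\mathbb{C}[x,y]/I\big) \subset \{y=0\} \,\} \;=\; \{\, I \,:\, \lim_{q\to 0} q\cdot I \ \text{exists} \,\},
\]

a Lagrangian subvariety with one \(n\)-dimensional irreducible component for each partition \(\lambda\) of \(n\).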
And maybe one last comment, which is not so important. Joel mentioned category O for this action. We want to quantize modules, or sheaves, on the Hilbert scheme supported on this Lagrangian, and the quantization is known and well studied by many people: it's known as the rational Cherednik algebra. So maybe I'll just say it as a word; it's not important for what I will say later, but it's kind of related to what I said last time. So, just to connect things: the spherical rational Cherednik algebra is the quantization of this Poisson structure. And in particular there is a notion of category O, where things are supported on this Lagrangian, and there is a lot of interest and study. You can look at, I don't know, Etingof's lectures on Cherednik algebras, where all this is discussed in detail. Question? Okay. Right. So, any questions? No questions. Good. So let's keep going. We need a bit more structure here. Okay. So a bit more structure: because it's not affine, we need to study some interesting bundles on the Hilbert scheme, and the most obvious bundle is the tautological bundle T. I think in Richard's notation it would be E, but I stick with T. This is a rank-n vector bundle over the Hilbert scheme, and the fiber over an ideal is just the quotient of C[x,y] by that ideal. So this is really a vector bundle of rank n. And given T you can cook up lots of other vector bundles by taking Schur functors. In particular, you can take the determinant of T, the n-th exterior power of T, and this is known as O(1). So this is a line bundle on the Hilbert scheme, and in fact it's an ample line bundle. The Hilbert scheme is not a projective variety, because it's not compact, but you have an embedding into some projective bundle over something affine using this O(1); let me say it this way. Said differently, maybe another way to think about this O(1) is to say that this resolution of singularities is actually a blow-up of some explicit ideal in the symmetric power, and then O(1) is the line bundle associated with that blow-up. Anyway. How does one have to change the underlying space to get the trigonometric or elliptic case? Yeah, so for trigonometric we have to consider the Hilbert scheme of C cross C*, or the cotangent bundle of C*, and for elliptic, I guess I'm not an expert, but maybe you need something like T* of an elliptic curve. I don't know, I forgot. Anyway, what I want to say in connection to Joel's lectures, just to close this general introduction, is that in fact many people, starting from Nakajima, and then later, on the other side, Braverman, Finkelberg and Nakajima, identified the Hilbert scheme of C2 with both the Higgs and the Coulomb branch of the same theory. So in the notation of Joel's lecture yesterday, we have to choose a group and a representation. The group is GL(n). The representation is the Lie algebra of GL(n), the adjoint representation, plus Cn, and it corresponds to the quiver where I have one vertex with label n, one loop corresponding to this adjoint operator in the Lie algebra of GL(n), and this Cn corresponds to the framing, which goes from the framing vertex with label 1 to this vertex with label n. And then, again, it's quite classical by now that this is the Higgs branch of this theory.
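A compact way to record the bundle and the gauge-theory data just mentioned; the ADHM-type presentation below is the standard one and is only a sketch of what the speaker refers to:

\[
T\big|_{I} \;=\; \mathbb{C}[x,y]/I \quad (\operatorname{rank} n), \qquad \mathcal{O}(1) \;=\; \det T \;=\; \Lambda^n T \quad (\text{ample}),
\]
\[
\mathrm{Hilb}^n(\mathbb{C}^2) \;\cong\; \big\{\, (X,Y,i,j) \;:\; [X,Y] + ij = 0 \,\big\}^{\mathrm{stable}} \big/ GL_n, \qquad X,Y \in \mathfrak{gl}_n,\ \ i \in \mathbb{C}^n,\ \ j \in (\mathbb{C}^n)^*,
\]

which is the Higgs branch for \(G = GL_n\) acting on \(N = \mathfrak{gl}_n \oplus \mathbb{C}^n\) (the one-loop quiver with a one-dimensional framing); by Braverman-Finkelberg-Nakajima the Coulomb branch of the same theory is again \(\mathrm{Hilb}^n(\mathbb{C}^2)\).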
So if you take the cotangent bundle to N and quotient by G, with this, I guess, triple quotient, whatever, this is the Hilbert scheme, and this is explained for example in Nakajima's book. And more recently Braverman, Finkelberg and Nakajima proved that if you do this Coulomb branch construction with affine Grassmannians and all this stuff from Joel's last lecture, you recover this quiver. And so, for people who like geometric representation theory, this is the affine A1 quiver with framing, and it turns out to be self-dual under this magical symmetry, and we will see some instances of this in a little bit, although, I mean, I'm not sure you can recognize these instances as being the Coulomb branch or the Higgs branch from my talk. Anyway, okay, so this is just the connection to symplectic resolutions, and it's always good to think about it. And one thing which I want to emphasize, which will appear a lot today, is that you have this Lagrangian subvariety, the Hilbert scheme of points supported on a line, and it's not a completely random thing: it comes from this very general construction of symplectic resolutions and the action of a Hamiltonian torus. And we come to the main conjecture, of myself, Andrei Negut and Jacob Rasmussen, and by now it is my understanding that it's mostly proven by, let's say, Oblomkov and Rozansky. They have five or six very long papers on the arXiv, and they're working on even more, so I think most of the details are written out already, and there are maybe some subtleties which they are finishing, but I think the conjecture is now mostly proven by Oblomkov and Rozansky. So: starting from a braid on n strands, you build a sheaf, or strictly speaking an object in the right category, and I don't want to talk about subtleties of homological algebra here, but for now just a C* cross C*-equivariant sheaf on the Hilbert scheme of points. Not on the whole Hilbert scheme of points: on this Lagrangian subvariety, on Hilb of (C2, C). And it is supposed to be C* cross C*-equivariant with respect to both the conical and the Hamiltonian action. And the most important part of the conjecture is that you can actually recover the triply graded homology of your braid from this thing. So you take sheaf cohomology. This is a coherent sheaf, maybe I should say this, because yesterday we had constructible sheaves; this is really a coherent sheaf. So you take the sheaf cohomology of the Hilbert scheme of n points on C2 supported on a line, and you take this sheaf, and you tensor: you can take it by itself, that's already fine and good for most purposes, but if you really want to get the full homology you tensor it with the exterior algebra of the tautological bundle. So, unlike the previous talk today, this T is not the tangent bundle, this is really the tautological bundle; maybe I should write it here explicitly, this is the dual tautological bundle. And then, because the sheaf is C* cross C*-equivariant and the tautological bundle is C* cross C*-equivariant, you just take the space of sections, and its higher cohomology if there is any, and you take the character of the C* cross C* action on that, and that's your answer. So this thing is naturally triply graded: the a-degree comes from this part, from the exterior algebra of the tautological (or dual tautological) bundle, and the q- and t-degrees come from the action of C* and C* on the plane, on the Hilbert scheme and on the sheaf. And you have to be careful, strictly speaking, if you do have higher cohomology here.
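Schematically, the conjecture just stated reads as follows, writing \(\mathcal{S}_\beta\) for the conjectural equivariant coherent sheaf attached to the braid (the name \(\mathcal{S}_\beta\) is just notation for this summary):

\[
\mathrm{HHH}(\beta) \;\cong\; H^{*}\Big( \mathrm{Hilb}^n(\mathbb{C}^2,\mathbb{C}),\ \mathcal{S}_\beta \otimes \Lambda^{\bullet} T^{\vee} \Big),
\]

with the \(a\)-grading coming from \(\Lambda^{\bullet} T^{\vee}\) and the \(q,t\)-gradings from the \(\mathbb{C}^\times \times \mathbb{C}^\times\) action, modulo the caveats about higher cohomology discussed next.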
If you don't have higher cohomology, this is a complete answer, and that's well understood. If you do have higher cohomology, it is incorporated all together in a slightly weird way, and these homological degrees are also related to q and t in a very subtle way which I'm not going to talk about. But in many examples that we'll see today you actually don't have higher cohomology, so you don't need to worry about that part; but that's the most subtle part here. And again, then the question is: well, why do we care? As always, that's the question. And again, you can read it in two ways. One way is to say: well, you have an interesting sheaf on the Hilbert scheme; can we interpret it as an invariant of some link, and maybe get some intuition or some structural results from the link homology which might help us? And we might see some examples, or at least I'll say something about examples of this. And in the other direction (and originally this was the main motivation for this conjecture): we really want to compute the sheaf associated to beta, or we want to understand what it is and how to think about it properly. And if we can guess the sheaf of beta, we have lots of different tools to compute the right-hand side: we have equivariant localization to fixed points, under nice circumstances we have cohomology vanishing, and we have other tools, just directly studying the geometry of the Hilbert scheme, to compute this thing. So I will give you very, very explicit examples where we know the sheaf of beta very, very explicitly and we can compute the right-hand side. And historically this was the main computational tool to make predictions about HHH, and in many cases by now these predictions are proven. And that's the structure. So what else do we know about this sheaf? As I said a couple of times already, there is an action of a polynomial algebra on HHH of beta, and if we don't close the braid we really have n different variables, and if we close the braid they glue up according to the cycles of the corresponding permutation, or the connected components of the link. And these x's are just the positions of the points here: we have the Hilbert scheme of n points on a line, we have n points, they're all on a line, and their coordinates are the x_i. So in particular the action of these variables corresponds to the choice of support of the sheaf of beta. And very concretely, if beta, for example, closes to a knot, then all x_i must act the same way on HHH of beta, and that we discussed several times. But on the right-hand side this means that my sheaf is not a random thing: it's actually supported on the punctual Hilbert scheme of n points, so all points must be the same, and up to a shift we can assume that all points are actually at zero. So whenever we're talking about HHH of beta and beta closes to a knot, we're talking about a sheaf not on the whole thing, not on the whole Hilbert scheme, not on the Lagrangian subvariety, but actually about a sheaf just on the central fiber, on the Hilbert scheme of C2 supported at the origin. And one concrete example, which was again kind of motivating for most of the things and developments that I've talked about in this course, and most of the developments in the last 10 years here, is that if you have a torus knot of type (n, kn+1), then your sheaf is actually clear: your sheaf is the line bundle O(k) on the punctual Hilbert scheme of points on C2.
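For a braid closing to a knot, the support condition and the torus-knot prediction just described are, in the same notation:

\[
\operatorname{supp} \mathcal{S}_\beta \;\subset\; \mathrm{Hilb}^n(\mathbb{C}^2, 0), \qquad
\mathcal{S}_{T(n,\,kn+1)} \;\cong\; \mathcal{O}(k)\big|_{\mathrm{Hilb}^n(\mathbb{C}^2,0)},
\]
\[
\mathrm{HHH}\big(T(n,\,kn+1)\big) \;\cong\; H^{*}\Big( \mathrm{Hilb}^n(\mathbb{C}^2,0),\ \mathcal{O}(k) \otimes \Lambda^{\bullet} T^{\vee} \Big).
\]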
So again, let me repeat: you have this line bundle O(1) on the Hilbert scheme of points and you just raise it to the power k, if k is positive, let's say; but actually k could be negative as well. And maybe not, right, not to be too confusing. So if k is positive you know a lot about the sheaf, and its higher cohomology vanishes; if k is negative this is still true, but it's much more subtle. (And I can't erase this, I'm sorry.) And this is a very explicit prediction, and many people, starting from Mark Haiman, studied this answer. I will maybe give you a more concrete example in a second, but this is kind of one very specific instance of this conjecture which tells us a lot. On the right-hand side we have a very specific line bundle on the punctual Hilbert scheme, tensored with a vector bundle, fine, and then you want to compute its space of sections, or its sheaf cohomology; you can do it by many tools, and Haiman did it for you in some sense, so you just have an answer on the right-hand side. But to compute the left-hand side it took really a while, about 10 years, to confirm this prediction. (I don't know how to erase this, fine.) And then another feature of this conjecture, and of this sheaf, is what happens if you add a full twist. So, I don't know, maybe I should; you can write FT_n, and I think later I will call it F, to confuse everyone. So you have the full twist braid on n strands, where you turn them all the way around and they come back to the same position. Okay: how is this conjecture related to braid varieties? It's in no way related to braid varieties, as far as I know. That's a very good question, on which maybe I'll give some comment at the end, but as far as I know, I mean, these are just different models. So you have three different models for link homology; maybe let me just summarize this before we go on. Link homology is related to braid varieties, and that is definitely proven; that's essentially the work of Webster and Williamson, plus other work of Mellit, Trinh and others. Then you have these Hilbert schemes and singular curves, which I talked about last time, and that's a very different thing, and the relation between the two is also not understood. So maybe I'll, okay, sorry, my computer is weird, one second. Okay. So you have three algebro-geometric models for link homology, and just to put everything in context: you have the braid varieties, then you have the Hilbert schemes of the singular curve, and then you have Hilb of C2, and these are three very different things. Here we have the homology of some smooth non-compact algebraic variety with its weight filtration; here you have the homology of a compact but singular variety; and the question is, what is the relation? This one is known to be related to link homology, so this is done; this one is still conjecturally related to link homology, except for examples where we can compute things; and the relation between the two is supposed to be some kind of non-abelian Hodge theory, which I don't know anything about, but I talked about it last time. And then you can ask what is the relation between this and the conjecture that I just stated, and again it's a very good question which is not, I would say, understood at all. Physicists would say that this is some kind of geometric engineering, so the relation there is understood in some sense, and I will talk about it, but just purely in terms of algebraic geometry I don't know immediately how to see this. So maybe one way to get from braid varieties to the Hilbert scheme, just for Joel: there is a relation to character sheaves here, and one can hope that there is a relation between some kind of derived version of character sheaves and the Hilbert scheme of C2, so maybe this would go through character sheaves.
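So, for an algebraic knot, the three models being compared are, very roughly (this is only a schematic summary of the comparison just made):

\[
H^{*}_{\mathrm{wt}}\big( X(\beta) \big) \ \ \longleftrightarrow\ \ H^{*}\big( \overline{\mathrm{Jac}}(C) \big) \ \ \longleftrightarrow\ \ H^{*}\Big( \mathrm{Hilb}^n(\mathbb{C}^2,\mathbb{C}),\ \mathcal{S}_\beta \otimes \Lambda^{\bullet} T^{\vee} \Big),
\]

a smooth non-compact braid variety with its weight filtration, a compact but singular compactified Jacobian (or Hilbert schemes of the singular curve), and a coherent sheaf on the Hilbert scheme of the plane; all three conjecturally compute \(\mathrm{HHH}(\beta)\), and, as the talk says, only the first relation to link homology is fully established.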
I will say a couple of sentences about that later, but again, this is not understood at all, I would say. And this relation here is understood slightly better, but only for algebraic knots, and it's still not clear. So, I mean, all this is pretty much conjectural, and that's why this is hard. And what Oblomkov and Rozansky did, I will say in a second, is something very different: they didn't relate it to braid varieties, and they didn't really use this geometry; they did something different. Yeah, sorry, I didn't really answer the question, but I don't have much to say. Any other questions? Okay. All right, so I'll come back to this question and maybe say more, and confuse you more. One more question, Eugene. Uh-huh. Yeah, I just didn't quite understand, in the second point, the link between the action of the x_i and the support of the sheaf. Yeah, so you have the x_i, or rather the symmetric functions of the x_i; these are global functions on this Hilb of (C2, C), right? So what are the global functions on the Hilbert scheme? As I said, these are just the symmetric functions in the x's and y's. So if all y's are equal to zero, then symmetric functions in the x's are global functions on this thing, and so in particular any module, any sheaf on the Hilbert scheme, gives you a module over the symmetric functions, and this is supposed to match the action of the x_i on the left. That's one way to say it. Another way to say it is that you have a map from the Hilb of (C2, C); you just have a map to the symmetric power of C, recording the n points on the line, and the coordinates here are the symmetric functions that I talked about, and they should act; or you can just look at the image, and these n points here are supposed to correspond to the coordinates x_i over there. Okay, okay, thank you. Good. And so, yeah, in particular, if all x_i act by zero on the left, this means that the sheaf is supported at (0, 0, ..., 0); if all x_i act the same way, this means that all points in the support of the sheaf (or its pushforward to Sym^n of C, if you want) are the same, and up to a shift we can assume that this common point is zero. So this is the picture. Okay, fine. And then another general property, which is very useful in practice, is that if I have a braid beta and I add a full twist to the braid, then the corresponding sheaf is just multiplied by the line bundle O(1). In some constructions this kind of comes for free; in some constructions this is really hard to prove, and I'll talk a little bit about that if I have time. And so, even more concrete examples, kind of unpacking; sorry, can you see me? Hello? Yes? Okay, yeah, sorry, my computer almost crashed and I don't know what happened. So, even more concretely than this example: if you have the trefoil knot T(2,3), then the corresponding sheaf is O(1) on the Hilbert scheme of two points on C2 supported at the origin. The Hilbert scheme of two points on C2 supported at the origin is CP1, is P1, and so we're counting sections of O(1) on P1; this is a two-dimensional space, and we know how to compute it. We can do it equivariantly: we get q plus t (and I just realized I changed my q and t from the beginning of the lecture, but that's fine); it's still a two-dimensional space, and this two-dimensional space matches the two-dimensional space that we saw before.
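The trefoil computation just sketched, written out:

\[
\mathrm{Hilb}^2(\mathbb{C}^2, 0) \;\cong\; \mathbb{P}^1, \qquad
H^{0}\big(\mathbb{P}^1, \mathcal{O}(1)\big) \;\cong\; \mathbb{C}^2,
\]

and equivariantly the character of this two-dimensional space is \(q + t\) (in one choice of conventions), matching the corresponding two-dimensional piece of \(\mathrm{HHH}(T(2,3))\).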
And so, maybe, yeah, another example: beta is T(3,4), the (3,4) torus knot. That's kind of the first really interesting example, and the sheaf is still O(1), but now on the Hilbert scheme of three points at the origin. This Hilbert scheme of three points at the origin is a well-known classical object, the cone over the twisted cubic, and you can resolve the singularity to compute the sections of the sheaf. It's a good exercise for everyone, which is really not so hard, to check that the dimension of the space of sections is equal to five. So you have the Hilbert scheme of three points on C2 at the origin, you take the space of sections of O(1) on that, and it's five-dimensional. And if you do it equivariantly you can find the weights of the sections, the character and everything, and you can check directly (it's also not so hard) that this is the same as the link homology for T(3,4). So maybe, for this particular example, again going back to this question of Joel's, and I keep going back: there are three different models for this T(3,4). Here you have, again, Hilb^3 of (C2, 0) with O(1), and you take the cohomology of that sheaf; here you would have this E6 cluster variety which I talked about, or this, uh, positroid variety P(3,7), I guess, some open subvariety in the Grassmannian Gr(3,7), and you take its homology, more or less, with its weight filtration; and here you would have this compactified Jacobian, uh, for (3,4), so this would be the cone over P1 cross P1, or over a Hirzebruch surface. So this is again some weird space. And so here, again, you have a smooth non-compact space, but it's open in the Grassmannian, and you have the weight filtration; here you have a singular space which you can analyze, the compactified Jacobian of x cubed equals y to the fourth, and you take its homology; again it's singular, but you can compute everything; and here you would have this singular space, the Hilbert scheme of three points on (C2, 0), with O(1), compute the cohomology, and you get the same result. So all of them give the same result, and the same as HHH; in this case we can check everything, but again the relation between them looks quite mysterious, and it still is. Okay, any questions? Okay. And another example, which maybe I'll gloss over briefly, that's fine: if your braid is a torus knot in general. So I talked a lot about torus knots and their relation to the positroid varieties, the relation to these compactified Jacobians; so what do we know here? This was started in particular in the work of myself and Andrei Negut. The sheaf is actually tricky in general, and to get the sheaf we need to consider the nested Hilbert scheme, or the flag Hilbert scheme, on C2, which also appeared this week. So we look at the space of all flags of ideals I_1 up to I_n, all of them supported at the origin in this case, and on this thing we have a lot of line bundles, where L_k is I_{k-1} mod I_k: take the (k-1)-st ideal and quotient by the k-th ideal, and this is a line bundle on the nested Hilbert scheme. And roughly speaking, you take the pushforward of the product of these L_i in some powers a_i, where the a_i are these fractional things: you take the integer part of i m over n, and of (i-1) m over n, and then subtract. You use these a_i to build a line bundle on the flag Hilbert scheme, and then just take the pushforward of this thing to the Hilbert scheme, and that's your sheaf. So that's pretty subtle and not obvious at all, and why this answer is true is an interesting question which we can discuss, but this matches, for example, the constructions of Oblomkov and Rozansky, as they checked, and I can explain where these numbers a_i come from topologically, but not right now. And the main subtlety here, which is kind of the main obstacle, is that this flag Hilbert scheme is a really, really singular space, as we heard from Richard.
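For a general torus knot \(T(m,n)\), the recipe just described (from the work with Negut) is, schematically:

\[
\mathcal{S}_{T(m,n)} \;\simeq\; \pi_{*}\Big( L_1^{a_1} \otimes \cdots \otimes L_n^{a_n} \Big), \qquad
L_k \;=\; I_{k-1}/I_k, \qquad
a_i \;=\; \Big\lfloor \tfrac{im}{n} \Big\rfloor - \Big\lfloor \tfrac{(i-1)m}{n} \Big\rfloor,
\]

where \(\pi\) is the map from the flag (nested) Hilbert scheme of points supported at the origin to the usual punctual Hilbert scheme. The exact rounding convention in the exponents \(a_i\) depends on normalizations; the speaker describes them as successive differences of integer parts of \(im/n\).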
So here we use a probably related, maybe slightly different, construction: we use some kind of dg structure on the flag Hilbert scheme to define this pushforward. Very roughly, you say that this thing is cut out by some explicit equations in some explicit smooth space, and then you just use that to define the virtual structure sheaf and everything. But again, it would be nice to have a better understanding, for example of how the structure that we use here is related to the one that Richard talked about; I don't know. And one reason why it's not clear is that here we have C2, which is really non-compact, and he worked with a compact surface, and that was really important for him; I mean, I don't know if it is the same structure. In any case, you have some sheaf here, and you can also use lots of tools and previous work of Andrei to compute at least the Euler characteristic of the sheaf on the Hilbert scheme over here, and check that it matches what we expect. And then you can ask: well, all this was about knots, when m and n are coprime; what happens if you have more components? And the first basic question is what happens for the identity braid: if you have just n strands, all parallel to each other, how do you think about it? And to explain the answer for the identity braid I need to introduce some auxiliary spaces. So I need to look at the following Cartesian diagram: the Hilbert scheme of n points on C2 projects to Sym^n of C2, and then you have the projection from (C2)^n to Sym^n of C2, which just quotients by the S_n action, and then you take the fiber product, which is denoted by X_n. This was introduced by Mark Haiman, who studied this space a lot, and in particular he proved that X_n is the blow-up of this (C2)^n along the union of the diagonals. So you have n points, and you have diagonals where at least two points collide, and you take the union of all such diagonals and blow it up simultaneously; this is X_n. And then, if you want to unpack this: what is the ideal defining the union of the diagonals? If the i-th and j-th points collide, this is (x_i minus x_j, y_i minus y_j); this is a codimension-two subspace, and if I have the union of the diagonals I take the intersection of the ideals, and this is this ideal J. And so if I blow up, I just take the Proj of the direct sum of the J to the k. So this is a very, very explicit blow-up construction, and the main thing which Haiman proved is that if you push forward the structure sheaf of X_n down to the Hilbert scheme, you don't get O; in fact you get a remarkable bundle called the Procesi bundle, and this has rank n factorial. So this map on the right is generically n-factorial-to-one, so the map on the left is generically n-factorial-to-one, and what he proved is that this is actually flat, so you get a vector bundle of rank n factorial. And this Procesi bundle plays a very important role in the study of the Hilbert scheme, and we will see it in a second here. And another thing, kind of connecting to the nested Hilbert scheme, which is important: you can look at the nested Hilbert scheme of n points on C2, and you have a map to the usual Hilbert scheme, because you have a flag of ideals and you just forget everything, you just project to the last ideal; or you can record the support of each quotient. We have these quotients, and we can look at the support of these quotients; there are n different points, so we have something in (C2)^n, where these quotients are really supported, and this is an ordered set of points.
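In symbols, the auxiliary space and the bundle just introduced are:

\[
X_n \;=\; \mathrm{Hilb}^n(\mathbb{C}^2) \times_{\mathrm{Sym}^n(\mathbb{C}^2)} (\mathbb{C}^2)^n \;\cong\; \operatorname{Proj} \bigoplus_{k \ge 0} J^{k}, \qquad
J \;=\; \bigcap_{i<j} \big( x_i - x_j,\ y_i - y_j \big),
\]
\[
P \;=\; \rho_{*}\,\mathcal{O}_{X_n}, \qquad \operatorname{rank} P = n!,
\]

where \(\rho \colon X_n \to \mathrm{Hilb}^n(\mathbb{C}^2)\) is the projection and \(P\) is Haiman's Procesi bundle.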
And so you have a map here and a map here, and so, by the universal property, since you have maps to both factors, you get a map from the flag Hilbert scheme to the fiber product, to X_n. So you can define the Procesi bundle either by pushing forward O from here, from X_n, to the Hilbert scheme, or by pushing forward O from the flag Hilbert scheme down to the Hilbert scheme, and this is actually the same, roughly speaking, because for this map the pushforward of O is O. And this, actually, I don't think Haiman proved, but we proved it; we needed it for something, and this is in our paper with Andrei and Jake. So again, if you don't like these kinds of weird singular derived dg schemes like the flag Hilbert scheme, you can just think of this; if you like it, then you can think of it as the pushforward of O from here down to the Hilbert scheme. Somehow this is more natural for me to think about, but if you like classical algebraic geometry, maybe you want to push forward O from X_n down there. (Was there some question? Okay.) Anyway, so there is some extra bundle of rank n factorial, whatever it is, who cares. And so now the answer is: what do you associate to the trivial braid? You associate just this Procesi bundle restricted to the Lagrangian subvariety Hilb of (C2, C). And if you have T(n, kn), which appeared a lot, so this is the k-th power of the full twist, then you associate to it the Procesi bundle tensored with O(k), restricted to Hilb of (C2, C). So, yeah, I mean, there is no easier way to describe the sheaf for the trivial braid and for the powers of the full twist, and in general, for braids associated to links with many components, you always expect some kind of Procesi bundle, some restricted version of the Procesi bundle, to show up, and this is an interesting feature of this construction. Okay, and I want to note something, because that will be important for us in a second. So, okay, suppose that we believe in the conjecture; what does it give us? We have this Hilb of (C2, C) with this vector bundle P tensor O(k). The space of sections here is the same as the space of sections upstairs, on X_n of (C2, C), fine, of O(k), by the projection formula; so this is just the projection formula. And X_n of (C2, C), well, X_n at least, we can think of as a blow-up of this (C2)^n, and we know how to compute the space of sections of O(k) on a blow-up: this is just the k-th power of the ideal that we started from. So maybe I'll write it as a separate line: if you just look at the sections of O(k) on X_n, this will be J to the k, and if instead we have the (C2, C) version, well, we have to quotient by the y's, and that's what we do here: we quotient by the maximal ideal in the y's. And, well, yeah, I wrote it here already: H0 of X_n of O(k) is J to the k, just by this blow-up formula, and then J to the k is actually free over the polynomials in the y's. That's not obvious at all, but that was also proved by Haiman. So, to sum up: if we believe in this conjecture, the conjecture says that this J to the k mod y J to the k should be the invariant associated to the T(n, kn) torus link, and one theorem from Lecture 2 is that this is actually true: this is the HHH of T(n, kn) by Lecture 2. So we proved that this is actually true, and in this case the conjecture is true, and that's a very non-trivial check, because we have these very non-trivial ideals and stuff; but of course this particular computation was a motivation for that theorem, we wanted to prove that HHH of T(n, kn) is given by this weird formula.
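The chain of identifications just used can be summarized as:

\[
H^{0}\big( \mathrm{Hilb}^n(\mathbb{C}^2,\mathbb{C}),\ P \otimes \mathcal{O}(k) \big)
\;\cong\; H^{0}\big( X_n(\mathbb{C}^2,\mathbb{C}),\ \mathcal{O}(k) \big)
\;\cong\; J^{k} \big/ (y_1,\dots,y_n)\, J^{k},
\]

using the projection formula, the blow-up description of \(X_n\), and Haiman's freeness of \(J^k\) over the \(y\)-variables; the conjecture (a theorem in this case, by Lecture 2) identifies the right-hand side with \(\mathrm{HHH}\) of the torus link \(T(n,kn)\), up to the \(\Lambda^{\bullet} T^{\vee}\) factor carrying the \(a\)-grading.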
And if you remember, the way I sketched the proof of that J^k mod y J^k statement, I said that you need to deform your homology: you introduce the y's and then kill the y's like this. So in this sense you want to describe a sheaf on the Lagrangian subvariety, but it is much more natural to describe a sheaf everywhere and then restrict it to the Lagrangian subvariety; and in some precise sense this y-ification business corresponds to extending the sheaf from the Lagrangian subvariety to the whole Hilbert scheme of points. Right, so let me sketch at least some ideas for why this might be true, and what the relation is to other things that we know. The first approach, which I certainly don't have time to talk about, and which would really require a whole course of lectures, is due to Oblomkov and Rozansky. What they do is define a link invariant differently: they say, forget about HHH, and they use matrix factorizations over some very complicated space, roughly related to the flag Hilbert scheme of points, or the Hilbert scheme of points, but built out of a bunch of groups and Lie algebras, and you take matrix factorizations of some complicated potential there. What they prove is, first of all, that this is a link invariant, and secondly, more recently, that it is isomorphic to HHH. And then how do you get a sheaf on the Hilbert scheme from there? You just push forward from their setup, from the critical locus of their potential, to the Hilbert scheme; their theory is intrinsically related to the Hilbert scheme, so it is quite natural to get a sheaf from it, but the hard part, of course, is to prove that their theory is isomorphic to HHH, and that is what they recently proved. A different approach, and there are two more approaches which are not completely finished and are mostly conjectural, is in work of myself and Wedrich, and of myself, Hogancamp and Wedrich. We want to understand link homology in the solid torus; this is the same as having invariants of beta up to conjugation, so we don't require any Markov moves, but we do require invariance under conjugation. It turns out that the right homological-algebra language to work with this is the language of derived categorical traces and Hochschild homologies of categories and things like this. We do this for the category of Soergel bimodules, which governs HHH, we compute something, and that is supposed to be related to this arrow between braid varieties and the Hilbert scheme. Roughly speaking, when we close the braid and look at the resulting link in the solid torus, you can associate to it some kind of derived categorical version of character sheaves, and that is supposed to be related to the Hilbert scheme; maybe I don't have time to explain why, but you can ask me later if you are interested. And the thing that I really want to talk about in the last eight minutes is the approach that we took initially with Andrei and Jake, and I think this approach still makes perfect sense, except that again you need to overcome some homological-algebra difficulties. So what we said in the original paper: we consider a graded algebra. What is the graded algebra? You have the full twist braid FT, corresponding to T(n, n), and so you have the powers of this full twist braid; these
correspond to n k and torus braids and you just take the sum of them all over k where k goes from 0 to infinity and so this is an algebra because if I tensor f t to the k if I multiply f t to the k and f t to the l I get f t to the k plus l and so by general nonsense you have a map multiplication map if I have a home from r to this guy and if I have a home from r to this guy we can multiply tensor these homes and get a home from r to f t to the k plus l and so this gives a graded algebra structure on this direct sum so in addition to all this like qt and a gradings we have this extra gradient by the power of the full twist and by lecture two or three the homology of the kth power of the full twist the homology of this guy is precisely j to the k mod yj to the k so we know this answer and which I discussed 10 minutes ago that you can prove conjecture in this particular case and not only you prove this conjecture you can and j is again this ideal of the diagonal in c to the n but this by the virtue of the proof this agrees with multiplication so you have this multiplication on the left hand side where with abstract homes from r to f t to the k and you have multiplication on the right hand side because you have powers of the same ideal and this means that this isomorphism agrees with tensor structure with multiplication so maybe let me this isomorphism agrees with multiplication and so not only we know this thing as a vector space or as an x-module or x-y-module we really know this thing as a graded algebra and given an arbitrary braid we can construct this complex of certain by modules whatever but we have naturally a module over this graded algebra because we can take home from r to t beta times f2 to the k so this home is a module over this algebra because again I can add an additional f t to del in here and get f t to the k plus l and so this is a gradient module over a graded algebra and so this gives a coherent shift on proj of a and proj of that algebra as I said this is exactly this xn of c to c so that's why I talked about the space xn because you can write it as a proj of some very explicit algebra and any graded module over the algebra gives you a shift on this xn and so in some sense we're done and again the real question is to lift it from this kind of naive computation to the level of dg or derived categories and this is still not done but I think like this is one way to go and so what we expect and we write very clearly what should be true that we have some kind of dg funter from the formative category of serving memodials to derived category of hybrid scheme of c to c and it has a lot of interesting properties which we list quite carefully in that paper okay and so maybe I have a couple of minutes before saying thank you I want to comment again on Jail's question so I didn't really explain where was Jail's question here so I didn't really explain what is the relation here and what it has to do with character writers but here we have a single curve like again how you would expect to have it here and so the idea is to look at the same graded algebra and so the same graded algebra so maybe as a note put it later I don't know let's find input here so a note and this graded algebra A below size that the projave is xn can be seen and can be identified in the offense story in the Coulomb branch story and I think Jail briefly mentioned this last time that you I mean you can build your Coulomb branch as a spec of some complicated algebra but you can also build a resolution of Coulomb 
branch in nice cases as projave of some algebra and this is exactly this algebra except that we don't want to get the hybrid scheme we want to get xn but this is kind of minor difficulty which can be overcome or like we think we know how to overcome and so one concrete thing about the graded algebras and graded modules we should follow up from this is that you take some gamma in gln I think I called it y last time whatever so we kind of consider the spring of fiber for gamma we can consider the spring of fiber for gamma times x the spring of fiber for gamma times x clear and so on and then we take the direct sum over k is equal to zero to infinity of homologous of sp gamma times x to the k and again this is supposed this is a graded module over the graded algebra and so this would be a shift in approach and one can ask lots of questions like how to at least to construct the action and how to quantize this and how to quantize this graded algebra to some kind of z algebra that other people started but there is some work on this direction and mostly done by Webster and others and also there was work in progress by myself Oscar in the block and so in some way I would say one of the key ideas is to identify this graded algebra and to prove that you always have a graded mollusk structure over this graded algebra and we see it here in this column branch story and again to answer Joel's question it would be great to see this algebra on this side as well I can some have some kind of multiplicative structure for starters to have to say that you have a great variety for beta you have a great variety for beta prime how to get the to the great variety of beta-beta prime and that's already not completely all of this to me but maybe that's my ignorance anyway but if all this is done properly that here from a curve you not only consider the curve you consider kind of the curve with additional full twist and this corresponds to the great modulo graded algebra and so coherency here so I think this is kind of more or less directly from here to here in nice cases which is still yet to be completely worked up and I'm over time so let me say last two things so first of all thank you very much everyone for coming to these lectures I think it was great and there were lots of great questions great thanks for organizers of the school I think it's awesome and I personally learned a lot from Richard and Joe's lectures and I hope to learn more from other lectures and as an advertisement if you want to learn more about the encomology and some connections to topology to algebra join three minutes or x algebraic geometry in particular I'd like to advertise aim research community which you can find by this link so this is sponsored by American Institute of Mathematics in San Jose and this summer we have a lot of reading groups and other activities specifically for grad students so if you're grad students or postdocs who want to learn more what is the link homology why do we care about this what are the braids right is what are other structures there are five or six reading groups specifically aimed at this problem like how to compute the encomology there is another group so if you're interested in this please register here or write me an email and I'll give you all the contents so thanks a lot and see you all later thank you very much Richard any questions online offline so that in the p-code w conjecture for curves you have three spaces you have this doble modular space space modular space dole modular space but here it 
seems different from those three spaces: here you have just two of them, I think. So you have the Hilb of C^2... No, Hilb(C^2) has nothing to do with, like, the Betti or de Rham spaces; no, no, this is a completely different story. So the braid variety and the Hilbert scheme of a single curve, and I always forget which two of the three spaces that you mentioned they are, but somehow the relation to the Hilbert scheme of C^2 is very, very different. For character varieties and for P = W, I think the closest analog is the work of Hausel, Letellier, Rodriguez-Villegas and others. They say that you can compute, say, weight polynomials of character varieties with the weight filtration, or of the Hitchin moduli space with this perverse filtration, with some extra data, and all of these are expressed by complicated formulas involving Macdonald polynomials. Somehow the Hilbert scheme of points keeps track of that, and it says you can interpret all of this via sheaves on the Hilbert scheme of points; Procesi bundles also appear in this work of Hausel and collaborators. So if you have seen that, it is the closest analog, but it is not part of this nonabelian Hodge story; the relation to the Hilbert scheme of C^2 is very different, and physicists, I think, say that this is some kind of geometric engineering, whatever that is. I personally don't know what geometric engineering means, but that's what they say. I want to emphasize that this is a very, very different thing to do: there you are talking about related varieties, and here you suddenly jump from studying the homology of a variety to the homology of a sheaf on some completely different variety. Okay, any other questions? To continue on the same question: here, I don't think you have a space in the middle, I guess like the de Rham moduli space; so is there an analog of that, something in between the braid varieties and this? That's a very good question; I don't know. Any other questions? So I just put the link to this AIM program in the chat, if you're interested. All right, so let's thank you again
|
Khovanov and Rozansky defined a link homology theory which categorifies the HOMFLY-PT polynomial. This homology is relatively easy to define, but notoriously hard to compute. I will discuss recent breakthroughs in understanding and computing Khovanov-Rozansky homology, focusing on connections to the algebraic geometry of Hilbert schemes of points, affine Springer fibers and braid varieties.
|
10.5446/55041 (DOI)
|
Fourth and final talk and finally I should bring the ComaLogical Hall algebra in the title into the game. And so that's the last opportunity. Okay, so where are we in our logic? So we started with the slogan, we just want to count quiver representations. And as a device for counting we defined this motivic ring or this localized rotary ring of varieties. And so this led us to writing down this zeta function like or partition function like motivic generating function in the motivic quantum space of the quiver. And then we have seen three different ways of factoring. Just write factorization 1, factorization 2 and factorization 3. Three different ways of factoring aq in a purely formal sense. So this was kind of a logarithmic q derivative and it led to motivic generating function of Hilbert schemes of associated to this quiver. The factorization 2 that was the wall crossing formula and it led us to look at these modular spaces of semi-stable representations and their motifs. And then finally the third factorization brought us to dt invariance and to the intersection columnology of these modular spaces. So just from this purely formal factoring this motivic generating series lots of interesting geometry popped up and the slogan for today is we want to categorify everything. So categorify aq and the various factorizations. And what does it mean to categorify this thing? So this means find an algebra. So really an interesting algebraic object, in this case a graded associative algebra whose Poincare series is precisely this motivic generating series. So find an algebra appropriately graded whose Poincare series is aq. So and in the first three talks I really wanted to go into all the details of these shift factors, all these ugly shifts by minus square root of the left sheds motifs. This was really important for me because it is important in all this theory but just for time reasons I will not manage to do all the details today. So at some point I just, so for example in the verification that this algebra which I will now write down has this property that is Poincare series is this. I have to skip a few details and work a bit suggestively because otherwise we are just doing 20 minute calculation which at the end is almost trivial. Okay, so and this algebra is the so-called homological Hall algebra or a candidate for this, the only known candidate in general. This is the cohomological Hall algebra of q. So that's the algebra I will now introduce as precise as possible and then try to give you a bit of a feeling of the features of this and so what we do is the following. First recall, well the key to all the geometry of quiver representations, let me write this down here, the key to all geometry of quiver representations was this idea that quiver representations are a fixed dimension are correspond to the points of some affine space on which you have a base change group and the stack of quiver representations, the stack of isoclasts of quiver representations is just the quotient stack of this affine space by this group. So this will be central again. So recall this group action and for simplicity since I will write down many, many of these spaces Rd, let me now omit the q because I will work over a fixed quiver. So I should really write Rd of q but let me omit it in the notation. All right so here's the candidate for the cohomological Hall algebra which we will just write as curly H of q. 
Take the direct sum over all dimension vectors; like in the motivic generating function, where we took the sum over all dimension vectors of the motive of the quotient stack, here we take the equivariant cohomology of the representation space, say with rational coefficients. So: the direct sum of all these equivariant cohomologies. This is the underlying space, and we will see that it has more or less nothing to do with the quiver; the place where the quiver enters the game is only in the multiplication, which we will now define. So define the multiplication on H(Q) by, and let me first state it as a one line slogan and then give the exact definition, and the slogan is: by convolution along the stack of short exact sequences. Short exact sequences define a kind of Hecke correspondence: from a short exact sequence you can either project to the middle term or to the two outer terms. The two outer terms are representations of dimension vectors d and e, say, and the middle term is a representation of dimension vector d plus e, and if you convolve along this Hecke correspondence, then you get the multiplication. But let us make this precise, because ultimately I want to convince you that all we are doing here is basic linear algebra with a lot of conceptual superstructure; at the end it is just linear algebra, and the same is true in the definition of this convolution product. So let me write down the following diagram of spaces and groups. We want to take an equivariant cohomology class here and one there, and produce an equivariant cohomology class on R_{d+e}. Inside R_{d+e} we take the closed subset of, and this is a symbolic notation, upper triangular block matrices. What does this mean? Well, R_{d+e} consists of tuples of linear maps from a (d_i + e_i)-dimensional space to a (d_j + e_j)-dimensional space. Since we have this decomposition of the dimensions anyway, we can split these matrices into two by two block matrices, and we just look at the closed subset of two by two block matrices whose lower left block is zero. That gives a natural inclusion, and we also have a natural projection map: forget the extra datum and keep only the diagonal. What you then get is a representation of dimension vector d and a representation of dimension vector e. That is basically the Hecke correspondence which we will use to define the convolution product, but we have to decorate everything with group actions. Do we need a name for this closed subset? Yes, we need a name, sorry, let us call it Z_{d,e}. On R_d times R_e we have a natural action of G_d times G_e; on R_{d+e} we have the natural base change action of G_{d+e}; and now we also need a group which mediates between these two groups, and what we take is the natural parabolic subgroup P_{d,e}. This is also a group of upper triangular block base change matrices; it is a maximal parabolic, or a product of maximal parabolics. From this parabolic group you can project to the groups on the diagonal, this is the Levi quotient, and it also embeds into G_{d+e}, and it acts on Z_{d,e}, because upper triangular block matrices act on upper triangular block matrices.
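For readers following along without the blackboard, the correspondence just described can be drawn as follows; this is my transcription of the diagram, with Z_{d,e} the space of block upper triangular representations.

\[
\begin{array}{ccccc}
  R_d \times R_e & \xleftarrow{\;p\;} & Z_{d,e} & \xrightarrow{\;\iota\;} & R_{d+e} \\[2pt]
  G_d \times G_e & \longleftarrow & P_{d,e} & \longrightarrow & G_{d+e}
\end{array}
\]

Here p forgets the off-diagonal blocks, iota is the closed embedding, and the parabolic P_{d,e} of block upper triangular base changes acts on Z_{d,e}, with Levi quotient G_d times G_e and an embedding into G_{d+e}.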
So that's a simple idea, and it actually gives you a very nice operation in equivariant cohomology. Let's start with the G_d-equivariant cohomology of R_d tensored with the G_e-equivariant cohomology of R_e; we want to end up in the G_{d+e}-equivariant cohomology of R_{d+e}, and that is the geometry we can use for this. First take the Künneth morphism in equivariant cohomology to get to the (G_d x G_e)-equivariant cohomology of R_d x R_e. The map p is a trivial vector bundle, so we can easily pass instead to the (G_d x G_e)-equivariant cohomology of the total space Z_{d,e} of this vector bundle. Then we can change the group: here we are just taking the quotient by a unipotent group, and equivariant cohomology is insensitive to unipotent groups, so we can identify this with the P_{d,e}-equivariant cohomology of Z_{d,e}. Then there is a general induction isomorphism in equivariant cohomology which allows us to replace this by the G_{d+e}-equivariant cohomology of the associated fiber bundle G_{d+e} x_{P_{d,e}} Z_{d,e}. And from there you have a canonical multiplication morphism to the whole thing, induced by this embedding. You take the composition of all these quite natural maps in equivariant cohomology, and that's it; every single step is a very natural operation in equivariant cohomology, no mysteries there. Then you have to verify associativity of this product. That's lots of fun, you have to write down huge diagrams, but nothing really spectacular happens. Associativity mainly comes from the fact that you can view the space of three by three block matrices in two different ways: either as a two by two block extended by a one by one block, or as a one by one block extended by a two by two block. That's associativity, basically. So, wonderful. Fact: this map, which I will now call the multiplication *, defines a unital, associative, but in general noncommutative, N^{Q_0}-graded algebra structure on the cohomological Hall algebra of Q. All right. Well, even without the reference to categorification of motivic generating invariants this looks like a fun object to study; it is one of these convolution type algebras which you see regularly in geometric representation theory, and it definitely deserves study. But of course we don't want to study it only because it looks natural, but because it really should categorify A_Q, so I have to convince you that its Poincaré series is, well, at least closely related to A_Q. All right, so now let me do this. What is the cohomological Hall algebra as a graded, in fact, as we will see in a minute, bigraded vector space? For this we have to study this equivariant cohomology, and now comes a huge surprise, or better, a disappointment. The group acts on this vector space linearly; that's the linear conjugation action which we have seen many times. As a topological space this vector space is contractible, you can easily contract it to a point, namely to zero, and since the action of G_d is linear you can compatibly contract R_d as a G_d-variety to zero with the trivial G_d-action. So just by contractibility you see that this is canonically isomorphic to the G_d-equivariant cohomology of a point. And this is really a big disappointment, since the structure of the quiver is not reflected at all in this: all the arrows of the quiver are gone, and the only thing we remember is how large the group is, that is, how many vertices the quiver has. But yeah, let's do it anyway.
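For reference, the composite just described can be written out in one chain; this is a schematic transcription of the construction as stated above, not an additional claim.

\[
H^{\bullet}_{G_d}(R_d)\otimes H^{\bullet}_{G_e}(R_e)
 \;\longrightarrow\; H^{\bullet}_{G_d\times G_e}(R_d\times R_e)
 \;\xrightarrow{\;p^{*}\;}\; H^{\bullet}_{G_d\times G_e}(Z_{d,e})
 \;\cong\; H^{\bullet}_{P_{d,e}}(Z_{d,e})
\]
\[
 \;\cong\; H^{\bullet}_{G_{d+e}}\!\big(G_{d+e}\times_{P_{d,e}}Z_{d,e}\big)
 \;\longrightarrow\; H^{\bullet}_{G_{d+e}}(R_{d+e}),
\]

where the first arrow is the Künneth map, the last arrow is induced by the embedding into R_{d+e} (a pushforward, which is where the degree shift discussed below comes from).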
The equivariant cohomology of the point is, by the definition of equivariant cohomology, the same as the ordinary cohomology of the classifying space. G_d is a product of general linear groups, and you know what their classifying spaces look like: they are infinite Grassmannians. So you can really compute this cohomology, and it is nothing else than a tensor product over the vertices of the quiver, where for every single vertex i you get the GL_{d_i}-equivariant cohomology of a point, and that is the ring of symmetric polynomials in generators x_{i,1} up to x_{i,d_i}. So you have generators indexed by two indices, one index for the vertex of the quiver and one index running from one to d_i, and you take symmetric polynomials in these. Concerning the cohomological degree, each x_{i,k} sits in degree two, and if you pass to the elementary symmetric functions in these, the k-th elementary symmetric function sits in degree 2k; so for each vertex you get a polynomial ring with generators in degrees 2 up to 2 d_i. And then the Poincaré series of H(Q) in this realization is, a priori, a sum over all dimension vectors and a product over all vertices, and for each vertex the factor is 1 over (1 minus q squared) times ... times (1 minus q to the 2 d_i). Recall how we wrote the motivic generating function: a sum over all d of (minus L to the one half) to the power minus the Euler form of d with d, divided by a modified Pochhammer symbol, times t to the d. Just one quick question from a participant: can you briefly recall what the Poincaré series of a graded algebra is? Okay, yes, thanks; and there is a missing t to the d. So the Poincaré series I define as the sum over all d and over all k of the dimension of the k-th cohomological graded piece of the degree d part, times q to the k, t to the d; so it is the generating function for the dimensions of the homogeneous parts. And here, a priori, we have two gradings, the grading by the dimension vector and the cohomological grading, which we will have to modify in a minute, but for the moment let's just take it like this. Another question: does the Jackson integral or Mellin-Barnes presentation of A_Q show up naturally in this context? If only I knew what this is; could the person asking this question please send an email to me with some details, I'd be happy to have a look. No idea. Okay, so that is the Poincaré series of that algebra. A priori we have the dimension vector grading and the cohomological grading, and that is what we get from this elementary calculation. We see a little bit of the features of the motivic generating function, at least the denominator looks similar; if you replace q by L inverse, then that's fine. But the numerator, the Euler form, is not yet incorporated. Of course it isn't, because this guy, as a vector space, does not depend on the arrow structure of the quiver at all. So what is going wrong? Well, the arrow structure is encoded in the multiplication. Namely, what I haven't told you in writing down the multiplication is, well, you see these dots here; this was really hiding a problem. This multiplication is not compatible with the cohomological degree. If you start in cohomological degrees k and l, you don't end up in degree k plus l; so let me add something to the diagram: if you start in degree k here and degree l there, then you end up in cohomological degree k plus l minus the Euler form of the dimension vectors d and e.
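As a reader's aid, here are the two series being compared, written out the way I understand the speaker's conventions; the exact normalization of the "modified Pochhammer symbol" in the second formula is my reconstruction from the earlier lectures and should be checked against them.

\[
P\big(\mathcal{H}(Q)\big)(q,t)\;=\;\sum_{d\in\mathbb{N}^{Q_0}}
   \prod_{i\in Q_0}\frac{1}{(1-q^{2})(1-q^{4})\cdots(1-q^{2d_i})}\; t^{d},
\]
\[
A_Q(t)\;=\;\sum_{d\in\mathbb{N}^{Q_0}}
   \frac{\big(-\mathbb{L}^{1/2}\big)^{-\langle d,d\rangle}}
        {\prod_{i\in Q_0}\big(1-\mathbb{L}^{-1}\big)\cdots\big(1-\mathbb{L}^{-d_i}\big)}\; t^{d}.
\]

Under the dictionary q^2 corresponding to L^{-1} the denominators match, and the numerator involving the Euler form is exactly what the regrading discussed next has to account for.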
Ah, okay, so there is a hint that the multiplication of the quiver really plays a role. But this means in particular that the cohomological Hall algebra is a priori only a graded algebra for the grading by dimension vector, and not for the cohomological degree, because the cohomological degree is messed up. There is one case where you can fix this: if the Euler form is symmetric, globally symmetric. Then this shift by minus the Euler form of d and e can be rewritten as minus the Euler form of e and d, and you can re-grade the algebra: we renormalize the cohomological grading by a term of one half the Euler form of d with d, and then suddenly this multiplication becomes compatible with the cohomological grading. You add the corresponding multiple of the Euler form of d with d to the cohomological grading here, and of e with e there, and on the target you arrive at the corresponding multiple of the Euler form of d plus e with d plus e, because one half of d with d plus one half of e with e plus the cross term of d with e is exactly one half of d plus e with d plus e. Okay, that was the calculation which I told you at the beginning I would not show you, so I showed it anyway. Can I ask about the shift in degree: which one of the maps in the composition produces the shift? Okay, the problem is here: you are taking the embedding of Z_{d,e} into R_{d+e}, and if you push forward you get a degree shift, and you also get another degree shift from the change of the group here. But this is a calculation I really do not want to show now, sorry. Yeah, okay, so if the Euler form is symmetric you can renormalize the cohomological grading by this, and then the whole cohomological Hall algebra is actually N^{Q_0} times Z graded; you get another, hidden grading, and then I guess you believe me that the relation to the motivic generating function becomes much stronger, because, well, it is precisely this term of one half the Euler form of d with d. Sorry, is it Z or Z over two? Because if you put one half of the Euler form of d with d, it might be half-integral. Yes, one half Z, you are right, I should say it better: if I have an odd number of loops at a vertex, then I have to allow half-integers. Thank you. One half Z. Okay, and if you do this little renormalization of the cohomological grading, then you certainly believe that in the numerator this additional term pops up, the one involving one half of the Euler form of d with d, and that is roughly the reason why the cohomological Hall algebra categorifies A of Q. Okay, so we have categorified A of Q. The Euler form is almost never symmetric, though? Actually yes, that's right, this only holds for special quivers, but those are interesting enough. Is this some sort of Calabi-Yau condition? No, no; that might be a simple answer, but no, I don't think so. Okay. Well, globally, for this Euler form to be symmetric you need the quiver to be symmetric, that is, the number of arrows from i to j is the same as from j to i. Globally this might be very special, but we have already seen this condition locally, namely that the restriction to a certain slope has this property, and this will of course resurface now. Okay, I should briefly mention something, but I will not write down the formula because we will not use it. The fun with the cohomological Hall algebra is that in working with it you can argue geometrically, really computing the cohomology of, say, certain strata in these R_d, but you can also work algebraically, because there is an algebraic description of it.
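The identity behind the regrading argument just sketched is the following one-line computation, valid exactly when the Euler form is symmetric:

\[
\tfrac12\langle d+e,\,d+e\rangle
  \;=\;\tfrac12\langle d,d\rangle+\tfrac12\langle e,e\rangle+\langle d,e\rangle .
\]

So shifting the cohomological degree on the dimension-d part by a multiple of one half of the Euler form of d with itself (the sign depending on the convention for the product shift) restores additivity of degrees under the product, and the regraded CoHA becomes graded by N^{Q_0} times one half Z.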
So using a covariant localization you get an explicit algebraic description as a shuffle algebra with kernel, explicit description of the multiplication as shuffle product with kernel. I don't want to write down the formula now because it is really long and I have to explain all the terms but the idea is somehow quite natural. So as a vector space we have identified the co-har with well tuples of symmetric polynomials. Now what is the multiplication in terms of these symmetric polynomials and one can really make this explicit using standard techniques, a covariant localization. You really take these tuples of symmetric polynomials you perform something called a shuffle product but in the shuffle product there is an additional term the so called kernel which reflects again the arrow structure of the quiver. The formula is long and we don't really need it in the following because we will stick to our geometric intuition but the fun in co-har is combining the geometric approach and this purely algebraic approach. So sometimes you just do a few pages of algebraic calculations for shuffle products. So we accomplished the first thing for today namely we categorified aq we realized it as a Poincare series of an algebra which is naturally associated to the quiver. It reflects the geometry of the quiver, the geometry of this variety of representations and it reflects the category behind the quivers. It reflects the category because what we are doing is we are somehow involving all short exact sequences in the multiplication. So the whole category structure is somewhat reflected. So categorify aq and now the question is what are then the categorification of these various factorization identities which we found in the last three days and this is what I will now show. So we got three factorizations and each of these factorizations has admits an algebraic analog for the homological Hall algebra and I want to formulate this and get to the point where you see that these algebraic facts which I will write down are really categorifications of these factorization identities. To do this we need one more ingredient. So if mu is again final ingredient. So if you not only have the quiver but also a stability on the quiver in the form of such a slope function as before then you can define a local version of the Kohe hqx semi-stable so for any real slope x and the slope local version is well we had a slope local version of the motoric generating function so you can almost guess the definition. This is defined as direct sum of aqvc homology but now the direct sum is only over those dimension vectors which are of slope x and this time I am ignoring the problem with d equal 0 you can guess what it is and then we don't take the homology of the whole variety but just of the open part of semi-stable points. So in complete analogy with this slope local motoric generating function you have the slope local homological whole algebra. Set of semi-stable points is gd-equivariant that's why this is well defined and concerning the convolution along short exact sequences if you have semi-stable or semi-stable representations of a fixed slope form an abelian subcategory that's a completely general statement. In particular if you have two semi-stables of the same slope take an arbitrary extension this is again a semi-stable of the same slope so that means our convolution product along short exact sequences preserves semi-stability and we really get this local algebra. 
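For concreteness, the slope-local algebra just introduced can be written as follows; this is only a restatement of the definition given above, in symbols.

\[
\mathcal{H}^{\,x\text{-sst}}(Q)\;=\;\bigoplus_{d:\ \mu(d)=x} H^{\bullet}_{G_d}\big(R_d^{\,\mathrm{sst}}\big),
\]

with the same convolution product as before; this is well defined because the semistable locus is G_d-invariant and semistable representations of a fixed slope form an abelian subcategory closed under extensions, so the product preserves semistability.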
This guy is much more serious than the global coha the global coha admits a very simple algebraic description because the space whose aqvvc homology we take is just a vector space here it is just some strange Siriski open subset. So this guy you can not treat algebraically you can only treat it geometrically. Alright so let me write down categorified factorization one. So we have to remind ourselves what was this factorization one I took this motifs generating function aq with a slightly shifted variable t and divided it by another shift of the variable t and then what we got was the motifs generating series of Herbert schemes. The algebraic analog of this is if you take the direct sum over all d of all the non-acquivariant usual homology of these Herbert schemes then the homological whole algebra acts on this. So this is a module for the homological whole algebra for any n and in fact one which is even cyclic and if you take the limit for large n then you get the whole coa in a sense. Okay let me try to make this precise so let's take this thing here and now let's change n make it bigger and bigger. Well this is a module which is cyclic so you always have a projection from hq to this. Usually you have an action of hq but since it is a cyclic module you have a projection. Okay in fact there are even natural connecting maps between these so we can take the limit over all n now I have to be careful it's the inverse limit and in the inverse limit we have an isomorphism. So that means the kernels of these projections get smaller and smaller at the end. So we can approximate the homological whole algebra by all its modules and these modules are given by the homology of Hilbert schemes and that is the categorification of the first factorization formula. That's quite satisfying right we have an algebra and the Hilbert schemes give us natural modules. Okay factorization 2 is something you can almost guess from the definitions. Factorization 2 was the wall crossing formula that the motivic generating function aq is an ordered product of all the local aqx and so categorified factorization 2 is the coha analog of the wall crossing formula and it just tells us that h of q is isomorphic to an ordered tensor product over the reals of these local guys these local cohas unfortunately this is not an isomorphism of algebras but only of biograded vector of graded vector spaces q vector spaces okay but the map is somehow quite natural so that's the way you categorify wall crossing yeah and now well you can imagine if you take Pronkary series of both sides in the right ring then you get precisely Pronkary series of the left is ordered product of the Pronkary series of these guys and since these Pronkary series are the motivic generating functions this reproves the wall crossing formula or this is the categorification okay maybe we will have time for a concrete example where we will actually see this. Can you explain what the map is? 
Yes, well, the map again comes from the fact that R_d is stratified into Harder-Narasimhan strata, how did I call them, R_d^{HN}; so you have the stratification into these locally closed Harder-Narasimhan strata, and now you have to look very closely at what this induces in equivariant cohomology. Actually, what we do is first pass to equivariant Chow groups and show that the equivariant cycle map is an isomorphism; then we work with Chow groups, because, well, some things are just nicer there, you don't have the long exact sequences of cohomology, and there you can see easily that this decomposes the equivariant Chow group into the equivariant Chow groups of these local pieces. But it is essentially the same trick, just the Harder-Narasimhan stratification. Does it also preserve the degree in the shifted grading of the cohomology? Yes, yes: if you start with a symmetric quiver and do the corresponding shift here and there, then this modified cohomological grading is preserved. Categorified factorization three: now that was about the Donaldson-Thomas invariants. What we did was take the local version of the motivic generating function and write it as a plethystic exponential; the coefficients popping up there we called the Donaldson-Thomas invariants, and then we saw that they indeed have geometric meaning. This also works here, and you can already guess what the assumption is: assume again that the Euler form restricted to one slope is symmetric. Then, as I explained globally (I only explained this for the global CoHA, but you can do the same locally), we can shift the cohomological degree: if the form is globally symmetric we can renormalize the cohomological grading on the whole CoHA, and under this local condition we can renormalize the grading locally, for the local Hall algebra. So let us do this: renormalize the cohomological grading on the slope-local Hall algebra. Then we want to categorify this notion of factoring into a plethystic exponential, and here is a key calculation which one should do at some point: if you apply Exp on the level of Poincaré series, then what you are doing on the level of algebras is taking the symmetric algebra; Exp is just taking the symmetric algebra. Well, if you have ever calculated Poincaré series of symmetric algebras, you get a nice product factorization, and these product factorizations are exactly what is encoded in Exp. So in this case we somewhat expect as categorification the statement that this local algebra is related to a symmetric algebra. Well, this is not literally true, but you can filter this algebra, and the associated graded is then a symmetric algebra. Theorem: there exists a filtration F on the slope-local cohomological Hall algebra such that the associated graded is isomorphic to a symmetric algebra of some bigraded space DT_{*,*} tensored with a free variable z in degree (0, 2). Okay, let us try to digest this and compare it with the A_Q side; that was the factorization for A_Q. Taking the associated graded with respect to a nice filtration does not change the Poincaré series at all; the Poincaré series does not know whether you took the associated graded or not, but it has to be there. And then you get a symmetric algebra, because on the level of Poincaré series taking the symmetric algebra is the same as taking this plethystic exponential. But you have to be careful: we are working with even and odd degrees, so you have to understand this as the graded symmetric algebra, in the super sense.
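For orientation, here are the two categorified factorization statements just discussed, written out schematically; this is my transcription of the statements from the lecture, with the precise hypotheses as stated there.

\[
\mathcal{H}(Q)\;\cong\;\bigotimes_{x\in\mathbb{R}}^{\longrightarrow}\ \mathcal{H}^{\,x\text{-sst}}(Q)
\quad\text{(an isomorphism of graded vector spaces only),}
\]
\[
\operatorname{gr}_F \mathcal{H}^{\,x\text{-sst}}(Q)\;\cong\;
  \operatorname{Sym}\big(\,\mathrm{DT}_{\bullet,\bullet}\otimes\mathbb{Q}[z]\,\big),
\]

where in the second line the Euler form is assumed symmetric on the slope x, F is the filtration referred to below as the perverse filtration, z is a free variable of bidegree (0,2), and Sym is taken in the graded (super) sense.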
The symmetric algebra over even variables is a polynomial ring; the symmetric algebra over odd variables is an exterior algebra. We will see this in the examples: exterior algebras really appear. This free variable corresponds to this obligatory term here, which we have seen geometrically, coming from the virtual motive of a trivially acting multiplicative group of the complex numbers; so it appears here as a free variable. And this bigraded DT space, well, that is the space whose Poincaré series is this; going through it term by term, you really see that this fits perfectly with what we wrote down yesterday. There is a question, yes please: is this semistable CoHA a sort of root subalgebra? Root subalgebra, yes, okay, so I will give you in a minute the example of the Kronecker quiver, and then you can really see what happens. In general the problem is that you have to find a stability which is well adapted to the roots of the corresponding root system; this does not always work out, we already discussed this two days ago, but in the case of the Kronecker quiver, for example, it really fits nicely, and we will see this. Okay, the problem is this: nobody knows what this filtration is. This is what Ben Davison and collaborators call the perverse filtration, so it is really deeply buried in algebraic geometry, in the weight structure of cohomology, and, well, at least I cannot compute it in any single case where it is nontrivial. There is one case where this grading really appears and plays an important role, and in that case I can compute the algebra algebraically, with this approach via equivariant localization and shuffle products, but I cannot really work with the perverse filtration. So this theorem is really nice and really general, but with this theorem it is difficult to work out examples, and this brings us, finally, to examples. We will of course also categorify our standard examples: our standard examples were always the trivial quiver and the one-loop quiver, and then, with respect to the DT invariants, we saw yesterday two-vertex quivers. So what we will do is the trivial quiver, the one-loop quiver, and the Kronecker quiver, and in the Kronecker case we will see the full power of these factorizations two and three. Of course it would be great to do this in all details, but no, I am not suggesting that we do this in the question and answer session, because all these shuffle product calculations, I don't know if I can do them out of my head. Okay, we will see: in this case, for the trivial quiver, the CoHA is the symmetric algebra, without any associated graded, it just is the symmetric algebra, over a one-dimensional DT space. That is something we already know: for the trivial quiver the DT invariant sits in degree one only and equals one, so this DT space should be one-dimensional. So we have just a one-dimensional space, that's the DT part, and we have this free variable, so we have a polynomial ring; forget the ring structure, just polynomials with the usual countable basis, and then you take the symmetric algebra over it. But the symmetric algebra has to be taken in the graded sense, and in fact this generator is placed in odd degree, namely in degree (1,1): one for the dimension vector and one for the shifted cohomological degree. So the symmetric algebra is an exterior algebra, and the CoHA of the trivial quiver is an exterior algebra in countably many generators.
Okay, for the one-loop quiver: we have also seen that the DT invariant lives in only one degree, and it was minus L to the one half, so again only one-dimensional. So we have again just this, but now in degree (1,0), even degree in the shifted grading, and so what you get is really a symmetric algebra, a symmetric algebra in countably many variables. And seeing this, the countably generated exterior algebra and the countably generated symmetric algebra appearing, of course cries out for a boson-fermion correspondence, as we have already discussed on Monday. Unfortunately, that is the only example of such a duality: the duality is when you turn the number of arrows in the quiver into its negative and shift on the diagonal, and the only case where this is really related to quivers is this duality between the trivial quiver and the one-loop quiver, and that's it; unfortunately this duality does not generalize to other classes of quivers. Okay, and the final example is the Kronecker quiver. Again we have to choose the stability, and it is more or less arbitrary; let me take d_1 minus d_2 divided by d_1 plus d_2. We have seen the root system yesterday: we take the A_1 tilde root system, or rather its positive part. The real roots are of the form (1,0), (2,1), (3,2) and so on, and then you see the slopes are plus or minus one divided by an odd number: 1, 1/3, 1/5, and minus 1/3, minus 1/5, minus 1/7, and so on. So you get slope-local Hall algebras only for these slopes x, plus or minus one divided by an odd number, together with slope zero; these are the only slopes which appear. To make it precise: if x is a real number which is not in this set of slopes, then the local CoHA is the trivial algebra. The local Hall algebras for plus or minus one divided by something odd, corresponding to the real roots, look like the CoHA of the trivial quiver; that is what always happens for real roots, they look like the trivial quiver, so an exterior algebra in countably many generators. And finally we have the slope zero part, for the imaginary root, and that algebra is really complicated; it somehow feels Yangian. It is actually generated by two series of infinitely many generators, with terribly many quadratic relations which are not really illuminating if I write them down, but it really looks like a very degenerate form of a Yangian. So that is where really interesting cohomological Hall algebras appear, but in general these local Hall algebras, when they are interesting, are almost impossible to calculate.
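The three examples just described can be condensed into formulas; the generator names below are mine, introduced only for readability, and the degrees are the ones stated in the lecture.

\[
\mathcal{H}(\text{trivial quiver})\;\cong\;\Lambda\big(\psi_1,\psi_2,\dots\big),
\qquad
\mathcal{H}(\text{one-loop quiver})\;\cong\;\mathbb{Q}\big[\phi_1,\phi_2,\dots\big],
\]
\[
\text{Kronecker quiver, }\ \mu(d)=\tfrac{d_1-d_2}{d_1+d_2}:\qquad
\mathcal{H}^{\,x\text{-sst}}\neq\mathbb{Q}\ \text{ only for }\
x\in\Big\{\pm\tfrac{1}{2k+1}\,:\,k\ge 0\Big\}\cup\{0\},
\]

with the real-root slopes giving exterior algebras as for the trivial quiver, and the slope-zero (imaginary root) piece being the complicated, Yangian-like algebra mentioned above.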
Okay, so that is all I can say about these CoHAs, and that's it; are there any questions? So I am wondering, what about these Hall algebras with finite field coefficients? No, the Hall algebras one usually defines there, the Ringel-Hall algebras, are defined in a completely different theory. Ringel-Hall algebras are defined by a similar, or let me just say by the same, convolution product, but in very different theories: if we stick to the complex numbers, then what you would take is the space of constructible G_d-invariant functions on R_d, and in the finite field case you would just work with arbitrary G_d-invariant functions. So this is really functions, not polynomial functions; I am not talking about invariant polynomial functions, just invariant constructible functions. So over F_q you would just take Q-valued functions; sorry, Q-valued is enough, Q-valued, and also here Q-valued, just arbitrary G_d(F_q)-invariant functions on R_d(F_q). And I don't know of any reasonable, well-behaved map from, say, equivariant cohomology to invariant constructible functions which induces a map on the Hall algebras. On a deeper level, a posteriori, after many, many computations, you suddenly see that certain algebras can be realized both as Hall algebras of functions and as cohomological Hall algebras, but the experts for this are here in the room, and it's not me; at a very deep level yes, but there is nothing obvious a priori. The two-dimensional CoHAs? Oh yes, okay, the two-dimensional case, if you start with, say, preprojective algebras, yes, yes. So, when I showed you the ideas behind the proofs of these factorization formulas, I always tried to do this as geometrically as possible, but in fact it is also possible to prove these factorization identities algebraically, in this usual Ringel-Hall algebra. To briefly indicate what you do: let me work with finite fields, so I take the direct sum over all d, and here I am taking the G_d-invariant functions on this over finite fields, and then there is a map, an integration map, from this Hall algebra to our motivic ring. Well, this is not really true, we have to complete here, so let me take not the direct sum but the direct product. There is a natural map from there to there: essentially you map a function on R_d to, let me not formulate it, sorry, let me just say that there exists a Q-algebra homomorphism. And that means you can also formulate identities in the Ringel-Hall algebra, find these identities somehow out of the representation theory of the quivers, and integrate them to get identities here. So: representation theory gives you identities on the left hand side, which is the Ringel-Hall algebra, and you integrate them to certain factorization identities in the motivic quantum space. And what do you see on the motivic side? One feature of the specialization is that q is mapped to the Lefschetz motive, which is quite natural, because the number of F_q-rational points of the affine line is q, and this should map to the Lefschetz motive. Okay, just as a vague indication of how one can prove these factorization identities representation-theoretically, which is an aspect I did not want to elaborate on, because I wanted to emphasize the geometry and, at the end, the CoHA categorification. Yes, well, so that is the pragmatic solution; what one should really take here is the motivic Hall algebra. There also exists the motivic Hall algebra, and this would be the correct thing to write down there: you work in this completed setup again, and you take not the Grothendieck group of varieties over C but a relative version, the Grothendieck group of varieties over the variety, or rather over the quotient stack; okay, I should do stacks. All of this is nicely explained in Tom Bridgeland's papers. You take the relative version of the Grothendieck group of stacks over the quotient stack R_d by G_d, and on this level you also have such a convolution product, and from it you have an algebra homomorphism to this, because counting points over finite fields is a motivic measure: the map just sends a motive X to, well, the number of F_q-rational points of X, basically, because counting points has this motivic behavior.
So that is yet another ingredient behind all this. Questions on Zoom: is there a double action of the CoHA on the homology of the Hilbert schemes, a doubling? Okay, no; there are many attempts to double this action, and I know the direction of this question. H(Q) acts on the direct sum of the homologies of the Hilbert schemes, but the fact that this is a cyclic module is a bit disappointing. What you really want is something like a Drinfeld double of this, for which this is maybe even an irreducible representation: the action of H(Q) gives raising operators, raising the degree, but you also want lowering operators; this should be a kind of Fock space. This has been defined somehow in an ad hoc way, in several ways, but there is no easy geometric way to do it, so it is not clear how to do it in general. Another question, which some people have answered, but maybe you also want to answer it: are the shuffle algebras which were mentioned related to those which appear in the description of quantum toroidal algebras? I don't know; I am glad when other people answer. Great; again, this is this two-dimensional thing. Okay, there was a question by Fabian: I wanted to ask, for this integration map you don't need the assumption that the quiver is symmetric or anything like this? Oh no, no, at this point it is completely general. Yes, please: in this theorem about the associated graded being a symmetric algebra, can this be equipped with some Lie bracket? So that looks tempting, right? I mean, the best known example of an algebra which looks like a symmetric algebra up to taking the associated graded with respect to a filtration is the enveloping algebra of a Lie algebra, because then you take the PBW filtration and the associated graded is a symmetric algebra; that's the PBW theorem, and of course it is tempting to look for a Lie algebra structure. I think again the answer is in the 2d setup: for preprojective algebras you can expect this; the analog of the DT invariants there are the Kac polynomials, and you can then expect an interesting Lie algebra. I doubt that there is such an interesting Lie algebra here in the quiver setting; the quiver setting is somehow too degenerate. I guess in some appropriate modification this still holds, and it was proven in full generality by Sven Meinhardt and Ben Davison, but it is even less explicit, and I don't know of any examples which you can calculate in the 3CY setup. Oh yes, sorry, I just wanted to ask: how does H(Q) act on the direct sum of the homologies of the Hilbert schemes? It is again by a convolution: you look at this basic convolution diagram which we had, with R_{d+e}, and, well, this Hilbert scheme we realized as a quotient; we only did this in the exercise session, but we did: you take a certain set of stable points in an extended representation variety, and that was the trick. So all you have to do is extend this convolution product to these extended representation varieties, but you shouldn't extend everywhere: here you take the usual, unextended one, here the extended one, here the extended one, with the corresponding Hecke correspondence, and then you have to think about the fact that everything is compatible with stability, and then this convolution operation gives you the action. And then, to check that this is really a cyclic module, you have to know about the fine structure of this convolution. Thank you
|
We motivate, define and study Donaldson-Thomas invariants and Cohomological Hall algebras associated to quivers, relate them to the geometry of moduli spaces of quiver representations and (in special cases) to Gromov-Witten invariants, and discuss the algebraic structure of Cohomological Hall algebras.
|
10.5446/55042 (DOI)
|
Thank you very much. So due to practical reasons, we have a little delay and my talk will be a bit shorter because I want to definitely stop in time for Richard's talk, which is still schedule for three to four. And so let's directly dive into what we did yesterday. So again, a very short summary. We first introduced this motivic generating series in the motivic quantum space. This was this ring with a slightly twisted multiplication. Logarithmic generating series for quiver q. And then we saw two different kinds of factorizations. Factorization one was if you, well, okay, we also computed examples. If you take logarithmic q derivative, or more precisely logarithmic, any derivative of a q, then you get a generating function for actual modular spaces, namely the Hilbert schemes of representations of the quiver. Log l derivative of a q gives generating series of, I will not make this precise again because we'll take more, look closer at the second factorization. We got a generating series of virtual motifs of Hilbert schemes. And actually in the question and answer session yesterday afternoon, I almost gave the complete proof of this. So then we saw the factorization two. And so today I will mainly discuss factorization three. Question two was the wall crossing formula, and I will repeat the definition. Wall crossing formula, a q admits a factorization as an ordered product over the real in descending order of local series a q x s s t. That's the wall crossing formula. And this local motific generating series, that's defined as the global motific generating series, but just looking at semi-stable representations of a fixed slope. So let me repeat this. A q x s s t is defined as one plus some overall dimension vectors of slope x. And then we take the semi-stable locus inside the representation variety, and we still act with the structure group. We take the quotient of virtual motifs and t to the d. So but this only involves dimension vectors of a fixed slope. That's important. And so that means in the space of all dimension vectors, which is a space of dimension cardinality of q zero, the number of vertices, you have a co-dimension one subspace. Because the slope of d is a fixed real x, that's just one linear condition. So I have a co-dimension one subspace. All right, so and everything here depends on the choice of a stability function or slope function. We had a slope function mu of d defined as theta of d by kappa of d, where theta and kappa are just linear functions, real valued linear functions on the space of all dimension vectors. And there's one question of what the space of stability structures define this term and these terms looks like. This is something I cannot really discuss in a nice way today, but I will keep it in mind for tomorrow. But I'm afraid since I want to concentrate on factorization property three and dt in variance today, I cannot discuss this whole stability space. OK, so that was the second result. And I also almost gave you all the details of the proof, namely this is formally equivalent to the existence of this unique filtration, this harder narrow cement filtration of representations. OK, so that was a summary of yesterday. And now to somehow motivate the dt in variance, let me make one remark. In this factorization one, the surprising thing was that we were doing something very formal with the motivic generating function, namely we were just taking a formal logarithmic derivative. 
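Since this recap quotes several formulas from the earlier lectures, here they are collected in one place, as I have transcribed them; the bracket notation for virtual motives follows the speaker's conventions as quoted in the recap.

\[
\mu(d)\;=\;\frac{\theta(d)}{\kappa(d)},\qquad
A_Q^{\,x\text{-sst}}(t)\;=\;1+\sum_{\substack{d\neq 0\\ \mu(d)=x}}
   \frac{\big[R_d^{\,\mathrm{sst}}\big]_{\mathrm{vir}}}{\big[G_d\big]_{\mathrm{vir}}}\;t^{d},
\qquad
A_Q(t)\;=\;\prod_{x\in\mathbb{R}}^{\longleftarrow} A_Q^{\,x\text{-sst}}(t),
\]

the ordered product being taken over descending slopes, as dictated by the Harder-Narasimhan filtration behind the wall-crossing formula.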
And out of this very formal procedure we got something very concrete and geometric, namely the motives of Hilbert schemes. And the same happens here: these local series also contain geometric information. So here is a fact, and this is the motivation for defining the DT invariants, which we will do in the next ten minutes. The fact is: suppose the dimension vector d is coprime for this slope function, mu-coprime, and I will tell you what that means; it means that for all proper non-zero sub-dimension vectors the slope is different. (The inequality is meant componentwise?) Yes: e_i less than or equal to d_i for all i, but e not equal to d, so componentwise. So for all smaller dimension vectors the slope is different. If you work this out, for example for a two-vertex quiver, then this is really like the coprimality you know from the theory of moduli spaces of vector bundles; the coprime case there means rank and degree are coprime, and it is something similar here. It basically means that, for a generic choice of mu, this coprimality says that the entries of d are coprime, so the gcd of all the entries is one; d is not a proper multiple of anything else. If this coprimality holds, then the following holds: the t-to-the-d coefficient in this local series for the slope equals, more or less, the motive of an actual variety, the motive of an actual moduli space. OK, and now I have to decorate this motive a little bit to make this exact. First of all, we take the moduli space of mu-stable representations of Q of dimension vector d, we take its virtual motive, and we have to decorate this by one of these annoying little factors, one over (L to the minus one-half minus L to the one-half). I will explain what this is. The moduli space here is defined by taking the semistable locus and taking a geometric quotient by the group action. So a geometric quotient; this is now a moduli space, no longer a moduli stack. That's the precise formulation of this fact, and now let me discuss it a little bit. Yes, please? (That is the stable locus?) Stable or semistable; OK, let me put brackets around the "semi" here, because it doesn't matter, and that's the first point I would like to explain. So I defined stability and semistability: semistability means the slope is weakly decreasing on subrepresentations, and stability means the slope is strictly decreasing on subrepresentations. Now if we have a dimension vector with this special coprimality property, then asking for the slope to decrease or to weakly decrease is equivalent, because there is no proper subrepresentation which can have the same slope. Wonderful, this means that the stable and the semistable locus are exactly the same. And this has a very nice consequence, namely, geometric invariant theory tells you that for the semistable locus you always have a very nice quotient, and for the stable locus this quotient is in fact geometric. So a geometric quotient exists here; it doesn't matter whether we write semistable or stable, it's the same, and a geometric quotient exists. The group action of G_d here is not free: there is always the group of scalars, diagonally embedded, which acts trivially.
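(Aside: the coprimality condition just introduced is completely mechanical to check, so here is a minimal Python sketch of it. The slope is taken as mu(d) = theta(d)/kappa(d) as above; the two-vertex example and the particular choice of theta and kappa at the end are only for illustration and are not from the lecture.)

from fractions import Fraction
from itertools import product

def slope(d, theta, kappa):
    # mu(d) = theta(d) / kappa(d), with theta and kappa linear forms given by integer vectors
    return Fraction(sum(a * x for a, x in zip(theta, d)),
                    sum(b * x for b, x in zip(kappa, d)))

def is_mu_coprime(d, theta, kappa):
    # d is mu-coprime if no proper non-zero sub-dimension vector e
    # (componentwise 0 <= e <= d, e != 0, e != d) has the same slope as d
    mu_d = slope(d, theta, kappa)
    for e in product(*(range(x + 1) for x in d)):
        if any(e) and e != tuple(d) and slope(e, theta, kappa) == mu_d:
            return False
    return True

# illustration: two-vertex quiver, theta = (1, 0), kappa = (1, 1), so mu(d) = d1 / (d1 + d2)
print(is_mu_coprime((2, 3), (1, 0), (1, 1)))   # True: no smaller e has slope 2/5
print(is_mu_coprime((2, 2), (1, 0), (1, 1)))   # False: e = (1, 1) has the same slope 1/2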
Because, if you remember, the group action on the space of quiver representations was given by some kind of conjugation, and conjugation is of course trivial on scalars. So the scalars act trivially, and what you are honestly taking here is a quotient by the projectivized group. But I haven't introduced that, so let me just stick with G_d. This geometric quotient is then a PG_d-principal bundle, and taking the projectivization amounts to factoring the multiplicative group of the field out of the group. And this ugly little factor here is just the virtual motive of the multiplicative group of the field. So this explains the factor: we always have an annoying little factor coming from the virtual motive of the multiplicative group, because we don't have a free action of the group G_d. But anyway, in this very special situation, the quotient of the motives, or say the motive of the stack, is then more or less the motive of this quotient space. And this is the lowest order term in this series. Aha, this is great news, and it really compares formally to factorization number one: using a certain formal manipulation with this generating series, suddenly honest geometric information pops up, the actual motive of a space. There it was the motives of Hilbert schemes, and here we get, as the lowest order terms of these local series, more or less the motives of moduli spaces parameterizing stables. But only under this special coprimality assumption. Which is not too bad; we know this phenomenon from moduli spaces of vector bundles, where the good theorems are always about the coprime case, rank and degree of the bundle coprime, and in general you get singular moduli spaces and things are much more complicated. But we are here in the quiver situation, which is supposed to be much simpler than moduli spaces of vector bundles, because at the end of the day we are just doing linear algebra. So we can be very brave and ask: what is the meaning of the other coefficients in here? The lowest order coefficient, the t-to-the-d coefficient for such a coprime d, gives the motive; so what about the other coefficients? OK, and now comes the extremely brave idea of taking this local series, under a certain technical hypothesis, and just brute-force factoring it into an infinite product. The exponents appearing in this brute-force factorization should then have a meaning: these are the DT invariants. So now we factor this local series. In the first step we took the factorization of the global series into the slope-local series, and now we want to factor each local series. To do this factorization efficiently we need a bit of notation, because otherwise we will have lots and lots of very ugly three-fold infinite products, and we just want to get them out of the notation. And that's why we introduce the so-called plethystic exponential. So this is a very small notational intermission before we come back to the geometry. To simplify a huge infinite product, we make the following definition of the so-called plethystic exponential. From the term exponential you can already guess what it is: an exponential is a transformation which maps sums to products, and it is more or less defined by this functional equation. We want to do this for formal power series.
It should convert sums to products, and then the very simple-minded idea is to convert a monomial into a geometric series. So let me give you the axioms defining the plethystic exponential. If you apply Exp to a monomial, and a monomial for us is something like L to the i-half times t to the d, where d is a dimension vector and i is an integer (these are our monomials in the localized motivic ring, the motivic quantum space), then I define the plethystic exponential of this monomial to be the corresponding geometric series, one over (one minus L to the i-half times t to the d). The other axiom is that it should convert sums into products: Exp of f plus g should be Exp of f times Exp of g, like you expect from an exponential. So it's really like an exponential, but the initial condition is: transform monomials into geometric series. This is a wonderful tool for writing down huge infinite products. (f and g should commute first?) Yes, they should commute, actually, so we will only do this under a certain technical assumption, which I will introduce in the next definition. So this defines a continuous group homomorphism from... well, now I have to be really careful. Because I don't know what motives I have in this Grothendieck ring of varieties, I will just take the subring generated by L, properly localized. So this is a subring of our motivic ring, and then I adjoin all these t to the d. And I consider the maximal ideal in there consisting of all series with constant term zero, the set of all f in there such that f of 0 is 0, which is just the maximal ideal, and I map it to the set of all f where the constant term is 1, because the constant terms of all these geometric series are 1. So I take power series with constant term 0 and map them to power series with constant term 1. And it is a group homomorphism, where on one side you take the additive structure and on the other the multiplicative structure, which is expressed in this functional equation. And it's continuous because we want to extend this to infinite sums, to all the infinite sums which we are allowed to take in this formal power series ring. (Can you extend the plethystic exponential to the whole motivic ring?) Ha, there is lots of work on this. Yes, basically you can do this. The plethystic exponential is defined whenever you have a lambda ring, so you have to find the right lambda ring structure on this motivic ring; you have to define the Adams operations, and they are defined using symmetric powers of varieties. There is lots of work on doing this in that generality, but luckily for us we only have to work with everything that is generated by the Lefschetz motive, so we can do the simple thing here. OK, and I'm not absolutely precise here: this defines a continuous group homomorphism in case this ring... (f of 0 equal to 1 on the right?) f of 0 is 1, yes, thank you very much; and no, I'd better not plug 1 into this series. ...in case this ring is commutative. Or actually, we will now work in subrings which are commutative. OK, so that's the general setup. (Oh yes, sorry: you said we define Exp also for L to the i over 2; do we take those?) Yes, yes, everything which is generated by L in here, including the half powers, and all the localizations we already have. Okay; well, the problem is that doing this formally somehow disguises the very simple nature of this construction, so let's better do an example.
Namely, let's come back to the motivic generating series of the trivial quiver without any arrows, which we computed yesterday as an exercise, and let's use the plethystic exponential terminology to simplify it. We computed yesterday that this series is the product over i from 0 to infinity of one over (one minus L to the (i plus one-half) times t). OK. Now we want to rewrite this as a plethystic exponential of something, and the rule is really simple: this product converts into a sum, that's what Exp is good for, and here you see a geometric series, which transforms back into a monomial. And that's it; it's precisely these two axioms. You transform the product into a sum and each geometric series into a simple monomial. But now you see that you can simplify the infinite sum inside, because it is again a geometric series: you can rewrite this as Exp of the sum over all i of L to the i times a constant part, that is, as Exp of L to the one-half times t divided by (one minus L). And for good reasons I normalize this and multiply numerator and denominator by L to the minus one-half, to finally arrive at t divided by (L to the minus one-half minus L to the one-half). And all of a sudden we see this denominator, which popped up naturally in the geometry above: it is just dividing by the virtual motive of the multiplicative group. So this is the first indication that this Exp is something reasonable. We have now seen that this factor magically pops up when factoring the motivic generating series of the trivial quiver, and that it also appears in the lowest order terms of the local motivic generating series. Now let's unify everything into one central definition. Yes: formal definition. We have a quiver, and we also fix a stability as before, because we want to consider the slope-local series; mu is a stability, x is a fixed slope, and now we make the assumption that the Euler form of the quiver, when restricted to all dimension vectors of the fixed slope, this codimension one space of dimension vectors, is symmetric. This is the important technical assumption: the Euler form restricted to the codimension one space of dimension vectors of fixed slope should be symmetric. Because this implies that a certain part of the motivic ring, namely the part where we only consider monomials of this slope, is commutative; remember that the twist in the motivic ring was defined using the antisymmetrized Euler form. Let's note this: the assumption implies that the part of the motivic ring spanned by all t to the d with d of slope x is commutative. That's the reason we make this assumption: a certain local part of this formal power series ring is commutative. And then we define certain rational functions DT_d^mu of Q. They depend on the dimension vector d of slope x, on the stability, and on the quiver. A priori this is just a rational function in the half Lefschetz motive, nothing more. And you define it by factoring the local series: you take the local series and write it as a plethystic exponential of the following form. First a standard term, with L to the minus one-half minus L to the one-half in the denominator, which by now doesn't come as a surprise, and then the sum over all dimension vectors of slope x of DT_d^mu of Q. Well, this "of Q" I don't like; let me omit it.
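(Aside: written out, the definition just given reads as follows, as far as it can be reconstructed from the spoken formula and from the trivial-quiver computation above; the exact placement of the prefactor and the sign conventions should be checked against the written account.

  A_{Q,x}^{\mu\text{-sst}}(t) \;=\; \operatorname{Exp}\!\Big( \tfrac{1}{\mathbb{L}^{-1/2}-\mathbb{L}^{1/2}} \sum_{d\neq 0,\ \mu(d)=x} \mathrm{DT}^{\mu}_{d}\, t^{d} \Big), \qquad \mathrm{DT}^{\mu}_{d} \in \mathbb{Q}(\mathbb{L}^{1/2}).

With this reading, the trivial quiver, whose series was just rewritten as Exp( t / (L^{-1/2} - L^{1/2}) ), gets DT_1 = 1, matching the first example discussed in a moment.)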
So the quiver is not a variable; the variable is still the Lefschetz motive, somehow, and then times t to the d, just like this. OK, so now that's the central definition which we will explore for the rest of the talk, and let's try to digest it and see what the logic of this definition is. First of all... yes, a question about notation? (It's about the t to the d: do you also include the neutral element, I mean d equal to 0?) Hmm, I guess so, yes. OK, I just want efficient notation; let me just formally allow 0. It somehow worked better with the definition of the local series, where I just put this "one plus" in front, and here it just doesn't work like that. OK. So our logic is: using the Harder-Narasimhan filtration, we can factor the whole motivic generating series into local contributions from the slopes. The proof of concept is that at least the lowest order terms of these series have a geometric meaning, because they encode the motives of actual moduli spaces. Aha, so it was a good idea to do this factorization. Now we want more: we want to factor this whole thing. The universal tool for writing down such factorizations into huge products is the plethystic exponential. To have the plethystic exponential well defined, we need some commutativity, and this local commutativity is contained in the definition, in the assumption that the restriction of the Euler form to a fixed slope is symmetric. And then we just take this series and factor it into an infinite product. A priori the quantities appearing there could have all sorts of crazy denominators, so let's just say each is a rational function in the half Lefschetz motive, to be on the safe side. And we always have this standard term, which doesn't come as a surprise, because we already found it when doing the factorization for the trivial quiver; so that's a factor which we cannot avoid, and it is also reasonable to expect this factor from the geometry we have seen above. So this hopefully motivates this very brave definition of the motivic DT invariants of a quiver. OK, so now we have something to explore. Do you have a question about the plethystic exponential? Yes, please. (I am a bit confused about this: where does one hide the factorials? Because it is an exponential, you know, you have the one over n factorials.) Yeah, OK. See, I didn't want to talk about lambda rings, so I always prefer this ad hoc definition of the plethystic exponential: it has the functional equation of an exponential together with the initial condition that a monomial goes to a geometric series. When you define this in a general lambda ring, you define it as follows: the plethystic exponential is the usual exponential composed with psi, which is the generating series of all Adams operations; so psi is the sum over all i of one over i times psi_i, in a lambda ring with Adams operations psi_i. In a lambda ring you have these lambda^i operations, which give the whole ring the structure of a module over symmetric functions; the lambda^i correspond to the elementary symmetric functions, and then you do the base change to the power sum functions, and those give the Adams operations. So it's all this formal lambda ring business; this is the way you can do it in arbitrary lambda rings, and there you have the honest exponential, with its factorials.
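(Aside: since the formula Exp = exp composed with the Adams-operation series psi was just mentioned, here is a minimal computational sketch of the plethystic exponential in a single formal variable t, truncated at a fixed order. The Lefschetz motive is treated as a plain positive symbol L, psi_n acts by L -> L^n and t -> t^n, and everything beyond that, the function name and the truncation orders, is illustration only, not part of the lecture.)

import sympy as sp

L, t = sp.symbols('L t', positive=True)

def plethystic_exp(f, order=5):
    # truncated Exp(f) = exp( sum_{n >= 1} psi_n(f) / n ), with psi_n : L -> L^n, t -> t^n;
    # f is assumed to have no constant term in t
    s = sum(f.subs({L: L**n, t: t**n}) / n for n in range(1, order + 1))
    return sp.series(sp.exp(s), t, 0, order).removeO().expand()

# first defining property: a monomial goes to a geometric series
print(plethystic_exp(sp.sqrt(L) * t))
# 1 + sqrt(L)*t + L*t**2 + L**(3/2)*t**3 + L**2*t**4, i.e. 1/(1 - L^(1/2) t) truncated

# second defining property: sums go to products, up to the truncation order
f, g = sp.sqrt(L) * t, L * t**2
difference = plethystic_exp(f + g) - sp.expand(plethystic_exp(f) * plethystic_exp(g))
print(sp.series(difference, t, 0, 5).removeO())   # 0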
OK, and this lambda-ring description also easily proves that you have an inverse, a plethystic logarithm: Log is just psi inverse composed with the usual formal power series log, where psi inverse arises from psi by Möbius inversion, as the sum over i of the Möbius function of i, divided by i, times the Adams operation psi_i. Thanks for the question, because we will come back to Möbius inversion anyway in a minute. So at length I have tried to explain the logic of why we define these DT invariants, and now let's see some examples. Computing any single example is quite hard, but at least we... yes, please? (How do we know that such a factorization is possible at all?) Yes, you are right; OK, I forgot to tell you something, namely that this Exp not only defines a continuous group homomorphism, but actually an isomorphism of groups, because Exp has an inverse, Log. And this in particular tells you that you can factor any series with constant term 1 in this way. Sorry for that; that is of course the important point. Exp has an inverse Log, so you can just apply Log here, and this defines the DT invariants. So what examples do we have? One example is on the blackboard, namely the trivial quiver. There we have this factorization, and so for the trivial quiver we find (we don't need any stability, we can just take the trivial stability here) that DT_1 is 1 and all other DT are 0: DT_d is 0 for all d strictly greater than 1. OK, that's the first example. The second example is the one-loop quiver. You can also do this one, because we saw yesterday how to factor the motivic generating function of the one-loop quiver into an infinite product. Take that infinite product from yesterday, reinterpret it as an Exp, and you can read off the DT invariants: the DT invariant in dimension 1 is the virtual motive of the affine line, and all other DT are 0. Now this is not too bad; it already gives us a hint that the DT invariant is of geometric origin. Because what is the classification of representations for the trivial quiver? It is the classification of vector spaces: every vector space is a direct sum of copies of the unique one-dimensional space, which explains the invariant 1 in dimension 1. For the loop quiver we are looking at matrices up to conjugation, think Jordan canonical form, and there is a continuous classification parameter involved, namely eigenvalues; this affine line encodes eigenvalues, it is the moduli space of possible eigenvalues of a matrix. The third example, which is also on the blackboard, is the one where we saw the lowest order terms, the coprime case, and this we should also note down. So, third example: Q is now arbitrary, we need the assumption that d is coprime for the chosen stability, and then the DT invariant is, as we have seen, the virtual motive of the stable moduli space M_d^{mu-st}(Q). This is precisely the lowest order term in the local generating series, and if you rewrite the series as an infinite product, these lowest order terms survive anyway, so it is not difficult to see that this is true. In all cases, apparently, the DT invariant carries some geometric information, and I hope this prepares us all for the final theorem. Yes, the final theorem for today. And the final theorem is that the DT invariant is, in fact, always of geometric origin.
It's not the motive of an actual moduli space, at least not for mathematicians; for physicists it is, because physicists believe in a certain moduli space of quiver representations which mathematicians cannot define. But OK, the theorem is: DT is geometric. That's the slogan. More precisely, under all the assumptions which we need for the definition, we have the following: DT_d^mu equals... OK, so what's the geometry? The geometry is the moduli space of semistables. For general d we have to be careful: in the coprime case I convinced you that stable and semistable is the same anyway, so we don't have to care, but in general we do, and we take the moduli space of semistables. In general this is typically a very singular moduli space, because it is really a GIT quotient, but you have different types of stabilizers, and so in general you have severe singularities, not just orbifold singularities; really severe singularities, vertices of cones, for example. At least normal singularities, but that's it. So it's a singular space, and you can already guess that taking the motive of a singular space is not as well behaved as the motive of something smooth projective. So what can we do better? Well, there is a cohomology theory which is perfectly well suited for singular varieties, namely intersection cohomology. And so the final result is: you take compactly supported intersection cohomology with rational coefficients of this moduli space, and you take the Poincaré polynomial of all these groups, where the polynomial summation parameter is, of course, again our minus square root of the Lefschetz motive. So we take the Poincaré polynomial of compactly supported intersection cohomology: take the dimensions, take the sum over all i, and that's almost it, except for a little twist factor. Well, you guess what: again you have to twist by minus the square root of the Lefschetz motive, raised to the Euler form of (d,d) minus 1. This holds if there is at least one stable point; if there is no stable point, which might happen, if you have only properly semistable points, then the DT invariant refuses to exist, it is just 0. So this is the precise result that the DT invariant is geometric: if there is a stable point, it is this intersection cohomology Poincaré polynomial, and if there is no stable point, it is 0. And this is the interpretation, by Sven Meinhardt and myself, of something which was conjectured, or believed, in physics, by Manschot, Pioline and Sen, for example. Namely, they say the DT invariant is the actual motive of some moduli space. Physically, one really wants something like: the DT invariant is the virtual motive of some moduli space, let me call it the physics moduli space, but nobody can define this mathematically. It is generally believed to be true that the DT invariant is really an actual motive of some space which we just cannot define mathematically; and the replacement for this vague dream (it is not a theorem) is the precise mathematical statement that it is the Poincaré polynomial in intersection cohomology. (What if we can find a small resolution, or better?) Yes, of course, once you have a small resolution you are done, because the cohomology of a small resolution is the intersection cohomology of the variety. And if we are lucky and the whole cohomology is just pure Tate, then the Poincaré polynomial is the same as the virtual motive.
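(Aside: in symbols, the theorem as just stated reads, with the twist exponent reconstructed from the spoken formula and checked against the trivial-quiver and one-loop examples above,

  \mathrm{DT}^{\mu}_{d} \;=\; (-\mathbb{L}^{1/2})^{\langle d,d\rangle - 1}\,\sum_i \dim \mathrm{IH}^i_c\big(M^{\mu\text{-sst}}_d(Q),\mathbb{Q}\big)\,(-\mathbb{L}^{1/2})^{i}

if there is at least one stable point, and DT_d^mu = 0 otherwise; the precise normalization should be taken from the written account of this result.)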
But think about moduli spaces of vector bundles on curves: only in rank 2 are desingularizations of the singular moduli spaces known. For rank 2 and even degree you know a desingularization of the moduli space with only orbifold singularities; there is the general procedure of Frances Kirwan for desingularizing such moduli spaces, but the bookkeeping is terrible, and nobody knows what the final outcome would be. So this is unsolved. I'm not sure that a small resolution exists; I was looking for one for years, but it's not very promising. (We have a question in the Q&A: do you always need to assume that the Euler form is symmetric?) Yes. The assumption which I used to actually define the DT invariants continues to hold; otherwise DT is not even defined. I mean, you could define it, but it would just be nonsense, just not interesting. So yes, still under this assumption that you have this local symmetry. (Is it clear that this should live in the ring generated by the Lefschetz motive?) No, that's only a posteriori. This theorem is, by the way, a proof of the so-called integrality conjecture for DT invariants: that they are really polynomial in the Lefschetz motive, or in the half Lefschetz motive. (Another question: in the last theorem, is it easy to see an example where the DT invariant is not the class of a quotient stack of the semistable locus by G_d?) Yes, I think so, but it takes a bit more time. A very interesting class of examples is a quiver which is just a bunch of loops, say m loops with m at least 2. Then I would say that even for dimension 3 one can easily compute the DT invariant just from its definition, but nobody knows a small desingularization of the moduli space; so even in dimension 3, for triples of matrices, I would say it is unknown whether it is the actual motive of something. Oh, and by the way, this is a good example anyway. Now that we know that the DT invariant behaves polynomially in the half Lefschetz motive, we can specialize L at 1; this wasn't clear a priori, because a priori it was only a rational function in L. We specialize at L equals 1, and what we get is the so-called numerical DT invariant. And at least for loop quivers there is a closed formula; I think it's almost the only case where you have a closed formula for these numerical DT invariants. Let me show you the formula. So this is the m-loop quiver, and the numerical DT invariant, the specialization of the DT invariant at L equals 1, is given by, now I'm not sure about the convention, so let me say plus or minus, because I'm always mixing up the signs, a Möbius inversion: one over d squared, times the sum over all divisors e of d of the Möbius function of d over e, times an ugly sign, minus one to the power (m minus 1) times (d minus e), times a huge binomial coefficient, (m e minus 1) choose (e minus 1). That's a sample formula of how DT invariants look, and it is almost the only case where you have such an explicit formula; we have two or three more. You see, first of all, that the numerical DT invariants grow pretty fast, exponentially, because as d grows you have a leading term like the binomial coefficient (m d) choose d, which is pretty large. In general it's complicated because of this Möbius inversion.
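(Aside: the closed formula just written down is easy to evaluate by machine. Here is a minimal sketch, with the overall sign left open exactly as on the blackboard; it also checks the divisibility by d squared, that is, the integrality claimed next, for a few small values of m and d.)

from sympy import binomial, divisors, Rational
from sympy.ntheory import mobius

def numerical_dt_m_loop(m, d):
    # (1/d^2) * sum over divisors e of d of
    #   mobius(d/e) * (-1)^((m-1)(d-e)) * binomial(m*e - 1, e - 1),
    # up to the overall sign left open in the lecture
    total = sum(mobius(d // e) * (-1) ** ((m - 1) * (d - e)) * binomial(m * e - 1, e - 1)
                for e in divisors(d))
    return Rational(total, d * d)

for m in (2, 3):
    for d in range(1, 7):
        value = numerical_dt_m_loop(m, d)
        assert value.is_integer          # the Möbius sum is indeed divisible by d^2 here
        print(m, d, value)

For m = 2 and d = 1 to 6 this gives 1, 1, 1, 2, 5, 13, all integers, consistent with the integrality claim and with the rapid growth just described.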
And I don't want to give you as an exercise that this sum is a priori divisible by d squared, as I claim here. So I claim that this is integral; in particular, this Möbius inversion over binomial coefficients should be divisible by d squared. This is terrible. (It is an integer?) It is actually an integer. My proof of that is number theoretic. (But the DT invariant at L equals 1 always must be an integer, no?) Yes: if you look at this geometricity statement, you just take the Euler characteristic in compactly supported intersection cohomology, so it really is just an intersection cohomology Euler characteristic of a moduli space. (Shouldn't it then be an alternating sum over i, with signs?) Yes; actually there is a general purity result, so only the even intersection cohomology degrees can be non-zero. I haven't claimed this here, but there is some purity going on. So this is how they typically look. And then finally, to finish, let me give you three pictures relating everything to the huge topic of scattering diagrams. (Oh yes, please. It's over on the right, but I can't read it from here. Like this? Yeah.) (Does this formula also appear in some other geometric theory?) There are a few other places where such a formula appears; I have seen one or two references where this appears. It also shows up in some setups as certain Gopakumar-Vafa invariants, which look similar; I would have to check these sources again. But expressions like this, such a Möbius inversion over binomial coefficients, appear here and there, and the universal source somehow is always the DT invariants of the multiple loop quiver. This is really strange; nobody really knows why this happens. Just time for a final picture, is that OK? It relates everything to scattering diagrams. I just want to show you what happens for a quiver with two vertices. So, final example: Q is the quiver with two vertices connected by m arrows, and there is a distinction between m equal to 1, m equal to 2, and m at least 3. We need a stability function; the stability function is the slope of (d1, d2), defined as (d1 minus d2) divided by (d1 plus d2), and then you can check that, locally, the Euler form is symmetric. And then I will show you a picture of the support of the DT invariants: the set of all dimension vectors where the DT invariant is non-zero, not the actual value, just where it is non-zero. We have to distinguish whether m equals 1, m equals 2, or m is greater or equal to 3. For m equals 1 you have precisely three non-zero DT invariants, for the dimension vectors (1,0), (1,1) and (0,1), which happens to be exactly the set of positive roots of an A2 root system. For m equals 2 you have two infinite series and a special case here. At the special case the numerical DT invariant is 2, by the way: along the two series it is always 1, and at the special point the DT invariant is the virtual motive of P1, but you do not get a non-zero DT invariant at any multiple of it. So this is basically like the positive roots of an affine A1-tilde root system: you have the real positive roots, and here is the imaginary root, but you don't take its multiples. And for m at least 3 you have the lattice points of a certain hyperbola, and then a region, a whole cone, which is completely dense: all the DT invariants there are non-zero, and they actually explode, the numerical DT invariants grow exponentially in this whole cone.
And that corresponds somehow to the hyperbolic rank 2 root systems. So there is an intricate relation to root systems, which has to do with Kac's theory of indecomposable quiver representations. These are the lattice points of a hyperbola, and if you take the rays through all these points, then you recover certain scattering diagrams, which appear in all sorts of Gromov-Witten theory, in the tropical vertex, in cluster theory, and so on. OK, that's enough for today. Thank you very much. Any questions? (For a general quiver, is it also related to root systems?) Yeah, but the connection to infinite root systems is weaker in general, definitely weaker; this rank 2, two-vertex quiver is exceptional in that respect. But there is still something. For a general quiver Q, let me just remark that if the DT invariant is non-zero, then the quadratic form associated to the Euler form has value less than or equal to 1 at d, so you have a root of the corresponding infinite root system. But you don't have a converse. (So the root condition alone is not sufficient?) No; the reason it is not sufficient is that you have to make this choice of stability. And OK, there is a partial converse. Yes? Oh, thanks for the question, because I think this is actually not written anywhere. There is a partial converse, namely: if the value of the quadratic form at d is less than or equal to 1, and d is a so-called Schur root, and you make a sufficiently generic choice of the stability, so you avoid finitely many hyperplanes in the space of stabilities, then you can conclude that the DT invariant is non-zero. (But is it true that if the DT invariant is non-zero, then d is a root?) Yes, if DT is non-zero, then d is definitely a root. (And this is not only for the hyperbolic root systems?) Yes, yes, OK, yes, it holds in general. (Why is it called Donaldson-Thomas, by the way?) That's a good question; we have to ask Maxim. Well, DT invariants are the mathematical precisification of BPS state counts, and there are string theorists who, to any quiver, associate some kind of string theory for which you can do reasonable BPS state counts. But no more than that. (We have a question in the Q&A: is it possible to decorate A_Q by Chern classes of bundles over the semistable locus R^sst?) Yeah, OK. So on these moduli spaces of semistables, or, in case stability and semistability coincide, you have tautological bundles, and you have the Chern classes of these tautological bundles. And you want to... OK, I don't know. The general answer would be: there are of course many, many possibilities of decorating this whole theory by taking another base ring than our R_mod, so not working just with the pretty coarse information of the motive of the variety, but bringing some K-theory into the game, using Chern classes, or classes of bundles or sheaves. There are lots of things one could do; you could think about reasonable replacements for the base ring R_mod where you can do all this. (And I have a question: is there any relation between these DT invariants and the virtual motives of the Hilbert schemes that you described?) Yes, OK, that's the DT/PT story which I have briefly mentioned. What you can do, and you can figure it out as an easy exercise, or I can also give you a reference...
But OK, so we have factored this local series: we have written it as Exp of one over (L to the minus one-half minus L to the one-half) times the sum over all d of DT_d times t to the d. That was our factorization. We also have a local analogue of the thing I explained yesterday as factorization one: you can define the logarithmic q-derivative, the same logarithmic q-derivative we saw yesterday, but now on the local level, and what you get is indeed a generating function of the local, slope-x, Hilbert schemes of the quiver. Now combine these two: plug the factorization of the local motivic generating series into numerator and denominator here, and you have a relation between the DT invariants and these local Hilbert schemes. This is where DT/PT for quivers comes from, and it is great for computing the numerical DT invariants; that's the way you compute them, because if you combine these two formulas you can directly specialize L to 1. (But what is the PT side in this equation?) Well, OK, this terminology is very formal; there is no explicit PT side. (I have a different question: how computable are these DT invariants for a general quiver? Is there a way to go from one quiver to another if they are somehow related?) No. No. You really have to do it separately for every quiver. Everything is nested recursions, doubly or triply nested recursions, starting from the motivic generating function, which is explicit, and a computer can easily do the calculations for you. But there is no way to pass between different quivers, because if you do some easy operation on the quiver, adding a vertex, adding an arrow, deleting an arrow, you completely change the representation theory of the quiver. For example, Gabriel's theorem tells us that if the underlying graph of the quiver is an ADE Dynkin diagram, then we have no moduli at all, the classification is discrete. But the underlying graph being ADE Dynkin is absolutely not stable under arrow insertion; it is stable under deletion, but not under insertion. You can easily take an ADE Dynkin diagram, add a single arrow, and it is completely out of the Dynkin or affine classification. So unfortunately you can't do things inductively over the quiver; you have to do everything separately for each quiver. Unfortunately. Let's postpone everything else to the exercise session, which will take place in an hour anyway. So let's thank Markus again. Thank you. Thank you.
|
We motivate, define and study Donaldson-Thomas invariants and Cohomological Hall algebras associated to quivers, relate them to the geometry of moduli spaces of quiver representations and (in special cases) to Gromov-Witten invariants, and discuss the algebraic structure of Cohomological Hall algebras.
|
10.5446/55043 (DOI)
|
OK, so let's first summarize what we did yesterday, also to fix notation. We had this motivic ring, which will be our coefficient ring for all the computations we perform today. I write it as R_mod; it is the Grothendieck ring of complex algebraic varieties, so the free abelian group on all varieties up to isomorphism, modulo the cut-and-paste relations, then localized at the motive of the affine line, which we want to be invertible, and we also want a square root of it, and we want natural denominators (1 minus L to the i) inverse, for i greater or equal to 1. This is a ring in which we have virtual motives of varieties; that's where the virtual motives live. OK, that was the one thing I explained yesterday. The second thing was: if Q is a quiver, we can attach to it the category of finite dimensional representations, which has very nice homological properties. One homological property was even something like a Serre duality, which I only briefly mentioned. But the most important thing is that we have an explicitly computable homological Euler form: the category of representations is hereditary, so of global dimension 1, and the homological Euler form is something we know explicitly; maybe I will redefine it today. And then, finally, we considered the stacks of isomorphism classes. The moduli stack of isomorphism classes in this category is nothing other than a certain affine space with an action of a nice algebraic group: we take the quotient stack and take the disjoint union over all dimension vectors. So this is the quotient stack, this was an affine space, and this was a product of general linear groups. And using this, I then defined the motivic generating function, which I will define again in a minute. But that's the summary of what we did yesterday. This is always an irritating moment for me in a lecture series: I summarize what I did in the previous talk and it takes just five minutes, and then I always think, OK, I could have done this in five minutes, but it took me an hour, so what was going wrong? It was the explanations, and trying to give you a feeling for what all this is. So I hope this whole hour yesterday wasn't useless for you. OK, so anyway, now we come to the formal definition of the motivic generating series in the motivic quantum space. It is a generating series, and something of it feels like a zeta function, but very important is also the coefficient ring in which this generating series actually lives. So I will define the following: this will be the motivic quantum space. It is a formal power series ring with as many variables as you have vertices in the quiver, the coefficient ring is our motivic ring R_mod, and everything is slightly twisted by the quiver structure. This, as we will now see, is the complete R_mod-algebra, complete because we are looking at formal power series, with basis the t to the d, where d is a dimension vector: the one discrete invariant of a quiver representation, encoding the dimensions of the vector spaces involved. And multiplication is defined as follows; so this is something like a quantum version of a formal power series ring: the product of t to the d and t to the e is minus L to the one-half, raised to the antisymmetrized Euler form applied to d and e, times t to the d plus e. That's our base ring for all the computations which follow. So it's formal power series in as many variables as we have vertices in the quiver.
And instead of writing down the product over all i of t_i to the d_i, we formally work with these monomials t to the d, which is just shorthand. (It's the minus...?) Yes, it's the minus; it's the antisymmetrization of the Euler form. (Because on the diagonal it is then zero?) Exactly, exactly, it's the antisymmetrized Euler form. And this we take as the multiplication. So it is an associative unital algebra, the unit is just t to the 0, and it is complete with respect to the augmentation ideal, the ideal generated by all t to the d for d non-zero. So we have formal power series, which we will use. (One question, yes please: what is the motivation for introducing this twist?) Well, the most tricky point is why you antisymmetrize the Euler form, and then this antisymmetrized Euler form is like the honest Euler form of some 3-Calabi-Yau situation; so you are, in a way, imitating a 3-Calabi-Yau situation with a quiver which is hereditary, of global dimension one. That is, philosophically, the explanation. But the practical feature is simply that the formulas are as smooth as possible with this twist. (Is this a motivic quantum torus?) Yeah, I call it the motivic quantum space, because for a quantum torus I would like to have inverses of the t_i, so that would be over some Laurent polynomials. So, motivic quantum space. And inside this... OK. So, very pragmatically, the motivation is that the formulas are very smooth when you work in this ring. For many years I personally worked with a different ring, twisted just by the Euler form, not by the antisymmetrized Euler form, and I always had problems with formulas which were not really nice. This is the way the formulas are easiest to remember, although you always have this ugly little minus square root of the Lefschetz motive inside; that you cannot get rid of. OK, so now let me finally introduce the motivic generating function, and then explain in what sense it knows the quiver. A_Q is defined as the sum over all dimension vectors of the virtual motive of the space of representations, divided by the virtual motive of the base change group, times the formal variable t to the d. This lives in the motivic quantum space, and the series is the motivic generating series. Instead of an A you could also write a Z; whatever you prefer, a zeta function or a partition function, it really is something of this kind, as we will now see. Because that's the series we will try to factor. (That's the actual motive, normalized with some power of L?) Excuse me? The virtual motive, yes: the virtual motive was obtained by normalizing the motive by an appropriate half power of the Lefschetz motive, for X irreducible, so that, for example, Poincaré duality manifests itself as invariance under exchanging L and L inverse. This was the example we saw for projective space. That was the twist which is necessary, and we will actually use this twist now to make the motivic generating series more explicit. Yes, let's do this; that is maybe the perfect point to formulate a little lemma. And you know the proof of this lemma if you have done the exercise which I gave you yesterday: the exercise was to compute the motive of the group GL_n. So let's compute this thing here. Let's start. OK.
(So this motivic ring does not depend on the quiver, but the function does, right?) OK, so we will see after this calculation that the function depends on the symmetrized Euler form, and the ring depends on the antisymmetrized Euler form. So altogether, the datum of this series in this ring recovers the quiver; we will see this in a second. The multiplication in the ring really depends on the antisymmetrized Euler form, and the function only depends on the symmetrized Euler form, which we will now see when I formulate the lemma. OK, so the lemma is just an explicit calculation of this. This R_d(Q), if you recall the definition from yesterday (maybe I should recall it, why not), is just an affine space: R_d(Q) is the direct sum, over all arrows of the quiver from i to j, of the space of linear maps from a d_i-dimensional to a d_j-dimensional space. So this is just an affine space of some dimension, and its motive is just a power of the Lefschetz motive. The group G_d which acts on it is a product of general linear groups, one for each vertex, so its motive is a product of motives of general linear groups, which we calculated yesterday in the exercise. And if you just plug the result of yesterday's exercise into the definition, then you see that we can rewrite A_Q as the sum over all d in N to the Q_0, with numerator (minus L to the one-half) to the power minus the Euler form of (d,d), times t to the d, and with denominator a product of Pochhammer symbols in which you plug in the inverse of the Lefschetz motive: for each vertex i, the product (1 minus L inverse) times ... times (1 minus L to the minus d_i). So this is something like an L-hypergeometric series, like a q-hypergeometric series, except that our quantum parameter is always the Lefschetz motive. So this really feels like an L-hypergeometric series. OK, and this lemma is really just a direct calculation with the motives of this affine space and this group. From it we can now draw the remark that the datum of this series together with the ring knows the quiver. The motivic quantum space knows the antisymmetrized Euler form, because that is built into the definition of the multiplication. OK. The motivic generating series only involves the quadratic form defined by the Euler form; but from the quadratic form you can recover the symmetrized bilinear form by the usual polarization trick. So A_Q knows the quadratic form, and then, after polarizing, it knows the symmetrization. So both together know the whole Euler form of the quiver, because we can decompose it in the standard way into a symmetric and an antisymmetric part. So we know the Euler form, and from the Euler form we can directly recover the quiver, because the Euler form just encodes the vertices and the arrows. OK, so this answers the question from yesterday of how much of the quiver is seen by this: A_Q itself only sees the symmetrization, but the important thing is to have it in this ring, and then we really recover the whole Euler form. OK. So time for first examples, namely examples where you don't really see all this quiver machinery yet. Yes, please?
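(Aside: the lemma is easy to turn into a small computation. The sketch below encodes a quiver by its number of vertices and its list of arrows, uses the standard explicit formula for the Euler form of a quiver, namely the sum of d_i e_i over vertices minus the sum of d_i e_j over arrows from i to j, and returns the coefficient of t^d in A_Q as in the lemma. Here v is a formal square root of the Lefschetz motive; the encoding and the names are mine, only for illustration.)

import sympy as sp

v = sp.symbols('v', positive=True)   # v plays the role of L^(1/2)

def euler_form(n_vertices, arrows, d, e):
    # <d, e> = sum_i d_i e_i  -  sum over arrows a: i -> j of d_i e_j
    return (sum(d[i] * e[i] for i in range(n_vertices))
            - sum(d[i] * e[j] for (i, j) in arrows))

def aq_coefficient(n_vertices, arrows, d):
    # coefficient of t^d in A_Q, following the lemma:
    #   (-L^(1/2))^(-<d,d>)  /  product over vertices i and j = 1..d_i of (1 - L^(-j))
    numerator = (-v) ** (-euler_form(n_vertices, arrows, d, d))
    denominator = sp.Integer(1)
    for d_i in d:
        for j in range(1, d_i + 1):
            denominator *= (1 - v ** (-2 * j))
    return numerator / denominator

# trivial quiver (one vertex, no arrows) versus the one-loop quiver, both at d = (2,):
print(aq_coefficient(1, [], (2,)))          # here <d,d> = 4, so the numerator is (-v)^(-4)
print(aq_coefficient(1, [(0, 0)], (2,)))    # here <d,d> = 0, so the numerator is 1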
(When you say that it knows about the quadratic form: when you take a coefficient of this power series, you want to read off the exponent of the L-part; but the coefficient ring is probably not a unique factorization domain or something, so how do you get this out?) Well, take the coefficient of t to the d and develop it as a formal power series in L inverse; then the lowest degree coefficient is this one, and from its exponent you can read off the Euler form, and then the quadratic form, for example. "Knows" really just means: you can read it off from the definition, that is, how much of the quiver is contained in the definition of the motivic quantum space and the motivic generating series. OK, now we have to do two classical examples. (There's a question? Yes, please: is there some difference equation satisfied by A_Q?) Yes. Yes, there is. I once used it in another version, where I had different twists; but yes, you can characterize A_Q by some nice q-difference equation. And we will see other q-analysis business popping up in a minute, for example taking the logarithmic q-derivative, which plays a role here and leads to Hilbert schemes; we'll see this in a few minutes. So that is somehow the right keyword: this brings you into q-analysis, or L-analysis, and you can think about all possible tools from q-analysis which you might want to apply to this series to study it. OK, so, example; but I think I need a larger blackboard for this. In this example I will do the case of the quiver which is just a single vertex, and the one where the vertex carries a single loop, because these correspond to classical things. Namely, let us recall the so-called q-binomial theorem, a theorem which, luckily for us, exists in two versions. One version, let me first write down the classical one: you take the following hypergeometric series, the sum over all d of t to the d divided by the Pochhammer symbol (1 minus q) times ... times (1 minus q to the d), and this sum factors into a nice product, the product over i from 0 to infinity of 1 over (1 minus q to the i times t). This is usually proved using just partition combinatorics: on the left-hand side you see a generating function for partitions by weight, and this is one way to prove it. Another nice identity, which also goes under the name of q-binomial theorem, is: if you put a certain quadratic form into the numerator, namely q to the d(d minus 1)/2 times t to the d, over the same Pochhammer symbol, then this equals the product over i from 0 to infinity of (1 plus q to the i times t). And if you think, well, this might generalize, maybe I can put some other power like q to the d squared into the numerator, then the number theory people will tell you no: that is a completely different story. If you put q to the d squared in the numerator instead of q to the d squared over two, then you are in the realm of the Rogers-Ramanujan identities and all sorts of really difficult number theory. So there are precisely these two values of the numerator for which you have such a nice factorization. OK. From this it follows quite easily that we can factor the generating series of the quiver with a single vertex into, and I have to read this off because I have to be extremely careful about the signs in this whole theory, the product over all i from 0 to infinity of 1 over (1 minus L to the (i plus one-half) times t). This corresponds to the first identity, and corresponding to the second identity there is the quiver with just one loop.
For that one you have A_Q equal to the product over i from 1 to infinity of (1 minus L to the i times t). OK, so a good piece of advice for getting into this motivic invariant business is to do the calculation; everything is here. For example, look at the second case: for the one-loop quiver the Euler form is actually identically zero, because you get a contribution d squared from the vertex minus d squared from the loop. So the Euler form vanishes identically, this numerator term disappears, and you just have the quotient; then you rearrange, pulling enough powers of L out, and you will see that you arrive here. It is really recommended to do the calculation. It is completely elementary, but you will see that you have to be extremely careful about all the signs, and this is a typical phenomenon for this theory of motivic invariants: all the signs and these ugly little half powers are really important. So you should do this calculation; it is completely elementary, and we will need it later. There is one more case of a connected quiver for which you can write down such a product factorization of the motivic generating function in an elementary fashion, namely this third quiver here. And if you know about quiver representations: the trivial quiver, the one-loop quiver and this one are the only three symmetric connected quivers which are not wild, because this is an A0-tilde, this is an A1-tilde, and this is an A0, and all other symmetric connected quivers are wild. And it's really a tame versus wild phenomenon that you can write down the factorization. So there exists an explicit factorization for this one too, which I don't want to recall now; I will maybe recall its categorification in the last talk. OK, so these are the only elementary examples of these motivic generating functions. (What does wild mean?) OK, good point; but first, about the board: (this is minus?) Yes, surprisingly it's minus, although in these two identities you have the distinction between the minus and the plus. Here it is minus both times, due to this funny little twist by minus the square root of the Lefschetz motive. Now, wildness. OK, let me just check the time. OK. A category C, abelian, of finite length, a very nice category, like modules over some algebra, is called wild (well, to be precise, strongly wild, but let me not explain this distinction) if you can embed the category of representations of a free algebra in two variables faithfully into C. It means that the classification of isomorphism classes in the category is at least as difficult as classifying representations of the free algebra. And it is known, for example, that if A is any finite dimensional algebra, you can embed its representation category inside the representations of the free algebra. So having solved the classification problem for the free algebra, you have solved the classification problem for arbitrary algebras; this is some kind of recursive thing, and that is why it is called wild. Geometrically, you could say: objects appear in arbitrarily high dimensional moduli. So, equivalently, moduli are of arbitrarily high dimension (not the moduli of all representations, of course). For example, if you look at smooth projective curves and the category of coherent sheaves, then you have two cases which are not wild, namely P1 and elliptic curves, because for P1 the classification of coherent sheaves is discrete.
Everything is a direct sum of torsion sheaves and line bundles O(n). And for elliptic curves you take the Atiyah classification, and the moduli are harmless: the moduli of bundles are basically just the elliptic curve itself. But starting in genus 2 you have moduli spaces of coherent sheaves of arbitrarily high dimension; that is called wild. And, for example, such an explicit embedding of representations of the free algebra into coherent sheaves on a smooth projective curve was really used in studying the birational geometry of these moduli spaces; so this implicitly appeared in vector bundle theory. (Markus, we have a question in the chat: for a tame category, is the dimension of the moduli of representations always bounded by 1?) The dimension of the moduli of, well, indecomposable or stable representations; yes, OK, you have to be precise about which moduli you consider. Usually you consider stable objects, and if you restrict to stables, then for a tame algebra you only get one-dimensional moduli. Yes, indeed: that is the famous tame-wild dichotomy in finite dimensional algebra theory. Either your algebra is wild, with moduli of arbitrarily high dimension, and you can embed the free algebra; or you are tame, which means the moduli spaces are only one-dimensional, maybe with many, many different irreducible components, but only one-dimensional. (Where can we find this equivalence?) Sorry? (Where can we find this equivalence?) Ha, OK, I will try to find a good reference. There are many technicalities around this; there really is a distinction between being wild and being strongly wild, and what I call wild here is really strongly wild. But OK, I will check for some nice references. (And I also have a slightly silly question: I look at these two functions, and one looks like a bosonic partition function and the other like a fermionic one. Why should I think of the plain vertex as being bosonic and the loop as fermionic?) OK. Well, it's an accident, actually; there is nothing behind it, I think. Namely, what you can do is exchange L and L inverse in the generating function, and this changes the Euler form, to what? To the diagonal form delta minus the Euler form. Usually, if you take the Euler form of a quiver and instead consider the diagonal form minus the Euler form, you no longer have the Euler form of a quiver, because negative numbers of arrows don't exist; the only exception is that this operation interchanges the trivial quiver and the one-loop quiver. So it's an accident. I was always hoping for some boson-fermion principle here; and we will see this boson-fermion thing again when we categorify all this in the cohomological Hall algebras, because then we really get free bosons and free fermions as cohomological Hall algebras. But in this very special case it is just a coincidence. No more questions? OK, let me check the time. Yes, OK, wonderful. So let's start actually working with this generating function A_Q. And, yes, it is always very interesting for me to see that usually you, meaning the audience of a lecture series, are more interested in the qualitative aspects, like this wildness and these structural things, whereas I am really into the quantitative aspects: I really want to do some hardcore calculations with this series, which is what I really like and what the audience usually doesn't like so much. So that's always a conflict, and we have to navigate through it. But I'm always happy when you ask lots of questions and I can explain different things, so that's actually fine.
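(Aside: before moving on, here is a small sketch one might use to convince oneself of the first q-binomial identity recalled above, by comparing double Taylor expansions, first in t, then in q. The truncation orders are arbitrary, and the same kind of check works for the second identity and, after the substitutions from the lemma, for the two quiver factorizations.)

import sympy as sp

q, t = sp.symbols('q t')
D, Q = 4, 8   # truncation orders in t and in q

# left-hand side: sum_{d=0}^{D} t^d / ((1 - q) ... (1 - q^d))
lhs = sp.Integer(1)
den = sp.Integer(1)
for d in range(1, D + 1):
    den *= (1 - q ** d)
    lhs += t ** d / den

# right-hand side: prod_{i=0}^{Q} 1 / (1 - q^i t); the omitted factors with i > Q
# only contribute in degrees q^(Q+1) and higher
rhs = sp.Integer(1)
for i in range(Q + 1):
    rhs *= 1 / (1 - q ** i * t)

diff = sp.series(lhs - rhs, t, 0, D + 1).removeO()
check = [sp.series(diff.coeff(t, m), q, 0, Q + 1).removeO() for m in range(D + 1)]
print(check)   # expected: [0, 0, 0, 0, 0]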
And if I then have to skip some of the dirty calculations, then that's fine. OK. Let me give you something which is some or halfway between qualitative and quantitative. Let me give you. OK. Now this is all about factorization. Part one. Factorization just means I want to do some multiplicative things with Aq. So for example, I want to factor it into a product. This we have seen in these two examples, that we can factor this into an infinite product. We will later see how to do this in general, but this I first want to motivate. We can also do something else. This is a form of power series whose constant term is 1. The constant term is just for t to the 0. And for t to the 0, you just have a constant term 1. Any form of power series with constant term 1 is invertible. So Aq is not only an element in this motivic quantum space. It's even an invertible element. So we can look at its inverse. And so here's a theorem. And later, when we have seen how to define dt invariance, then this theorem is exactly on a very formal level, the dT-PT correspondence. Although I can't call it dT-PT at the moment. So it just says, OK, so let's take this motivic, which we're around. Let's take this motivic series, not in the variable L, but let's twist the variable L slightly a little bit. I will explain this notation in a second. Let me twist the variable t, the formula variable t, by minus L 1 half to the n, where n is a vector. And I have to explain what this notation is. And let me multiply it by its inverse with a slightly different twist, minus L 1 half minus nt inverse. I don't want to write this down as a fraction, because I'm still working in a non-commutative ring. This motivic ring is still non-commutative because we are working with this anti-symmetrized Euler form there. So I shouldn't write it as a fraction. If you calculate this fraction, then you get a motivic generating series for Hilbert schemes, virtual motive of Hilbert scheme of the quiver t to the d. So that's the formula. And now I have to explain all the terms. And actually, I will explain the idea of the proof, which is essentially really simple, and the rest is just dirty calculations in this motivic ring. OK. If you know Tom Bridgeland's papers about motivic Hall algebras, this is what he calls the quotient identity in the motivic Hall algebra. So let me first explain the terms. What does it mean to plug in not t, but minus L 1 half to the nt into this? Well, try to guess the definition. And your first guess is the correct one. So f of minus L 1 half to the n times t is defined as, well, OK, let's assume f of t is a series with coefficients ad, t to the d, just some formal power series. And then plugging in this, this we define as sum over d. And then we take the coefficient ad, and then we take minus L 1 half to the n times d, t to the d, where the series is defined like this. OK. Let's see if this makes sense. So let's assume it's L to the n over t. Our usual variable minus L to the 1 half. That's always the standard variable here. So assume f of t is a formal power series, sum over all d, coefficient ad, t to the d. And this d is always a vector of length q0. And now I want to define what it means to have this twist in the variable t. And this twist in the variable t means you take the coefficient, you take t to the d, but then you twist to minus L 1 half n times d. n and d are both vectors of length q0. And this is just the new scalar product, sum of the product of the entries. 
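In LaTeX, the substitution just described reads as follows (this is only a transcription of the verbal definition; $t$ stands for the tuple of variables $(t_i)_{i\in Q_0}$):
\[
f\big((-\mathbb{L}^{1/2})^{n}\,t\big)\;:=\;\sum_{d} a_d\,(-\mathbb{L}^{1/2})^{\,n\cdot d}\,t^d
\qquad\text{for } f(t)=\sum_d a_d\,t^d,\qquad n\cdot d=\sum_{i\in Q_0} n_i d_i .
\]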
So that's a way to twist a formal power series in many variables. OK. And that's what we do twice in defining the left-hand side. And we use the fact that aq is actually invertible in this ring. And then we get a generating series for Hilbert schemes. And I have to explain what Hilbert schemes are. Now, again, think about coherent sheaves on a curve. And think about how you define the moduli spaces of vector bundles on curves. This you usually do by GIT. So first you find a large enough Hilbert scheme, which parameterizes all your semi-stable bundles of a fixed rank and degree. And then you're mod out by some further structure group, SL, and do the geometric invariant theory. And this kind of Hilbert scheme is precisely what we need here. So to define the Hilbert scheme, if we have such a dimension vector n, we have an associated projective representation, projective representation of our quiver q. More precisely, this quiver q has one projective intercomposable representation for each vertex, which is usually called pi. And then you just take the direct sum over all pi to the ni. And that's what you call pn. Well, we'll not see these projectives, and we don't really need to see them, so but there is a projective representation. This will somehow play the role of o of 1 in the vector bundle case. And now in the vector bundle case, to realize all semi-stable representations of fixed rank and slope, you present them all as a quotient of some large power of some o1 or on or whatever. And so this is what we do here. I define Hilb dn of q as a set as follows. It is the set of all presentations of a representation of dimension vector v as a quotient of a fixed projective representation up to a natural notion of equivalence, which you can already guess. Namely, two representations are, two such presentations are equivalent if you have a diagram intertwining the two presentations. But that's what you would call a Hilbert scheme also in the coherent sheaf setup. And that's what you call a Hilbert scheme here in the Quiver setup. And this exists as an algebraic variety using GIT. So you can realize this as a set of stable points in some large vector space. And then you mod out by the structure group. And well, now we have such a nice identity. So this is the full Quiver scheme. Yes, yes, the full Quiver scheme. Yes, yes, yes. Yeah, we're not taking care of any more numerical invariance like Hilbert's. Yeah, yeah. OK. Well, OK. To call it the Quiver scheme would somehow be a better analogy. Yes, that's right. Yes. OK. So we're performing a very simple algebraic operation here. And in terms of Q analysis, you would call this logarithmic L derivative. That would be the Sloan. Well, in Q analysis, the logarithmic derivative would be F of Q times T divided by F of T. You take some twisted version of your former series and divide it by that. That's some more analog of that. OK. OK, so that's our first identity using this ring. And actually, it tells us that what is important about this is that the left-hand side is something very formal. AQ is defined in terms of quotient stacks. So they don't have a direct geometric meaning. But the right-hand side is something totally explicit. It's really about motives of actual varieties. In particular, I noticed this just to finish the sentence. On the left-hand side, you're not a priori allowed to specialize L to 1. We have talked about these motivic measures yesterday. One motivic measure was the virtual Hodge polynomial. This is always OK. 
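(As a parenthetical restatement before returning to the specialization question: the Hilbert scheme introduced above can be written, in the notation of the lecture, as
\[
\operatorname{Hilb}^{d,n}(Q)\;=\;\big\{\,P(n)\twoheadrightarrow V\ :\ \underline{\dim}\,V=d\,\big\}\big/\simeq,
\qquad P(n)=\bigoplus_{i\in Q_0}P_i^{\oplus n_i},
\]
where two surjections are identified when an isomorphism of the targets intertwines them, and GIT realizes this quotient as an algebraic variety. The "Quiver scheme" in the exchange above is presumably meant as "Quot scheme", which would indeed be the closer analogy.)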
But to take Euler characteristic requires specializing L to 1, because the Euler characteristic of the affine line is 1. And a priori are not allowed to do this if you localize, like we do in our motivic ring, for localize by terms like 1 minus L to the i. So we are usually not allowed to take Euler characteristic. And this you can really see from this explicit form of the motivic general rating series. Better not specialize L to 1 here, because of these denominators. But taking this logarithmic L derivative, somehow mysteriously cancels out all the denominators. And what you're left with is these Hilbert schemes. And that's the honest motive of a variety of which you can take the Euler characteristic. And these are very interesting numbers. So the left-hand side is completely formal. And the right-hand side is something concrete. And that's the surprising thing, that you're doing very formal things in this motivic ring. But you get out some actual geometry. And that we will now see in factorization part two in another case. But first, Fabian's question. What do you mean we take the left-hand side's motive to the exponent of the managing vector? Not to the left-hand side? Yeah, yeah, yeah. OK, this notation which I defined here. So this is really the formal definition. So this power only makes sense in this formal substitution of the variable t, which is also nonsense. There's no single variable t. There are variables ti for the vertices of the quiver. But this is defined as twisting the coefficients of the form of series by minus square root of left-shed to n times d, where this is the naive scalar product of two dimensional vectors. So this is really just a formal definition to arrive at smooth formulas and to make the analogy to q analysis some of it clearer. OK, so that was this first instance of this factorization thing. And let's do a second one. So this blackboard will reappear in a second. So usually when you want to define some modular spaces, you have to choose a notion of stability. In this factorization part one, we were quite lucky that we didn't have to choose a stability. It's somehow implicit in defining the service scheme. But so there's no stability involved here. And now in factorization part two, we will involve a stability. Factorization part two. And so the usual modern formulation of stability would be with a central charge function. I will do the old-fashioned slope stability because this may be closer to the coherent sheaf intuition, which some of you might have in mind. So let me just assume we are given a slope function, mu, from non-zero dimension vectors to the reals. And this should be axiomatically. It should be something which looks like the slope function for vector bundles. So it's of the form degree divided by rank. So mu of d should be theta of d divided by kappa of d. And this should somehow play the role of degree and rank for vector bundles. So it should be linear functions on the Grottenie group. It should be linear on dimension vectors. So theta and kappa should be real valued linear functionals on the quiver. And of course, you need some more condition. Namely, this denominator should never be 0 for a non-zero dimension vector. So we require that kappa of nq0, except 0, is contained in the positive reals. OK. So whatever is formally necessary to define something which behaves like a slope function. So that's our abstraction of slope function. 
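In symbols, the abstraction of a slope function just fixed (only a restatement of the axioms as given):
\[
\mu:\mathbb{N}^{Q_0}\setminus\{0\}\to\mathbb{R},\qquad
\mu(d)=\frac{\theta(d)}{\kappa(d)},\qquad
\theta,\kappa:\mathbb{Z}^{Q_0}\to\mathbb{R}\ \text{linear},\qquad
\kappa\big(\mathbb{N}^{Q_0}\setminus\{0\}\big)\subset\mathbb{R}_{>0}.
\]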
And if you just take theta and kappa as the real and imaginary part of a complex valued function, then you have a central charge. So if you take theta plus i times kappa, which is a complex valued function, then this is what is usually called central charge function. And this is some of the modern formulation of stability, which usually works with central charge. But I will keep this old fashioned slope function. OK. Just as an aside. OK. Now we are given a slope function. We have a notion of semi-stability of quiver representations. As usual, a representation v of q is called stable or semi-stable if the slope decreases or really decreases on sub-representations. If the slope of any sub-representation is less than or equal the slope of the representation for all non-zero proper sub-representations. This is exactly the notion of semi-stability, which you have seen from in vector bundles or coherent sheaves in whatever setup. Just take the abstraction of not choosing the slope function a priori. And of course, well, if we make the wrong choice for theta and kappa, then this notion might be trivial. It might happen that any representation is semi-stable or just no representation at all is semi-stable. So all the subtlety is in choosing these linear functions for a given quiver. That's a completely different story. And we'll not touch upon this and just say, we want to prove a result which does not use any special feature of the slope function. Should be just true for whatever slope function you have, even if it produces a trivial notion of semi-stability. All right. Oh, OK. So finally, then, inside rd, inside the space of all representations, you have a semi-stable locus. This is the set of all representations which are semi-stable. And because semi-stability is a generosity condition, this is a Zariski open subset. Yeah? You're just trying to avoid certain bad dimension vectors of sub-representations. You're trying to avoid dimension vectors of sub-representations which do not fulfill this property. And avoiding certain sub-representations is a generosity condition. So it's open. OK? Final ingredient is that, so now that we have this relative version of this representation space, we can define a relative version of the motivic-generating function. So if x is any real slope, we define the x-semi-stable motivic-generating function. Now, this deserves a larger space. Sorry. So if x is a fixed slope, we can now define a relative or semi-stable generating function. So if x is a fixed slope, we can now define a relative or semi-stable generating function. Semi-stable motivic-generating function, which is aq, but now x-semi-stable of t. And this is, well, you can already guess what it is. Namely, you take the virtual motive, not of this whole representation space, but just of its semi-stable part. And you divide by the same structure group. OK, but now we have to bring the concrete slope into the business. So we are doing the summation only over those dimension vectors, where the slope is x. So where d is in mu inverse x. So where the slope of the dimension vector d is x or 0. OK. All right. And then the result is what is now called a wall crossing formula. And this is somehow the, it's the prototype of all wall crossing formulas you can encounter in this theory. And I will tell you what it formally amounts to, namely almost nothing. So it could be spent with d in mu. Yes, OK. 
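(In symbols, the semistability condition just introduced: a representation $V$ is $\mu$-semistable if $\mu(U)\le\mu(V)$ for all subrepresentations $0\neq U\subsetneq V$, and $\mu$-stable if the inequality is strict for all such $U$. This is only a transcription of what was said; which representations are actually semistable depends entirely on the chosen $\theta$ and $\kappa$.)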
So OK, so I should be, I should write it in a different way, equivalent but different, because this was not nicely written. So I would like to take the sum only over those dimension vectors whose slope is x, because I want to do everything local in the slope. But then I have a problem, namely the slope is only defined for dimension vectors which are non-zero. And I definitely want to have my motivic generating series having constant term 1, because I want to do multiplicative things. But OK, so let me just formally write 1 plus. 1 plus all the dimension vectors, which are non-zero, but have this fixed slope. So that's maybe a more honest way of writing this down. OK, and then the result is the wall crossing formula. And this is a big advantage of this motivic ring, that this has an extremely smooth formulation. Aq is the product over all reals of these local functions. OK. Ordered. Ordered. Ordered. It's an order product, and it's ordered by a decreasing slope. And while you have to think a few minutes about well defining ordered products over the reals, obviously. But the important thing is that all these are power series on the right hand side. All are power series starting with 1. And everything is graded by the dimension vector. And then if you think about it for a minute, you can easily see that such an ordered product over the reals, in descending order, is actually well defined. Yes? The slope only takes various international numbers, right? No, because I allow theta and kappa to be real valued. A priori. Yeah? You gain anything by doing that? Yes, because later on I would like to make a generosity assumption. And sometimes to make things as generic as possible, I want this to be real valued. So that, for example, the fiber for fixed slope is just one ray. And if I just make these reals which are independent over the rational, then that's definitely fine. So I gain a little bit. Time is almost over, but let me just show you the proof of this. That's not my terminology. Well, OK. So it's OK. Before explaining the proof, let me show you what is the wall crossing aspect. The wall crossing aspect is not this identity, but well, on this fixed category of representations of the quiver, you can look at different notions of stability. So you could have two different slope functions, two different slope functions, giving you really, really different stables or semi-stables. But this formula tells you that this ordered product is always the same. So the ordered product over this series with respect to mu is the same as this ordered product with respect to mu prime. So instability space, in the space of all possible stabilities, you have many walls defining the transitions where changing the stability function really changes the set of stable or semi-stable objects. And it means that crossing a wall in stability space might drastically change the class of semi-stable representations, but there are certain things which are unchanged, namely these ordered products, because they just evaluate to the same motivic generating function. So that's where the name wall crossing comes from. And well, formally, two minutes, the proof is formally equivalent to the existence of the Harden-Arras-Siemann filtration. So every object v admits a unique so-called Harden-Arras-Siemann filtration. And this is incredibly strong. I mean, if you look at things like the Jordan-Höller filtration, they are not unique. The subfactors of the Jordan-Höller filtration, they are unique up to isomorphism and permutation. 
But here it is the filtration itself which is unique. And if you define it correctly, if you index the terms of the filtration by the reals in a tricky way, then it is even functorial. You get a functorial filtration. There exists a unique HN filtration 0 = V_0 ⊂ V_1 ⊂ ... ⊂ V_s = V. It's defined by two properties. First property: the subquotients are semi-stable, for the fixed slope function, each for some slope. And second property: the slopes are decreasing. This already appears in the work of Harder and Narasimhan on calculating the number of points of moduli spaces of vector bundles on curves. And it's really an incredibly rigid structure on the category, because it's a filtration which you can functorially attach to a representation. OK. And this basically proves this motivic identity. Namely, these decreasing slopes are what you see in the decreasing order of the product. And everything you have to do is stratify your representation space. So your representation space R_d is a union of strata of a fixed Harder-Narasimhan type, where Harder-Narasimhan type just means you record the dimension vectors of the subquotients. This is a finite stratification by locally closed subsets. So you get a corresponding identity in the Grothendieck ring of varieties: the motive of this is just the sum of the motives of these. And these strata are isomorphic to associated bundles — you induce from a certain parabolic subgroup up to the group G_d — and then you have a vector bundle over a product of semi-stable loci for smaller dimension vectors. I don't want to be too technical about the notation here, but that's the principle. You stratify this affine space into locally closed pieces. Each of these locally closed pieces is a vector bundle over a product of semi-stable loci, up to this change of group: you induce from a parabolic subgroup to the full structure group. And this is a very simple geometric fact, which is a consequence of the existence of the Harder-Narasimhan filtration. And writing down what this means in the Grothendieck ring of varieties immediately gives you this product identity. So that's basically the whole proof. And the most important feature is buried in here: the fact that this is a vector bundle. That is equivalent to the category you're considering being hereditary. So this is more or less equivalent to the global dimension of your category of representations being one. That's the limitation of this. The global dimension of this category of representations is one. Global dimension of rep(CQ) equals one. Yes, right. Global dimension equals one. Exactly. OK. So that's basically enough for today. And next time we will start with this identity and explore these local contributions, these functions, and relate them to moduli spaces of semi-stable representations. And then finally define the DT invariants. OK. Thank you very much. Are there any questions? There is of course the standard Hilbert scheme of points of the plane, which is related to moduli of ideal sheaves — there one is dealing just with ideals. How does that relate? Yeah, yeah. So that's not related to a quiver, but to a quiver with relations.
You can consider this Hilbert scheme, which I introduced, for the quiver with one vertex and two loops. And the dimension is d and this extra datum m is just one. So this parameterizes left ideals of finite co-dimension in the free algebra in two variables. Of co-dimension of, exactly. Thank you. Of co-dimension d in the free algebra. And now inside this, you can consider those points. I mean, formally this is represented as equivalence classes of two linear operators, a and b, and a cyclic vector, v. v is cyclic. And then you can consider the closed subset, which is defined by the equation a, b equals b, a, commuting linear operators. And that shows that inside here is a closed subset. You find the usual Hilbert scheme of points in the plane. Yeah. This has actually much simpler structure than this one. You might have the same definition, but that's it. Ah, OK. Hiya, OK. The reason why this is the same as my definition is, well, the free algebra is one of these algebra which has the property that any projective representation is already free. So as projective representation, we just take the free algebra itself, and then let's have a look at what is the surjection from the free algebra to v. Well, let's take the image of one. And the image of one is then a cyclic vector. And x and y are mapped to two linear operators, a and b. Such that v is a cyclic vector for this. And that's the connection to this other way of writing the Hilbert scheme. Yeah. There are some questions online. Yes. Can you please get a bit more detail on what the Hadan-Aziman is defined to be? Hadan-Aziman type. Type, yes, OK. Yeah, OK. For time reasons, I tried to avoid this. Thanks for the question. And now I can give you the definition. OK. OK, so I'll just elaborate on this thing here and do the precise notation. So OK. So the union you take is over all decompositions of your dimension vector into non-zero dimension vectors for which the slopes are strictly decreasing. Yeah? Because that's the one condition which appears here. And then we define r dq, let me just say h and d dot. D dot is this decomposition. And h and d dot, that's the set of all representations v, such that the dimension vector of the i-th sub-crochant in the Hadan-Aziman filtration is precisely di for i from 1 to s. Yeah? And this is the i-th Hadan-Aziman sub-factor, sub-crochant. OK? So from the Hadan-Aziman filtration, I just record the dimension vectors of the sub-crochants. They have to necessarily have to fulfill this condition. And this is in fact a locally closed subset. Yeah? So existence of such an agent filtration, that's in fact a locally closed condition. That's not difficult to see. And then what you do is you have a, well, OK. So you identify this with an associated fiber bundle. Namely, inside the group GD, you find a parabolic subgroup corresponding to this d dot with respect to this decomposition. That's like the standard parabolic you find in a GL for any decomposition of your dimension. And then you have a certain vector bundle of known rank. So the rank is easily computable, but if I write it down, I make a sign mistake of known rank over the product i from 1 to s of the semi-stable locus in the representation variety of vi. And the basic idea is that if you have something in here, then to the representation v, you attach the tuple of the Hardener-Simons sub-crochets. That's the basic idea. That's what you want to do. But you can't do this literally on this level. 
You can do this literally on the stack level, but only, literally, but only tautologically. So you cannot attach to the representation its sub-crochets because the sub-crochets do not have a preferred base, linear basis. And that's why you have to perform some base change here. But that's essentially what's going on. And from this, you just compute the motive. Motive of rd is the sum over all these. Motive of gd divided by motive of the parabolic times the motive of this product times the power of the left-shed motive for the vector bundle. And if you just write this down and simplify everything, then you get to this product formula. Another question. Is there a connection between a variation of stability x and the monodermy of aqx? The monodermy of aqx. Sounds great. I have no idea what this could mean, but sounds great. It would be wonderful if it's true. So if the one who asked this question could give me some details of what this monodermy could be. It's a anonymous participant. OK, sorry. So if this anonymous participant can hear me now, if you could just send me an email, anonymous if you want, of what this monodermy could be, I would be happy to think about it because it sounds like a great suggestion. Yes. And then one more. Could you explain how to obtain the answer for aq for the one point quiver without an arrow directly from the definition of rdq? Is an rdq trivial for the quiver? Yes, it is. Yes, it is. OK, yeah. Let's do this calculation. Yeah. I was hoping for something like this for the question and also session this afternoon. But if we have time, I can do this calculation now. Then maybe also I can do this. Yeah? Let's do one or two of these tricky calculations using all these signs and half powers of L in the Q&A session this afternoon. I have myself one question. Are you going to prove the first structuralization? If you're interested, I can try to squeeze it in. Fine. OK, can do it. I have a few other questions. Can you do this for the stack of quiver on the move project card instead of the quiver? Maybe define the genetic series. No, there are tons of convergence issues. I mean, just in defining the modular space, you have all these problems of boundedness. The chosen one in the stack is not finite type. Exactly, because it's not finite type. All these boundedness problems which you have will appear here. And things which you write down naively are usually not convergent because things are not of finite type. So one has to be very careful and just first approximated by restricting to bounded values of the slopes and then study the convergence for slope going to plus minus infinity. So one has to be very careful. Any other question? Yeah, just like the first formula. So the second formula geometrically means that you have this starting on a semistrification. And does the first one have some geometric meaning? Yeah, yeah. OK. That's also more or less your question. I don't know if I should do it now or? You think you can squeeze this in the question. I will do it in the Q&A session. Yeah, I'll give the proof of this and the calculation for the trivial quiver and maybe some other calculation. We will do this, yes. OK, then let's move on to the game. Thank you very much.
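To close this part, here is a recap in LaTeX of the identities discussed above. The Euler-form twist in the first line is assumed to follow the normalization of $A_q$ fixed earlier in the lectures, and the exponent $c_{d_\bullet}$ is the rank of the vector bundle in the stratification, which was deliberately left implicit; treat both as conventions to be matched against the slides.
\[
A_q^{\mu,x}(t)\;=\;1+\sum_{\substack{d\neq 0\\ \mu(d)=x}} (-\mathbb{L}^{1/2})^{-\langle d,d\rangle}\,\frac{[R_d^{x\text{-sst}}]}{[G_d]}\;t^d,
\qquad
A_q(t)\;=\;\prod_{x\in\mathbb{R}}^{\searrow} A_q^{\mu,x}(t)\quad(\text{ordered by decreasing }x),
\]
\[
R_d^{\mathrm{HN}=d_\bullet}\;=\;\big\{\,V\in R_d\ :\ \underline{\dim}\,(V_i/V_{i-1})=d_i\,\big\}\quad(\text{the }V_i\text{ being the HN filtration of }V),\qquad d=d_1+\dots+d_s,\ \ \mu(d_1)>\dots>\mu(d_s),
\]
\[
[R_d]\;=\;\sum_{d_\bullet}\ \frac{[G_d]}{[P_{d_\bullet}]}\;\mathbb{L}^{\,c_{d_\bullet}}\;\prod_{i=1}^{s}\,[R_{d_i}^{\mathrm{sst}}],
\]
and dividing by $[G_d]$, twisting, and summing over $d$ rearranges the last identity into the ordered product formula.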
|
We motivate, define and study Donaldson-Thomas invariants and Cohomological Hall algebras associated to quivers, relate them to the geometry of moduli spaces of quiver representations and (in special cases) to Gromov-Witten invariants, and discuss the algebraic structure of Cohomological Hall algebras.
|
10.5446/55046 (DOI)
|
I wanted to correct one typo from the last lecture. It wasn't really relevant for the last lecture, but it will be relevant for this one. For the stable pairs space, there's the notion of descendants, and there are two different symbols I introduced. There's the tau. That's given by this correspondence: you pull up cohomology, you then multiply with the (k+2)-nd Chern character and push down, and the reason for that is you want this tau to have the same grading as the one in Gromov-Witten theory. That part was correct, but then in the notes I just cut and pasted into the Chern character notation, which I said is better, and I concentrated on the fact that I changed the sheaf to the complex. But of course I'm supposed to keep the k the same. In the notes yesterday I had this k go to k+2, and that makes no sense — to have the Chern character index k here and k+2 here — the actual thing is k and k. I don't know if anyone noticed that, but anyway, the notes are corrected now. And as I said, it didn't matter for the last lecture, because the last lecture was about formal properties, about rationality and things like that; it didn't really play a role what the specific meaning of that index was. But it will matter now. So now I go to the last lecture — the fifth lecture. Here I have to tell you about the stable pairs Virasoro constraints. In some sense that's the whole goal of the series of lectures, to get here. And this is relatively new stuff: most of the things in the lectures I've been speaking about — well, it's been back and forth, but a lot of it has been pretty old — but the actual work on the Virasoro constraints for stable pairs is relatively new. It's been a long project. So I want to explain what that is, how to get there, and what the formulas are, and there are some surprises. So X is a non-singular projective threefold with only (p,p) cohomology. Already on the Gromov-Witten side I had this restriction, to help us avoid the sign rules and to avoid the inclusion of the Hodge grading. And the main example — and it's the place where the theorems actually are — is a toric threefold. Now I want to write the Virasoro constraints, and by now you should have some idea what these things are going to look like, at least the shape. The Virasoro constraints will take the form of universal relations among the descendant series. As last time, when I put these descendant insertions in, a descendant series is a q-series: some sum over q to the n, where n is the holomorphic Euler characteristic of the sheaf. So in Gromov-Witten theory this bracket was a number, but now on the stable pairs side every one of these is a q-series. And by the conjecture, or result depending on where you're operating, this is actually not only a q-series but a rational function — every one of these is a rational function in q. So the Virasoro constraints here are going to take the form of certain relations between rational functions in q. And I think it's fair to say that the algebraic form the Virasoro constraints take on the sheaf side, on the stable pairs side, is simpler than for Gromov-Witten theory. But still they require some terminology to explain. So, the constraints.
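Going back to the correction at the start, in formulas: with $\mathbb{F}$ the universal object on $P_n(X,\beta)\times X$ (the universal sheaf, or the universal stable-pairs complex in the $\mathrm{ch}$ convention — the correction above concerns exactly this choice together with the index) and $\pi_P,\pi_X$ the two projections, the two descendant conventions are, schematically,
\[
\tau_k(\gamma)\;=\;\pi_{P*}\big(\pi_X^*(\gamma)\cdot \mathrm{ch}_{k+2}(\mathbb{F})\big),
\qquad
\mathrm{ch}_k(\gamma)\;=\;\pi_{P*}\big(\pi_X^*(\gamma)\cdot \mathrm{ch}_{k}(\mathbb{F})\big),
\]
so the only mismatch between the notations is the shift by two in the Chern index. This is a sketch of the convention, not a substitute for the precise definitions in the papers.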
Before I write down the formulas, and how to prove them in the cases where we can prove them, I would like to say some general things first. The constraints are conjectural in almost all cases — that's where we are in the subject. The main general theorem is that the constraints hold whenever X is toric, and unfortunately not for every single constraint, but for what are called the stationary constraints. I'll explain both of these during the lecture — what toric buys us and what the stationary constraints are — but that's the case where it's proven. This part was proven in a paper with Miguel Moreira, Alexei Oblomkov, Andrei Okounkov and myself in 2020; you can find it on my web page or on the arXiv. If you're interested in the case where the cohomology of X has an interesting Hodge decomposition, then there was a subsequent paper by Moreira where he explains how to write down a proposal for the Virasoro constraints under the assumption that X is simply connected, so you can have some Hodge decomposition there. And the last topic I'll talk about today is about this more general picture. Well, the Virasoro constraints here are about descendant integrals on stable pairs against the virtual class. If you find yourself a little distant from that world, there's a way to connect it to even more concrete algebraic geometry: since this is a theory about threefolds, you can dimensionally reduce it to a theory about surfaces. And when you do that, the Virasoro constraints constrain tautological integrals on the Hilbert scheme of points of surfaces. This is part of the topic of Miguel's paper, where he actually proves the result for all simply connected surfaces, so I will explain that a little bit at the end. If you're interested in Hilbert schemes of points of surfaces, this gives some new results about tautological integrals there that come out as just a corner of the stable pairs theory, so I'll explain that also. Okay. Before I tell you how to write the Virasoro constraints: the formulas depend on some algebraic constructions, so the first thing is where we are going to work. We're going to work in an algebra of descendants of X. This is a very simple algebra to think about, and silently we've already been working in it: the generators are these symbols with the Chern characters, and inside the parentheses is any cohomology class of X. More or less you take the free polynomial algebra on those symbols, but of course there are some obvious relations: if you scale by a rational number inside the parentheses, that's the same as scaling outside, and if you add inside the parentheses, that's the same as adding outside. So as I said, this is the kind of algebra of descendants that in some sense we've been silently working in anyway, and this is just making it explicit. Okay. In order to define the Virasoro constraints we need three algebraic constructions related to this algebra. The first is some derivations. For every k greater than or equal to — sorry, for every k greater than or equal to minus one — we define a derivation. The derivation is a derivation over Q, so it annihilates constants, and it's a map from the algebra to itself, so all I have to do to define it is to give you the action on these generators.
And that's given by a very simple formula. If I take this R_k and I feed it one of these descendants, then what I get is some combinatorial factor, and then the Chern index on the descendant promoted by k. This combinatorial factor depends on k, of course, and it depends also on the degree grading of this cohomology class. Here it's always a complex degree, since we have (p,p): everybody is even, and that's the complex grading. If you want to consider a more general Hodge decomposition, then you have to pick a Hodge grading here; that's discussed in Miguel's paper. If you're nervous about the minus one case: for k equal to minus one the product is empty, so the factor is just one, and then this doesn't promote the Chern grading but demotes it. And the convention is that you can never have negative Chern characters. So that's the definition of these derivations. They're pretty simple, actually, pretty easy to think about: they just act as derivations, and you just have to remember to put in the right combinatorial factor. That's the first one — that wasn't so bad. Then there's a certain kind of diagonal splitting; this is more notation. The generators look like this, as we've discussed — a Chern character of a gamma, these are the generators of the algebra — but for notation we want to introduce a different symbol, where I put two Chern characters together with a gamma, and I just have to tell you what I mean by this. It's some kind of coproduct with the diagonal, so I have to define it — this is the definition. What you do is you take the Künneth decomposition of the diagonal, which looks like this: there are some left factors and some right factors. And you just put them into the Chern characters — the first one takes the left and the second one takes the right — so these are now generators, and this makes sense as an element of the descendant algebra. That's the second point, so we've already gotten two thirds of the way. Then there's some more notation. This notation is a little more complicated, but again there's usually not much going on — I want to define this crazy thing here. We've already seen what this ch_a ch_b of gamma is, the thing that splits. I want to sum over the Künneth decomposition, but I want to now weight this sum by various factors. So when I write this expression, what I mean is: I sum over the Künneth decomposition with the left and right as I did before, but for each term I have some combinatorial factor for the left and the right, and that depends on the degree of that term. And then finally there's a sign. So this is just shorthand for that. You can look at the slides — it's not necessary to absorb all the notation at once — but I'm just trying to give you the notation so that when I write the formula it's small. It's complex degrees and factorials, with the convention that factorials with negative arguments are defined to vanish. And then the operator that we want, with all this notation, is just a simple element — well, it's not so simple maybe, but it's an element, and it operates by multiplication by that element — and I tell you what this element is: T_k. I just explained what this symbol means here, where I take the Hodge — sorry, the Künneth decomposition of the diagonal, with c_1, and I split it here, and then I weight the summands by these factors and I sum. Anyway, this has now been completely defined; it's a particular element, a single element of this algebra, which depends on k.
And then the sums are some rules about that I wanted to have the a greater than equal to zero a is in B is greater than equal to zero. And then of course this is the interesting part that the, what I'm submitting here are the first and second churn classes of the tangent bundle. Okay, so we have a that was kind of fast we have a quick review. And there's a derivation and the derivation is very simple that just basically this case bumping the turn index with some factor, and then more complicated. So that was the RK, and then more complicated if I do it backwards is this element we have to define this particular element for every K a particular element, the algebra, and this notation defines this element. So it's first of all I have to some over a plus B equals K plus two and a plus B equals K. And then inside the sum is are these symbols I've defined. So inside the sum is a second sum and the second sum is over these symbols this community composition of the diagonal times this fellow that I submit to it. And then I have to wait every one of the terms by some sign and some combinatorial factors. Okay, so as I said you can look at this if you're if you're curious about that but this is just a formula for a particular element. And once you get once you get used to this notation it's not really that scary. And that's it actually that's all I have to tell you and now that the the Versailles constraint operator that's LK. It has this element. So if I ever write an element that's an operator by multiplication by the element that's a derivation. And then this is a composition multiplication by this very particular element with that's the point class I should say that point class. So that's a particular multiplication by this particular element then composed with the derivation, then a factor. So that's the whole operator. And I would say that this is simpler, a simpler thing than that operator that I had defined in lecture two for the first time in my experience and come over in theory. I mean it is true that this element has a certain complexity. I would say this derivation is as simple as possible. And then after all here we're just multiplication, multiplying by a simple descendant of a point and then another derivation. So in some sense if you want to look where's the complexity this operator perhaps it's here that's the most complicated term. But anyway it's not such a bad. And the very sour conjecture in this form I mean this is this this conjecture goes back many many years to roughly speaking the origin of this conjecture was once we understood this was I mean a long time ago and we're in Princeton that once once Andre and I understood that there's a correspondence and they're at the same time the very sour constraints held in grommet witten theory, then just just by just, it just must be that they were very sour constraints on the, on the descendant side, and we were at that in those days there were the ideal sheaves. And we didn't have enough knowledge to actually do the transformation then. But still we could just guess that we could just you could just. Once you know that there are these constraints you should just try to guess them and we did had guests them in many cases. That's the beginning, but the fight the formulation here is formulation from 2020. And it says it affects has only PP comology, you get to pick any curve class you want. You get to pick any element of this algebra, then you submit this element to that operator, and you put the bracket and this is zero. 
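To collect the notation and the statement in symbols (the combinatorial constant in $\mathrm{R}_k$ and the precise element $\mathrm{T}_k$ are left implicit here, exactly as the lecture leaves them to the slides): the descendant algebra is
\[
\mathbb{D}^{X}\;=\;\mathbb{Q}\big[\,\mathrm{ch}_k(\gamma)\ :\ k\ge 0,\ \gamma\in H^{*}(X,\mathbb{Q})\,\big]\Big/\big(\mathrm{ch}_k(\lambda\gamma)=\lambda\,\mathrm{ch}_k(\gamma),\ \ \mathrm{ch}_k(\gamma+\gamma')=\mathrm{ch}_k(\gamma)+\mathrm{ch}_k(\gamma')\big),
\]
the derivations act by $\mathrm{R}_k(\mathrm{ch}_i(\gamma)) = c_k(i,\deg\gamma)\,\mathrm{ch}_{i+k}(\gamma)$ with the convention $\mathrm{ch}_{<0}=0$, the split symbol is defined through the Künneth decomposition of the diagonal,
\[
\mathrm{ch}_a\mathrm{ch}_b(\gamma)\;:=\;\sum_j \mathrm{ch}_a(\gamma_j^{L})\,\mathrm{ch}_b(\gamma_j^{R}),\qquad
\Delta_*\gamma\;=\;\sum_j \gamma_j^{L}\otimes\gamma_j^{R}\ \in H^{*}(X\times X),
\]
and the conjecture asserts, for all $k\ge -1$ (the range for which the operators were defined), every curve class $\beta$, and every $D\in\mathbb{D}^{X}$,
\[
\big\langle\!\big\langle\, \mathcal{L}_k(D)\,\big\rangle\!\big\rangle^{X}_{\beta}
\;:=\;\sum_{n}\,q^{n}\!\int_{[P_n(X,\beta)]^{\mathrm{vir}}} \mathcal{L}_k(D)\;=\;0,
\]
where $\mathcal{L}_k(D)$ is integrated via its geometric realization as a class on $P_n(X,\beta)$. This is only a symbolic summary of what was just said; names like $c_k(i,\deg\gamma)$ are placeholders for the factors on the slides, not notation from the papers.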
So this is the descendants series and it's just always zero. So I'm kind of miraculous conspiracy among the send and series for three folds, stable pair descendant series. They conspire in this way to always give you zero. And that's this is easy to play with so well I mean certain part of it's easy for example, x equals p3. So if I pick whatever D I want of course if I pick some D that has the wrong dimension and everything then this thing is identically zero. So you don't want to pick a stupid D, but if you pick, if you pick a D that's interesting so the dimension constraint satisfied. So for example in this example for x equals p3. I pick this D I pick L one, I get to pick also which very sorry operator I want to pick some L one. So I pick this D equal this particular choice. And then I apply L one to D, and that's just mechanical, because I've just given you the specific rules. So it's some derivation some multiplications. And if you follow those rules you'll find that it's this operator applied to D. And then you can just do it. And this is the left hand side is what you get. The left hand side is this LKD inside brackets I've written it into three terms because there's three terms here. And what this what this conjecture says is that if I take the descendants series for these three with these with these combinatorial factors that should sum to zero. And this is about lines in p3 so it doesn't it's not so hard to check this by hand. And it turns out that for for these line invariance that actually the series is not a rational well it is a rational function but in fact the polynomial. So it's three terms in each one of these. It's a very simple geometry lines in p3. And each one of these numbers can be checked because it's each one of these numbers is some integral on some stable pairs space. And you can just, well these numbers you check them and you do this precisely this combinatorial weights and you get zero. It's a, it works perfectly in this case. We have a question. Yeah. I think it's known to be false if X isn't. I mentioned pray. I mean, I think that for the most part. So I about dimension to I told you that I'm going to make some comments. And a version of this is just exactly true for dimension to dimension one I haven't thought about, but dimension for it's not clear what it means. But you know, dimension for it's not so clear what there's some virtual class here I mean you could hope for maybe the question you're asking if I can interpret it is that maybe you could you could try to ask whether this whole thing is just zero specifically. And that stuff is not going to be true although I'm not prepared here to give you an example of that but then the grimoire within side. It's not the case that. Yeah, I mean, usually it's not the case that they vanish without the virtual class. And if you have in dimension for we don't have a virtual class in general so it's not clear what it means. I don't know if that's an answer. But in dimension to it's definitely true and interesting. And I will mention that at the end. Okay, so that's the, that's the conjectural part, and that's the formulation. And as I said this is really it's really kind of mechanical and not at all hard to wrap your head around, you just take any acts and I said for for the formulation I'm picking here you should have PPComology. Then there are these operators on this descendant algebra. And they, as I said they're, they're pretty easy to understand them. 
And then once once that once you have that somehow later loaded into your brain, then this conjecture starts giving your relations among the different the different descendants series and it gives you lots of them. And they're non trivial. We have one whole question about this example I think the question is, how is this some zero, I guess somebody actually tried to compute it. I didn't get zero. Well, I don't know it's like you took your minus three plus one plus two that zero six minus 10 plus four that zero minus three plus one plus two zero. Looks pretty zero to me. I think her last exponent to should be a sorry. That's a good point. Now at zero. Yeah, maybe that brings down the whole conjecture. Sorry, that these are each one of in this line case you get one number for the first three exponents. Thanks I'll fix that on the slides. Okay, so. Yeah, as I and I will say that also in Miguel so you, I think maybe another question you might think is that what happens when it's not PPCom so when it is PPComology the tort cases. There's a bunch of examples like this of course, as the degrees and things get harder and then it becomes harder to compute these and Alexei has some software Alexei Oblomkov has some software but of course the software doesn't go to infinity. Eventually, the computer breaks eventually, but for the tort cases you can check a lot. And so we're pretty confident in the tort case because a lot of checks besides there's a theorem which I'm going to explain in a second, but in the case where it has interesting Hodgdy composition, then the examples are much harder to check because there's not so many good tools. And Miguel Morera has done some basic checks for the cubic three fold, which has some you know some interesting Hodgdy composition the middle, and it passed those checks in that formula. And also in the on the grandma of inside. The number of cases where the of your sorry constraints constraints are proven where there is some interesting Hodgdy composition is basically just for curves. I mean there's some kind of case some trivial cases but the interesting cases just for curves, because it turns out. And besides, it's pretty hard to compute for such varieties, because they don't have this localization and there's when for two reasons particularly basically is that it means that the variety does not have some kind of good tourist action. Well, certainly not a tourist action with isolated fixed points. And that's one thing. That's one tool that you can't use immediately, or at least off the shelf. And the second is then another tool that one likes to use is degeneration taking break the variety. And typically what happens if the variety has some interesting hard structure you can't you can often break it, but there'll be some vanishing cycles and you'll lose some of that comology and then you'll you'll lose it'll just run away from you in the degeneration the generation will tell you about some stuff but it won't, it won't tell you about the cycles you've lost in the generation. Okay, so this was an aside so what's the theorem, the theorem from in our paper last year says that if access Toric. And of course I'm always assuming nonceing their project is, then these years are constraints hold in all for all K for all x Toric for all curve classes, but the only restriction is that the D you submit must be stationary. And what that means is defined here is it's the sub algebra defined by all descendants. So there's no restriction on the. 
The return index, but there's only one restriction on the comology index and that's its degree has to be greater than zero. So what does this mean, it just means that this, this, this thing can't be one can't be the identity class. So this says that if if as your, the red D that you submit as long as you don't involve descendants, don't involve anything like descendants of identity class, then it's fine then it's proven. Of course, we think it's true also for the descendants of identity class and you can check it by hand. So I'm not, I'm not saying that there's anything wrong with the sense of identity classes are proof doesn't capture them. Okay, but this is a lot of cases that basically it's almost all of them is just you just can't put the sentence that any class which is a shame and I'll explain why that that that one runs away from us. Okay, so there are any questions about this theorem in particular this theorem does cover this example, meaning that independent of this kind of blue check this thing is covered by the theorem. So shouldn't identity be the simplest case. Yeah, yeah, that's that's it's a it is a, it is a reasonable philosophical position that identity is a simplest case but it's also equally defensible that enemies the most complicated case and I'll make the second argument. And that is that, if you look at this threefold it's some, you know, big space, you know Maria it's like Russia. And the point can go anywhere at once. Now if I give it a cycle, then it has to live in a smaller place. So that somehow it's movement is constrained if I put the identity. If I put the identity condition then it can move anywhere it wants. And from this point of view, I've given it the most freedom and then and and from the point of view of controlling it, it's the least control. I don't know if you accept that argument but that's, that's actually some sense of the relevant one here. So it's, you know, this notion stationary comes from the study of P one. It's like an old word that came from how we used to describe Andre and I describe the study of P one P one only has two classes, it's the identity class and the point class. And if you remember in the earlier lectures I gave you some specific rule about how to deal with the sentence of the point class, how to relate them to her wits theory. And, and it is the case that, I mean if you're, if you're faced with a calculator and grandma wouldn't invariant of P one you could have some descendants like that needs as a point, the point ones are the easier ones to deal with. We know how to deal with everything now but there is some kind of staging of complexity and the points of the easier one and the descendants of identity the harder one. And it's precisely because that I that the center that I need the point gets to move wherever it wants. If you take a design of a point that the mark point has to be fixed that point, and you control that that control it a lot, in particular if you take a point that you can tell it where to go etc. Okay, I think I said enough but it's certainly true that often one feels that the identity is the easiest come all the class what happens here it's the hardest. Okay, is that enough. So, how this proof works for the statement is it's kind of three steps and so the first step is that so we say with exit torque threefold. And the first step I already told you is that the grommet wouldn't get the grommet wouldn't various our constraints hold. 
So this is because every toric threefold is semisimple — that's a result of Iritani, I think — and for every semisimple cohomological field theory, well, you have to use the classification of semisimple cohomological field theories given by Teleman, and Teleman's classification proves that the Gromov-Witten Virasoro constraints hold. Okay, so that's the starting point. Then the issue is that, if we believe the theories are equivalent, we have to somehow transport these Gromov-Witten Virasoro constraints to the stable pairs side. How is that going to be done? You need to have a Gromov-Witten/pairs descendant correspondence. This is a promotion of the Gromov-Witten/DT correspondence that I explained in the previous lectures — you have to have a full promotion of that. And this has taken a really long time to develop, I would say maybe 15 years or something like that. First prove this thing exists — well, first you imagine its form, then you have to prove it exists. In some papers with Aaron Pixton starting around 2012, we conjectured a particular form and we proved that form holds in the toric case, but we didn't compute the coefficients of that form of the correspondence. Then, returning to the actual problem of computing the coefficients, some progress was made in 2019 with Alexei and Andrei. The idea there was that by then we know the correspondence exists, we know the form, and we know that this abstract correspondence is true for toric varieties; we just need to compute the coefficients. And to compute the coefficients, if you're clever enough, you can learn a lot by doing this example, this old example: P^1 with two — the total space of two line bundles on P^1, actually the trivial ones, so you take P^1 cross C^2. That's a pretty simple thing, and you have to compute it completely, in some sense, on the Gromov-Witten side and also on the stable pairs side. We know how to do those things very well, but you need to keep track of the equivariant aspect of this. It turns out that to do these things very well, in some sense you have to take opposite weights here. These opposite weights look like a drastic thing, but roughly speaking you do the descendant calculation with opposite weights on the Gromov-Witten side and on the stable pairs side. What happens on the Gromov-Witten side is you get that entire theory that I discussed, the Gromov-Witten theory of P^1 that I had solved with Andrei years before — all those formulas go in. On the stable pairs side it's actually a little bit easier. So anyway, you have these three things: the general form of the transformation, which is proven but whose coefficients aren't known; the calculation on the Gromov-Witten side; and the calculation on the stable pairs side. And this gives you mathematical constraints on what that correspondence is. It turns out that the outcome is that we can calculate that correspondence, except we lose control of the descendants of one. The descendants of one somehow sneak out of this, and in some sense they sneak out because of this (t, minus t) local specialization. So that's some kind of summary. If you want, you can read the discussions in the papers, but roughly speaking that's the development: first you have to prove that there exists a correspondence with undetermined coefficients, but that nevertheless exists.
And the second thing is you have to compute it, and the computing uses all sorts of tricks, I mean some years of tricks, but the outcome is that the descendants of one escape those tricks. So that's only the second part then there's a, then there's something that one has to do which is kind of huge thing. In some sense algebraically is that there's some crazy correspondence which I'm going to show you the formulas for it. And then you have to take the virus our constraints which have their own complexity, and then you have to transform them by this correspondence. Then you get something that you know is true and then you hope it's this one. We'd conjecture this form long before we've done these calculations from just data. You have to hope that you're going to end up on this. So you have to go through this, this path and then you have to hope you're going to end up exactly here. And that's exactly that's that is what happens but the calculations are kind of long and I will show you. I'll show you the complexity that happens I mean somehow as I was trying to say is that these various our constraints on the on the sheaf side are algebraically simpler. These formulas are kind of nice and simple. The on the one side is somehow complicated and the correspondence is very complicated to somehow the correspondence undoes some of the complexity of the way the constraint the various our constraints are born and grown up with in theory. And that's why this derivation is rather complicated. I want to show you some of the formulas. We have to show you some of us here. But maybe before I do that, I wanted to make some kind of other point which is for this whole subject this proof is slightly backwards. So, this is another point I should have made earlier is that many people who work on these things would view that the geometry of the stable pairs to be simpler than grandma wooden theory. And in fact that's the one of the nice facts about this correspondence that there's various things in stable pairs theory that one likes to study and you can prove and then you can move them back to grow up with in theory and and many results this is this idea is used. So one could hope that the descendant behavior and stable pairs is simpler than going with in theory and actually the form of various our constraints confirms that belief. And so given that it's a little bit strange that the proof has to go through going with in theory. And so that's, I would say that I would like to run the whole argument the other direction to have some geometric reason for the descendant court, the descendants to respect the various our constraints on this on the stable pair side then use that rather to prove the grammar but I don't know how to do that at the moment all the arrows are pointing down. I don't have the errors turn the errors around but I'm not able to. So that's the main challenge here proved various our constraints are stable pairs directly using the geometry of the modus face of stable pairs. And this is a problem that you can just, there's something there's some relief when you can think about this problem, especially I think that if you're, if you haven't gone through the past 20 years there's some relief when I put the form in this form because you can just start here. You don't have to listen to the first five lectures. You just start with this constraint operator and think about stable pairs. The first four lectures are. But I don't know how to do it. A couple of questions. Yeah. 
One of them is: is there some integrable hierarchy around? So, yeah, I don't, I mean, it is the case that if you look at, say, the Hilbert scheme of points, there are all sorts of algebraic structures there. I'm not sure exactly what you're asking. Maybe you're asking: do these operators satisfy the Virasoro bracket? They are close to doing that; that's a separate topic. These constraints are very close to satisfying the Virasoro bracket, but you have to correct them slightly, because the correspondences don't quite respect it. I don't know if that's where your question was going. The other reading of the question is that the Hilbert scheme of points has all sorts of algebraic structure on its full cohomology, and I hope that maybe these Virasoro constraints for stable pairs are a shadow of some parallel algebraic structure on the cohomology of the moduli of stable pairs. There are difficulties in going down that path, although it seems like an interesting line of thought. One of the difficulties is that the moduli spaces of stable pairs, as actual spaces, are kind of terrible; it's only the virtual class that's good, and it's not the case that I know how to promote that virtual class to a whole cohomology theory. That's the direction of refined invariants, and there are ways to do that when X is Calabi-Yau, but that's precisely not the case we're studying here. Other than this, I don't know, but it's a good question, I think. I think it has to do with the argument that you just gave for the proof: is there any hope to instead transfer the argument, the one using semisimplicity and the classification, directly to the stable pair side? Sorry, you're speaking too softly; I heard some words but not all of them. Is the question in the Q&A? Okay, maybe I can try. Yeah, I can't read the Q&A now unless I stop my screen share; I'm telling you, my Zoom setup is not optimal. Can you transfer the argument proving the Virasoro constraints, using semisimplicity and the classification, directly to the stable pairs side? I get it, yeah. We don't know how to do that; that's actually a good question. The way in which Gromov-Witten theory has extra structure is the genus grading: semisimplicity is about genus zero, and then genus zero controls higher genus using the classification. That's a structure that lives in Gromov-Witten theory: you start with the simplest case, genus zero, and then you move up, genus one, genus two. The stable pairs or sheaf theories are not like that. I don't know how to formulate a classification theorem there. Gromov-Witten theory sits over the moduli space of curves, and the moduli space of curves is something independent of the target. You can try to think about things like this, and it's profitable and used to prove some things, but if you take a target X and some moduli space of sheaves on X, like stable pairs, which is a sheaf with a section, what structure would you forget, and which piece is the simplest? There's no sense in which there is a simplest one. It's not like there's a genus zero and a genus one; they all come at you at once.
There is the Euler characteristic index, but you don't really even know where it starts; it starts somewhere negative. You could make a case that the lowest one is the simplest, but the lowest one depends on beta, so it's not uniform the way genus zero is for curves. That's the first thing. The second thing is that the moduli space of maps maps to the moduli space of curves, and that's a crucial part of the idea of the classification used there. It is an interesting thing to think about that kind of idea for stable pairs: a stable pair is a sheaf with a section, so what structure could I forget? Well, I could forget the section. So stable pairs should, in some sense, map to sheaves, and that's a very profitable line. Of course it immediately becomes more delicate, because then what is that space of sheaves: maybe an Artin stack of sheaves, or maybe you impose stability conditions. But this is a useful line to think about, stable pairs mapping to a moduli space of sheaves, and it's used, for example, to prove the q goes to 1/q invariance in the Calabi-Yau case. That's the map that's available, but it has a very different flavor than the map from the moduli space of stable maps to the moduli space of curves; for example, it still depends on X. So I don't know how to do what that question is asking, which I interpret as finding, on the sheaf side, all the parallel structures used on the moduli-of-curves side, and then running the same argument. I don't know how to do that. Okay, that was a long answer. So, for this lecture series, the main challenge is to prove the Virasoro constraints for stable pairs directly, using the geometry of stable pairs, and I tried to present this as an appealing path to think about, because, as I said, if you're entering the subject for the first time you can work on this problem without studying all of that history of Gromov-Witten theory. A sub-challenge is to control the descendents of 1. And the advantage is that if one can find some kind of geometric argument, then maybe that argument won't use semisimplicity at all; maybe we'll be able to prove the Virasoro constraints in all cases, not just the semisimple ones. That's the practical hope. Okay, so then I want to say a little bit about this correspondence. This is going to hurt a little bit. In fact it hurts so much that I didn't want to write all the formulas out myself, because they're already written, and it actually took a long time to get them correct. This Gromov-Witten/stable pairs descendent correspondence, in the form we use, is a rule, just an explicit rule given by explicit formulas, and that rule is going to tell you how to pass between the two sides. Can you see this, or should I make it bigger? Okay, yes. So it's a rule: this is the stable pair side, and this is the Gromov-Witten side. Sorry, I don't know what happened there; I think we lost the slide for a moment. In the original Gromov-Witten/DT correspondence, what we had, roughly speaking, was that if you take no descendents, because it's Calabi-Yau, then the stable pair side is just the same as the series with no descendents on the Gromov-Witten side.
And then all you have to do is relate the q here and the u here, and that's given by this change of variables; you can't make a rule simpler than that in our current language. But now we have to find a rule, this correspondence, that says: if we start with an arbitrary descendent on the stable pair side (these are our Chern character descendents, in the same notation except there's a tilde; the tilde is just the old one shifted by a little bit, which turns out to make the formulas a little better, nothing to worry about), then the rule has to turn it into some descendent polynomial on the Gromov-Witten side. And it has to satisfy the correspondence: when I apply the brackets on both sides, I should get the same series after this change of variables; there are also some auxiliary factors in q and u. That's the challenge: to find some universal rule that transforms descendents on the stable pair side into descendents on the Gromov-Witten side. As I said, there has been a lot of thought about this: MNOP II had the first ideas about the existence and structure of such a rule; in my papers with Pixton we explained a particular universal form the rule takes, conjectured that form for all threefolds, and proved it in the toric case; and the last step is to actually calculate the coefficients of that form. That's the goal, and that's what's needed for the transfer, because once you have this rule you're really in good shape: the rule tells you that if you want to compute stable pairs descendents, and you know how to compute Gromov-Witten invariants, it just solves your problem; it tells you how to do it. So once you have the rule you can take knowledge about Gromov-Witten descendents and move it to stable pairs, and the knowledge we want to move in this direction is the Virasoro constraints in Gromov-Witten theory. Okay, so what is the nature of this rule? The formulas look complicated, but after thinking about them for a long time, they could be worse; let's put it that way. These are the descendents in Gromov-Witten theory, and the first idea is that you should switch to a different basis. This is an old idea; I think these first appeared in papers of Getzler, and sometimes this is called Getzler's renormalization. There is a specific change, given by what I circled in red here: if you want to know these taus, these formulas explain how to write them in terms of these "a" descendents. It's not such a big deal, and the formulas aren't too bad: some summations with combinatorial factors. In some sense the reason for this is that it was known, and that's why they were introduced, that the Gromov-Witten theory of curves is best written in terms of these a-operators, and as I explained in the overview, at some point we are going to use the Gromov-Witten theory of curves to nail the coefficients of the descendent correspondence. So this is the first idea.
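Before the second idea, here is a minimal schematic of what the rule is required to do, assembled from the description above. This is a sketch only, assuming the conventions of MNOP II and the papers with Pixton; the precise prefactors, signs, and the exact shift hidden in the tilde are in those references, and the change of variables is the standard one as I recall it.

```latex
% GW/PT descendent correspondence, schematic form only.
% Change of variables relating the stable pairs variable q and the
% Gromov-Witten genus variable u:
-q = e^{iu}.
% For stationary classes \gamma_i \in H^{>0}(X), a universal rule
\widetilde{\mathrm{ch}}_{k_1}(\gamma_1)\cdots\widetilde{\mathrm{ch}}_{k_r}(\gamma_r)
\;\longmapsto\;
\overline{\mathrm{ch}_{k_1}(\gamma_1)\cdots\mathrm{ch}_{k_r}(\gamma_r)}
= \sum (\text{universal coefficients})\,\prod_j \tau_{m_j}(\delta_j)
% is required to satisfy, after the change of variables above and up to
% the auxiliary factors in q and u mentioned in the text,
\Big\langle \prod_i \widetilde{\mathrm{ch}}_{k_i}(\gamma_i)\Big\rangle^{PT}_{\beta}
\;=\;
\Big\langle\, \overline{\textstyle\prod_i \mathrm{ch}_{k_i}(\gamma_i)}\,\Big\rangle^{GW}_{\beta}.
```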
And the second idea is this: the rule takes a stationary descendent on the stable pair side and moves it to the Gromov-Witten side, and that rule is this math frak C, I don't know how to pronounce the Fraktur, this C-dot rule. That's what we're going to define, and the way to define it is, first, you sum over all ways of interacting. In MNOP II we have a discussion of this rule in terms of a kind of chemistry, where the Chern characters are little particles and they interact with each other, so here you sum over all interactions. The open dot is the connected interaction, and that is given by three explicit formulas: one Chern character can have a self-interaction, two Chern characters can interact, or three can interact. Why only these three? Because the insertions are stationary, so the cohomology grading cannot be H0; it has to be H2, H4, or H6. And one of the nice features of these interactions is that they are only supported where the cohomology classes actually meet nontrivially. This is another answer to Maria's question: if I want to put in the identity class, then I have to consider infinitely many chains of these, because that cohomology class can always keep interacting. So from this point of view it's dramatically easier to avoid the identity class: once the cohomology insertions are stationary, at most three of them can interact, so I only have to solve for the correspondence for these three things, and here are the explicit formulas. As I said, it took a lot of work and an incredible amount of attention to get these formulas correct; I would say it took years, maybe a decade or something like that. We weren't working on it all the time, but I would say that's a fair statement, roughly speaking. So this tells you how to take the Chern characters and move them to the Getzler basis, and the first set of formulas tells you how to move from the Getzler basis to the taus. The outcome is a theorem, which says that in the toric case, and conjecturally in general, that is the formula for the correspondence. If you're coming to this for the first time you think, okay, that's crazy, these formulas are terrible, they're not going to be useful for anything. I think that's a fair first view of these formulas. But in fact, when you really get into the subject, they're not so bad, and they are useful. To prove that they're useful there is the last step: in some sense these formulas prove themselves useful, because using them you can complete this last step; you can transfer the Virasoro constraints from Gromov-Witten theory, which have their own complexity, to the Virasoro constraints I wrote for stable pairs. As I said, that transfer is done in this paper from last year, and if you look at that paper there are a lot of rather long calculations which use these formulas.
Basically, you have to take the long formulas for the Virasoro constraints in Gromov-Witten theory and crash them against these formulas for the transfer. The inescapable feeling in doing that is that the correspondence is somehow undoing some of the complexity in the original Gromov-Witten Virasoro constraints. But it's a pretty serious algebraic calculation, which also succeeded only after some years. So that's the end of the proof of the theorem. That's almost the last thing I want to say, but I do want to say one more thing, which is about the Hilbert scheme of points; that's in Miguel's paper. So, you have a question. How do the Virasoro constraints pass through cutting and gluing, degeneration, with respect to a threefold? That's an excellent question, and the sad answer is that we don't really know very well. The fundamental way to ask the question is this: in Gromov-Witten theory, if you take a smooth variety, that gives the usual Gromov-Witten theory, and there are Virasoro constraints for that usual theory. But over the years a log Gromov-Witten theory has also been developed; the simplest case is a variety relative to a divisor, the relative Gromov-Witten theory, now called log Gromov-Witten theory, and in log Gromov-Witten theory you can take more complicated targets. I think the fundamental way to ask the question is: is there a way to state the Virasoro constraints in log Gromov-Witten theory? The answer is that we don't know, but there are two pieces of evidence. The first is that in the proof that Andrei Okounkov and I did for the curve case, dimension one, a crucial step is to formulate the Gromov-Witten invariants for log curves: not just curves, but curves with a log structure at one point. That was a crucial step; we formulated it and it's part of the proof, one of the things we use. So the answer is yes in dimension one, and it's crucial. In higher dimensions we don't know how to do it, although there has been some effort. More recently there have been ideas about using negative tangency conditions; various people have been working on this, Honglu Fan and Longting Wu among them, to try to write Virasoro constraints, and I think there has been some success in genus zero. But the answer is that we don't know how to do this; I don't know how to do this. It would be great if someone wrote down Virasoro constraints for an arbitrary log target. I think that would help in the proofs, help in the idea of proving it; in some sense, that's one of the reasons why there are not so many cases beyond the toric case. Okay. And the parallel statements on the stable pair side: I haven't really discussed relative stable pairs theory, stable pairs relative to a divisor. This is actually one of my favorite parts of the whole subject, a threefold relative to a divisor, and there is some kind of miracle where the Gromov-Witten/DT correspondence can be formulated in this log context. The classical example is a threefold relative to a smooth divisor, and that's been studied a lot.
One of the really interesting things there is that the boundary conditions on the Gromov-Witten side, if you're familiar with relative Gromov-Witten theory, transfer to incidence conditions on the Hilbert scheme of points of that divisor surface; that's what receives the boundary conditions in stable pairs, and it goes precisely to the Nakajima basis there. More recently, in the past year, there is a paper by Davesh Maulik and Dhruv Ranganathan which defines a log stable pairs and ideal sheaf theory for an arbitrary log target, or maybe with some conditions, but more or less an arbitrary log target, and there also you can lift the Gromov-Witten/DT correspondence. So if you go to these degenerations, there should be no essential problem with the Gromov-Witten/DT correspondence; that survives everything. The difficulty is the Virasoro constraints: we don't know how to formulate them. Okay, so that was another long answer to the question; I hope it conveyed some information. The last topic I want to talk about is the Hilbert scheme of points of a surface. One reason is that something very concrete and nice comes out; the other is that it brings the stable pairs subject down to earth, in the sense that people who have not thought about stable pairs might imagine some abstract space they would never bump into, but in fact, in one corner of the theory sits the very familiar Hilbert scheme of points of a surface. How that works is not surprising. If my threefold X is of the form S cross P1, where S is a simply connected nonsingular projective surface, then, if I'm interested in the surface, I can make this threefold by just crossing with a P1. Moreover, I look at the curve class which is just the vertical class, the P1, and I take n times that: the curves look like n vertical lines, or they could all be clustered together. Then the first geometric fact is that the stable pair space for this threefold and exactly this curve class is, as a scheme, just the Hilbert scheme of n points of S. I leave that to you to check; there's not really much there to check. And something else, which is nice: the virtual class of this stable pair space is just the usual fundamental class of the Hilbert scheme, which is well known to be smooth of dimension 2n. So not only do you get the Hilbert scheme, but the virtual class is not some wild class on it; it's the usual fundamental class. This means these tautological integrals over the Hilbert scheme are included as a certain corner of the stable pairs theory. So it makes sense to say that if we can constrain the descendents by these Virasoro constraints, they should also say something about the Hilbert scheme of n points, and this is exactly the path that Miguel takes in that paper, the paper on the Virasoro conjecture for the stable pairs descendent theory of simply connected threefolds. He proposes constraints for simply connected threefolds, which involve the Hodge grading, and then he specializes them to surfaces in the way I've suggested, and you get constraints for surfaces. But moreover, he then proves them for surfaces.
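Since this identification is the starting point for the surface story, here it is in one display. This is a sketch in notation I am supplying, with the Euler characteristic normalized to n as I recall the convention; the precise statement is in Miguel's paper.

```latex
% The Hilbert scheme corner of stable pairs theory (sketch).
X = S\times\mathbb{P}^1,\qquad \beta_n = n\,[\mathbb{P}^1]\in H_2(X,\mathbb{Z}),
\qquad
P_n(X,\beta_n)\;\cong\;\mathrm{Hilb}^n(S)\ \text{as schemes},
% and the virtual class is the ordinary fundamental class:
[P_n(X,\beta_n)]^{\mathrm{vir}} = [\mathrm{Hilb}^n(S)],
\qquad
\dim_{\mathbb{C}}\mathrm{Hilb}^n(S) = 2n .
```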
And the reason he can prove it for surfaces with this nontrivial Hodge grading is that we know how to prove it for toric surfaces by our toric results, and then you can use some universal properties of the Hilbert scheme to show that this actually proves it for all surfaces. So this is also evidence that the Virasoro constraints for threefolds, the ones he wrote with the Hodge decomposition, are right; it gives some confirmation of that. Anyway, I wanted to write something about this. So what is the descendent of the Hilbert scheme? You can now just forget about stable pairs. The Hilbert scheme is the space of length-n subschemes of S, and on the product with S the universal object is the universal subscheme. What is a descendent in this language? You take a cohomology class on S, pull it back to the product, and multiply by a Chern character of the structure sheaf of the universal subscheme; the universal subscheme is not smooth, but it has a finite resolution, so its Chern characters are well defined. Then, as always, we shift by the trivial one; it doesn't matter much, but it's a little nicer. Then you push down, and that gives you a tautological class on the Hilbert scheme, a Chern character class of the universal subscheme. It's a slightly nontrivial thing, because the universal subscheme is singular. So that's the descendent, acting in the cohomology of the Hilbert scheme. And what is the theorem? The theorem says that if you take a simply connected surface and put in some monomial (in the way I wrote it before you could put in anything in the descendent algebra, but say a monomial), then there is an operator that acts on it, and once I finish this action and integrate every term, I should get zero. Every term is a descendent integral. So the only question is: what is this operator? You get this operator by going up to S cross P1 with this curve class and taking the operator we know there; since the virtual classes are doing what you want, this gives it to you. So what is the formula for this operator? There are the T and R pieces, which are very similar to the ones we had for threefolds; I won't rewrite them now, you can look in Miguel's paper. And then there is one more, S_k, which takes a slightly different form, so I thought I'd write that last one. To define S_k, you take this derivation on the descendent algebra, here the descendent algebra with generators for the surface, and we have a slightly different thing: we have the descendent operator with a cohomology class, and we bump the index and also multiply the cohomology class. This S operator is that derivation, but with a cohomology class that gets bumped, and then the Chern character here. The sum runs over all terms of the Kunneth decomposition of the diagonal, but where the left-hand factor has 0 on the left side of its Hodge decomposition. If the surface is simply connected there are not many choices: it could be (0,0) or it could be (0,2). So this is roughly how these things look: very similar to the threefold case, except with some dimension reductions, or some twos turn into threes when you go through this.
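For reference, the descendent just described can be written as follows, in notation I am supplying (Z_n is the universal subscheme, p and q the two projections); conventions for the shift by the trivial class vary slightly between papers.

```latex
% Descendents on the Hilbert scheme of points (sketch).
Z_n \subset \mathrm{Hilb}^n(S)\times S \quad\text{(universal subscheme)},\qquad
p:\mathrm{Hilb}^n(S)\times S\to \mathrm{Hilb}^n(S),\qquad
q:\mathrm{Hilb}^n(S)\times S\to S,
% for \gamma \in H^*(S):
\mathrm{ch}_k(\gamma)\;:=\;p_{*}\!\big(\mathrm{ch}_k(\mathcal{O}_{Z_n})\cdot q^{*}\gamma\big)
\;\in\;H^{*}\big(\mathrm{Hilb}^n(S)\big),
% with the tilde variant obtained by the small shift by the trivial class
% mentioned in the text.
```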
And then Miguel also puts in the Hodge weights, so the Hodge weights go through all of these and come in in this way; in order to define these operators, you need to know something about the Hodge decomposition. But then it's extremely explicit, and moreover it's a theorem. In practice, what it does is: you give the surface, you take these descendent operators, and they give you some nontrivial relations after integration, and they're true. Okay, I think I'm going to stop here. It's too bad that I couldn't come to Paris to meet people, especially students I've not met before. So if you've been to all these lectures, I invite you, if you're in Zurich, to knock on my office door, and then, after a short exam on the course material, we can get a coffee or something like that. It's been raining all week here; here is the view of Zurich on a kind of stormy day at sunset. And that's it. That's the end. Thank you very much, Rahul. Any questions? Yes: you mentioned that you knew about the Virasoro operators for the PT side before the correspondence; could you explain how you could guess them beforehand? Oh, you mean how would we guess? Yeah, I know exactly how we guessed. This was a really long time ago, maybe 2005, something like that. Andrei had written a small box-counting program for DT, and you see the answer is very simple; there's only one complicated thing in it. (Oh, I'm sorry, I stopped sharing. It doesn't matter.) So we guessed, for P3: we just wrote down the terms we thought would occur, put them in with undetermined coefficients, and then started generating data. If there is going to be a solution, you can solve for it, because the system is massively overdetermined; if there isn't, you're just wasting your time. It turns out we found L1 very quickly, and once you find one, you can start taking brackets. I didn't really explain this, but I should say it now: on the Gromov-Witten side, L0 and L-1 are closed under brackets (the bracket is [L_k, L_l] = (k - l) L_{k+l}), so once you find those two you don't get any more, you make no profit. Even with L1 you only get a little sl2: when you start taking brackets you just get more L0's and L-1's. But if you find L2, then you win, because you can start taking brackets of L2: first you find L1, then you take brackets of L1 and L2, and you can generate all the relations. So to guess these relations you really only have to guess one or two of them. That's what happened, basically: just data and undetermined coefficients, with a lot of hope; with the method of undetermined coefficients you of course have to be correct about which terms you allow. But that's how we did it. At the computer: without the computer it would have been impossible. Any other questions? Yes: if you take the theory not on S cross P1 but on some nontrivial P1-fibration over S, does it tell you something about Hilb of S? Sorry, I'm having a difficult time understanding; you're somehow too far from the microphone. So I understood you want to look at some nontrivial fibrations over P1... no, over S. Yeah, in the end you consider S. Yeah, I don't think you learn too much about that.
I mean, if you're asking whether that gives you some new information about S: oh yeah, that's a good question. You should ask Miguel; I think he's thought about those things. In fact, I think I asked him that once, and my memory is that the answer is that you don't learn anything else about S. But if Miguel's here, maybe he can say something, or maybe he's not in Bures. Anyway, my memory is that Miguel has thought about that, and the answer is you don't learn anything. He answered in the chat? Yeah, okay, good. So we have some more questions. Have you thought about some more geometric construction for the L_k operators on the PT side? No, that's a good question. I mean, having some geometric construction that gives this without me having to define all the terms, right? Yeah, it would be good if someone could just tell me what this class is. Go for it. I don't know how to do it, but it's a good thing. I guess there are maybe some ideas about that, from the way these were formulated; I think in the Hilbert scheme case the algebra of derivations of the cup product was studied in a paper where this Lie algebra action was constructed, in the case of K3 surfaces at least, so maybe there is some hope to connect them. It's a good direction. I mean, this is the question of trying to prove the Virasoro constraints using the geometry of sheaves, which, as I said, is a direction I'd be very happy if someone went in. And somehow, sometimes I feel that knowing all the Gromov-Witten theory makes it harder to work on that problem; maybe it's better not to know anything about curves and to try to think about some structure on the sheaf side. There's another question: does there exist some blow-up formula for PT counts? The way one can think about blow-up formulas is in terms of degeneration: to find a blow-up formula, you need some kind of universal formula about projective space blown up, and I don't know a closed formula for that. But using the degeneration ideas you can make some progress, in the sense that if you're interested in a particular invariant you can often get it; but I don't know a general formula. Any other questions? For PT, or for the Hilbert scheme, do the Virasoro constraints uniquely characterize the descendent series? I don't think so. I would say the answer is no, at least for stable pairs. You could ask that in Gromov-Witten theory, and I think the answer there is also no in general; in the semisimple case it's yes. So if you ask in the semisimple case, maybe there's a bigger chance. If you take a toric threefold, then it is the case that the Virasoro constraints on the Gromov-Witten side do characterize the theory, but you have to use one piece of additional geometry, which is the topological recursion relations. This is explained in the paper of Gathmann; there are different ways to think about it, but the most elementary one uses these topological recursion relations. So on the stable pair side it won't be the case, I guess that means it won't be the case, that the actual Virasoro operators uniquely determine the theory; you'd have to think about some replacement, something that would play the role of those topological recursion relations. It's a good question.
But my guess is maybe one could think of some geometry that would play that role there; I don't know what, though. And for the Hilbert scheme of points I haven't thought about it at all; it's, again, something you should ask Miguel. Any other questions? All right, if not, let's thank Rahul again. Thank you. Thank you.
|
The main topics will be the intersection theory of tautological classes on moduli space of curves, the enumeration of stable maps via Gromov-Witten theory, and the enumeration of sheaves via Donaldson-Thomas theory. I will cover a mix of classical and modern results. My goal will be, by the end, to explain recent progress on the Virasoro constraints on the sheaf side.
|
10.5446/55047 (DOI)
|
So this is where I stopped in the second lecture, and I wanted to just make a comment about the deformation theory. The question was about how to write explicitly the deformation theory for a fixed domain, because I didn't really say explicitly how to write it. I'm not going to say much, but what I tried to say, and here I've now written it carefully with better handwriting, is that on the moduli space of maps of a fixed curve to X (and I screwed up the X here), the only structure you have is this universal map over the moduli space to X. From this universal structure, the deformation theory is this object, R pi_* f^* T_X, or rather its dual, and you have to map it to the cotangent complex; how to construct this map is the question. In some sense, the question of what the deformation theory specifically is, is the question of how to construct this map. This map tells you everything; if you want to know why it tells you everything, I recommend reading the paper by Behrend and Fantechi. The question is how to construct it, and you construct it completely tautologically from this geometry; the only things you have are tangent bundles and differentials. I was going to write out the whole thing, but of course it doesn't fit in the margin here. The fact of the matter is that I wrote this out for a class I taught in the fall, and I gave Andrei the link. So if you follow this link to my class notes from the fall, you will find two or three pages which just tell you how to go from this geometry to this map. It's not original; that material was already explained in Kai Behrend's paper, in slightly terser language, and I think it goes back to Illusie. So that's a longer answer to that question. Another comment: if you are new to the subject of deformation theory of maps, which is very similar to deformation theory of sheaves but has a slightly different flavor, a place where things are written out very explicitly and nicely is the first chapter of János Kollár's book on rational curves on varieties. There's a pretty nice, very explicit discussion; of course he is interested in Hilbert schemes, but a map can be interpreted as a subscheme, so you can read it in those terms. I recommend you look at that. All right, so that finishes the business of the previous questions. And I encourage people, if you actually look at these slides, to send me corrections; I can't say there's been a lot of that. Okay, so last time I finished basically with a discussion of the formula for this Virasoro conjecture and then some philosophical comments about whether it could be true, where it's true, or why people think it might be false. But I wanted to make a couple more points. The first one is that the matrix here in the definition is the action of the first Chern class of the tangent bundle, the dual of the canonical class. So you might think that the simplest case is when the first Chern class is zero, and it is a very nice thing if the first Chern class is zero, because it wipes out a lot of these terms. You can't say all of the terms, because you could have the matrix to the zeroth power, which is always the identity matrix, well, raised or lowered.
But still, if the first Chern class is zero, it wipes out a lot of this and makes the formula much better. Unfortunately, as I tried to explain last time, for Calabi-Yau threefolds you won't learn anything interesting from the Virasoro constraints. But you could think about other examples: who else has c1 equal to zero? Well, of course, a point does, and an exercise, just to make sure that what I've been writing makes any sense, is to try to reduce the equations here to the equations I wrote in the first lecture; this is basically an exercise in understanding the definitions. A case which is genuinely interesting and new with c1 zero is the elliptic curve E. That's an extremely interesting case, and it has another feature: it has odd cohomology. The Virasoro constraints take a rather simple, simple but nontrivial, form there, and you can use them to solve the entire Gromov-Witten theory of the elliptic curve; this is what is done in one of my papers with Andrei Okounkov. So if you're interested in the Virasoro constraints, the case of the elliptic curve target is a very good case to study, because a lot of terms go away. So we've gotten through dimensions zero, one and three. You could say: what about K-trivial varieties in dimension two, the K3 surface or abelian surfaces; why am I skipping them? Sadly, well, depending on what you're trying to do, either sadly or happily, the Gromov-Witten invariants there mostly vanish, because these surfaces are holomorphic symplectic. So these cases are essentially wiped out and there's nothing much to say. They do have an interesting Gromov-Witten theory, the reduced Gromov-Witten theory, and one can ask about Virasoro constraints in that reduced geometry, but that's a different direction and it's not really fully understood. Okay, and now I wanted to give a couple of examples of how one actually uses these constraints. In the case of a point, I think I did explain that they're extremely useful; in particular you can even code them, and they very practically solve the problem for lots of genera, really pretty high genus. But I wanted to give an example beyond a point, a higher-dimensional example, so we're going to consider the projective plane P2. It's a beautiful, classical example that connects to lots of things in mathematical life. The basic Gromov-Witten invariants there are counts of plane curves of given degree, in the language of these brackets, and I'm putting a zero here: I'm only going to consider tau-zero insertions for this example, so these are just point conditions. We ask for genus g, degree d plane curves to pass through a certain number of points. How many points? This many, namely 3d + g - 1 of them, because that matches the virtual dimension. You get a number, N_{g,d}, and this number is classical: it's a little theorem one has to prove, but the Gromov-Witten number here is actually enumerative; it counts the number of genus g, degree d curves through that many general points, and in fact they are all nodal curves. These numbers have some history in algebraic geometry: they are called the Severi degrees, because they were studied by Severi.
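As a quick check on that count of points, here is the dimension bookkeeping, using the standard virtual dimension formula for stable maps; this is my own computation, not the slide.

```latex
% Expected dimension count for plane curves.
\mathrm{vdim}\,\overline{M}_{g,n}(\mathbb{P}^2,d)
 = (\dim\mathbb{P}^2-3)(1-g) + \int_{d[\mathrm{line}]} c_1(T_{\mathbb{P}^2}) + n
 = (g-1) + 3d + n .
% Each point insertion \tau_0(\mathsf{pt}) imposes 2 conditions, so the
% number of points cutting the dimension to zero solves
2n = (g-1) + 3d + n
\quad\Longrightarrow\quad
n = 3d + g - 1 .
```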
One can ask to what extent the equations we know constrain or determine the Severi degrees. The very first place this started was, of course, genus zero. There is a fundamental equation here from quantum cohomology that is very well known; it's an application of the WDVV equations and goes back to Kontsevich, and you get this very nice recursion that calculates the genus zero invariants; here are the first values. There are many good expositions of how to find this equation from the properties of Gromov-Witten theory, so I won't spend much time on it; a short script implementing the recursion is sketched after this paragraph. The answer for how to find the equation is that you have to look at the splitting axiom in Gromov-Witten theory. What that means is: the moduli space of maps maps to the moduli space of curves, and the moduli space of curves has some divisors, as we've discussed before, for example a boundary divisor where I split the genus into two parts and connect with one node, and I can take a kind of fiber product over it. This would be D, and we would like to have some rule for how to compute the Gromov-Witten invariants after intersecting with this divisor, in terms of the Gromov-Witten invariants of the two pieces. That's what's called the splitting axiom, and it says that if you want to know the Gromov-Witten invariants of the full curve, you restrict the virtual class to this divisor (that's this pullback) and sum over the distributions of the marked points and of the degree on the two sides. That is the splitting axiom in Gromov-Witten theory, and it's kind of interesting: it says that if you know things about M_{g,n}-bar, that gives you some constraints on the moduli space of stable maps and on the Gromov-Witten invariants. The WDVV equations are obtained from the splitting axiom by taking the simplest possible relation, in M_{0,4}-bar: the relation saying that one boundary point is equal to another in H2. So I leave this basically as an exercise: you get the recursion from that relation and the behavior of the splitting axiom, and honestly, this requires some geometric ideas; you'll have to think about it. I'm going through this a little bit quickly because there are lots and lots of places where you can read very nice discussions of it. But the point wasn't to do genus zero again; my point was to apply the Virasoro constraints to P2, and the interesting application of that happens in genus one. Before I do that, I'll make this comment: you see here that the cohomology of the moduli space of curves constrains Gromov-Witten theory, in the sense that, using the splitting axiom, if I know relations in the cohomology of the moduli space of curves, I can get actual constraints on the numbers, the Gromov-Witten invariants. But in fact the opposite is also true, and in some sense there has been a lot of work on that in the last decade: the geometry of the moduli space of stable maps constrains the cohomology of M_{g,n}-bar. The examples of this relate to the Faber-Zagier and Pixton relations. That's not the direction I'm going in these lectures.
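Returning to the genus zero recursion mentioned above: for concreteness, here is a short script for Kontsevich's formula as I remember it. Treat it as a sketch and check the first values N_1 = 1, N_2 = 1, N_3 = 12, N_4 = 620 against the literature.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    """Number of rational (genus 0) degree-d plane curves through 3d-1
    general points, via Kontsevich's recursion from the WDVV equations."""
    if d == 1:
        return 1  # one line through two points
    total = 0
    for d1 in range(1, d):
        d2 = d - d1
        total += N(d1) * N(d2) * (
            d1 ** 2 * d2 ** 2 * comb(3 * d - 4, 3 * d1 - 2)
            - d1 ** 3 * d2 * comb(3 * d - 4, 3 * d1 - 1)
        )
    return total

if __name__ == "__main__":
    for d in range(1, 7):
        print(d, N(d))  # expected: 1, 1, 12, 620, 87304, 26312976
```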
But in fact the relationship is mutual: the moduli space of curves constrains maps, and maps constrain the moduli space of curves; you can learn things going both ways. Okay, I'm sorry, that was a bit fast for genus zero. Nevertheless, we go on. What I really wanted to explain from Virasoro is genus one, and this is a much more subtle thing. It's an extremely beautiful equation, again for CP2, the projective plane, and it's an equation in genus one of the same flavor: the flavor here, the associativity-of-the-quantum-product equation, is a recursion with a quadratic shape and some binomial coefficients, and this is the analogous equation in genus one. As I said, this equation is much more subtle and less well known. It was written in one of the papers of Eguchi-Hori-Xiong, and they themselves derived it as an application of the Virasoro constraints, which I'll explain. One of the reasons it's interesting is that you might think: if this is true, I should try to prove it the same way the genus zero equation is proven, by taking some relation in the boundary geometry, some relation in H2 of M_{1,n}-bar, and pulling it back. But this doesn't work. Part of the reason is that there are no interesting boundary relations in M_{1,n}-bar, so you can't even really start this idea. There was one in M_{0,4}-bar, the cross-ratio relation I explained, but in M_{1,n}-bar there isn't even one to start with. That leads to a little puzzle: the relation looks like the shadow of something like that, but there is, so to speak, no object to be shadowed. Nevertheless, it appears in this paper of Eguchi-Hori-Xiong as an application of the Virasoro constraints. When they wrote it, maybe it wasn't so clear that the Virasoro constraints were true for P2, but they did it, calculated some numbers, and you see that it gives the correct numbers; this is maybe the first interesting one. So I want to explain how you get this as a consequence of the Virasoro constraints. You can also get it from Getzler's relation in M_{1,4}-bar, which was found later, but I don't want to explain that; it's a much more complicated relation and the derivation there is much more complicated. The best derivation of this equation is from the Virasoro constraints for CP2, and since those are proven, this is actually a proven equation. So how to prove it? I'll give you the steps. First: which of the Virasoro constraints am I supposed to write down? Well, I'm telling you: write down L1. Then what do I do? I take the Virasoro constraint, say L1 of the partition function is 0, and I'm going to study this L1 of the partition function divided by the partition function. This is a standard thing to do, and I put some slides in the first lecture to explain why, but it's easy to explain: Z is exp(F), and we like F, the connected invariants; they're a little bit nicer, easier to think about. When you take some derivative of Z, this is the same thing, and it will not surprise you when I write this: it's the rule for differentiating the exponential, like this.
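The rule for differentiating the exponential is just the following elementary identity, written out for the generating function; here the partials stand for any of the derivatives appearing in L1.

```latex
Z=\exp(F)
\quad\Longrightarrow\quad
\frac{\partial Z}{Z}=\partial F,
\qquad
\frac{\partial_1\partial_2 Z}{Z}
=\partial_1\partial_2 F+(\partial_1 F)(\partial_2 F).
```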
So it's nice to consider terms like that, because this is equal to just the derivative of F; it's standard, when looking at these Virasoro constraints, to look at them like this. So: take this L1, look at this expression, and extract a particular coefficient. Why lambda to the zero? Because I want genus one, and the exponent there is 2g minus 2. Why this monomial? Well, these are 3d minus 1 point conditions, one fewer point condition than needed, but L1 is an operator and it will put in one more point condition. So if you take this exact expression and extract this particular coefficient, you'll find an equation, and this equation will have a leading term, in some aesthetic sense: one of the terms will be exactly what you're searching for, with some coefficient. And when you do this, you'll be happy to find a 9. That's good news, because the target formula has a 9 in it; there's the 9, and this 9 is produced from these combinatorial coefficients here, actually by this one, exactly at the dilaton shift. That will make you happy: finding the 9 is like half of the proof already. Then there will be a lot of other terms, and you have to write down exactly every single other term that occurs, because after you finish writing them, you get to say that the sum is zero, and all of those other terms will be inductive terms; they fill out these terms. When you do that, you'll find you need to be able to remove things like a tau_1 with a hyperplane class or a tau_2 with an identity class, and these insertions are hard to remove in general. We're in genus one, and there is a further idea for how to remove them; the old ideas were the string and divisor equations, but the new idea is called the topological recursion relation, and I've written it for you. It comes from a certain boundary equation on M_{1,1}-bar: the cotangent line is related to the boundary, with a 1/12 I guess. So in the end you do use a little boundary geometry of M_{1,1}-bar; I've written that equation for you. This is a really good exercise to do if you really want to know what's going on, and in some sense I've told you every step; if you just apply them, you can't go wrong, but it takes some time to get all the constants correct. It's very satisfying to find the answer, and it makes you believe the Virasoro constraints more if you do it. And if you're too lazy to do all that, at the end of these notes I've appended the full derivation written by Longting Wu; he just does it all. It's not that long, but in order to get it right you have to be pretty careful and get all the coefficients right. All right. So that's, in some sense, the first application of Virasoro past the point, and it produces something that's really geometrically nontrivial. In fact, I only know two proofs of this: one from the Virasoro constraints, and one from Getzler's relation in algebraic codimension two, and the derivation using Getzler's relation is pretty complicated; this one is much better. For example, there are many other approaches to counting curves in the plane, and I don't know how to get this equation from any of them, so maybe if someone knows, you could tell me.
Like, for example, the tropical curve counting or the Caporaso-Harris recursions and things like that. If you know, tell me. But it's a little bit of an interesting equation. Okay, so that is more or less the end of what I want to say about this, oh, no: you could ask about genus two. Sorry. We discussed genus zero and genus one, and it's natural to ask for higher genus: are there any higher genus recursions? The answer is yes, but they're much more complicated; they're not of the simple form that I just explained for genus zero and genus one, or at least I don't know any of that form. They have more complicated forms and involve additional recursions for descendents. It is the case that you can solve everything in the Gromov-Witten theory of P2 in many different ways, in particular using Virasoro and some generalizations of the TRR; Gathmann has a nice article about that. There are other ways to compute the Severi degrees, and those other ways tend to involve degenerations of P2. You can do that within classical algebraic geometry or using the degeneration formulas, and you get some degenerations which are effective and also solve the problem. But these degenerations introduce many more new problems you have to solve recursively, so you never find the very clean statements that we found in genus zero and genus one. All right, so that's the end of the discussion there, and now I think I've finished the lecture from yesterday. I encourage you to either do this yourself or read Longting's derivation here carefully. Okay, so I have to get a new slide; let's see if I can do it here, maybe it's this one. Oh, wow, it worked. All right, so we've finished two lectures now, maybe a little bit late, but we did finish them. They were about moduli of curves and moduli of maps, with a lot about descendents, maybe I'd say a focus on descendents, and of course about Virasoro; I tried not to be too distracted by going in other directions. The reason for that is that now we're going to switch to sheaves, and again the focus eventually is going to be on descendents and also Virasoro. That's the part of these lectures which is relatively recent: it is about transferring the Virasoro constraints to sheaf theory, the descendent theory of sheaves, and you get some very nice equations there. I wanted to explain what happens there.
Recently, there's been work by Richard Thomas and now very interesting work on Clabi R fourfolds. And so we could say, why is this? Why is it for Gramov-Witton theory, we can take any target. For sheave counting, we have to take low dimensional targets. Well, one way to say that is that, well, Gramov-Witton theory is really getting around the problem. Because in Gramov-Witton theory, we're only counting maps of curves. And when you count sheaves, sheaves could be also any dimension that could be supported on any dimensional subvarieties. This is part of the answer. It's not the full answer. Because even the sheave counting problem, which is related to one dimensional subvarieties, does not work in all dimensions. So that's only part of the answer. But when we get to the kind of the real nuts and bolts of the answer, it's about this deformation obstruction theory. So what's the reason for the difference? So as I explained before, when I look at a map from a curve to a target x, the deformation theory is controlled by H0. That's the deformation. The deformations are given by tangent fields. And the obstruction space is given by H1, have the pullback of the tangent sheave. And this is always two term. Well, you could say, what about the infinitesimal automorphisms? And those are killed by map stability. Map stability kills those. So we have exactly this two term obstruction theory. And that's a uniform for all targets. And so what goes wrong or what happens for sheaves? So for sheaves, we have some x. And let's just say it's x is non-singular projective variety. And we want to count some sheaves. So that means we should have some modulate space of sheaves and have some kind of counting theory. And so let's look at what the deformation theory for that is. Well, first of all, there's infinitesimal automorphisms. And sheaves kind of always have some, and that's given by harm of the sheave to itself. And sheaves always have some harm. Unless you, I mean, just if you have just a naked sheave, it'll always have some harm because you can always have the scalars, the complex numbers acting by scalars. If you have sheaves with sections and other things, you maybe can kill the harms. But if I just have this bear sheaf, it always has an harm. And we try to kill these harms. And this is mostly killed by sheaves stability. If you have a stable sheave, then the only harm can be the complex numbers of scalars. That almost kills it. But still, it leaves a little bit more we have to fix. And this leads to the idea of traceless x and things like some additional complexities. The deformations of the sheaves are x1, the obstructions, and x2. So this is, I think, the subject, and Richard spent some time on this. At least that's what I've heard. And then there's some higher obstructions. So you have to at least consider these x's. And the traditional way to confront this problem is you kill the infinitesimal obstructions. You kill this basically by stability in the traceless condition. And you kill these guys, the higher x's, by dimension constraints on x. So you somehow run out of room there. And then in good situations, you have only these two. And you get a deformation obstruction theory, this two term. And you obtain a virtual class and have a good counting theory. But you see that this is now much more delicate than what happened here. So I wanted to do some examples. And these are fun examples. So this is what we're aiming for is dimension three. And we'll get there. 
But there's a lot of nice things along the way I wanted to point out. So the first example is dimension one. That's when x is dimension one. Has dimension one. So what is dimension one? You have a question here. Yeah. So these higher obstructions, I think we discussed last week that we said something like all the obstructions that live in the x2 space. But this is for extending from some order k deformation to k plus one deformation. So maybe this means something else. Yeah, I mean, it means that you have to consider if you want this, well, if you want this x theory to give a perfect obstruction theory in the language of Berend and Fantekki, then you need something like this. You need these higher obstructions to vanish. But it is true that somehow they're not all necessary. But it's not the case that they all vanish. I mean, if it's not the case, I can just forget this and just say that I have a obstruction theory for x1 and x2 for any case. So that's just false. And one symptom of that being false is that one thing about a two term obstruction theory is that you always have virtual dimensions or actual dimensions are always higher than virtual dimensions. And when you have these x higher, you can have some problems with that. But I think it is true that you don't really need to consider all of them. But to the extent exactly which ones that you have to consider, it may be a little bit delicate. My view is that I'd like them all to be zero. OK, so dimension one is in some sense the easiest case. And then x is a non-singular projective curve of genus G. And we're looking at sheaves on a curve of genus G. And this is a really old subject. Well, 50 years old, maybe more by now, 60 years old. And what you get there, the first, when you apply the ideas of stability, you get the modular space of stable bundles. And the simplest case is when the rank and degree are co-prime. And this is already non-singular with expected dimension since this higher x all vanished because we're on curves. So from this point of view, everything's perfect for the modular space of stable bundles. The virtual counting is the actual counting. From the theoretical point of view, there's no problems. And of course, there's many variations of this, like Higgs bundles and things like this. Another problem in dimension one, not as well studied as the modular space of stable bundles is the quote scheme. The quote scheme is you have some non-singular, you have this curve and I take some vector space Cn. You can make more general ones. This is the basic one. So you have some rank and trivial sheaf and you consider the quote scheme of quotients of that. And then you have to fix some numerical invariance. And for a curve, you just have to fix the rank and the degree of F. And this obstruction theory was considered by Marion and Oppra. And the idea there is the deformations. So now we're not doing deformations of sheaves exactly. It's a quote scheme. The quote scheme is a deformation of this quotient sequence. But that has a deformation theory that's given by X of the sub to the quotient. And the deformation space is this hom space and the obstructions are this X1. And for our curves, there's no higher X. And so we get a two-term obstruction theory there and it's not exactly this one because we've, it's not the same problem. It's a quotient problem. But anyway, it's a, it gives us a virtual class and this is an interesting virtual class. 
Unlike for the moduli space of stable bundles, where the virtual class just gave the ordinary fundamental class, which is interesting in its own right but not new, here the Quot scheme is in general singular, of mixed dimension. But because of this deformation theory analysis, it carries a virtual fundamental class, which is a very interesting thing. Even though you might think the geometry is simple, it's just sheaves on curves, a very well understood geometry by now, this virtual class is a new class there. And the first exercise, if you want to have any idea of what I'm saying, or maybe you already do but want to learn it: compute the virtual dimension. It's not a hard exercise; you just have to apply Riemann-Roch to the right thing. And you find the virtual dimension is given by this formula. On an open set, this Quot scheme is just a moduli space of bundles with sections. And Alina Marian and Dragos Oprea used this idea to transfer integrals from the moduli space of stable bundles, so I should say, on an open set it is a moduli space of stable bundles with sections, that's why we have an open set, they used this idea to transfer integrals on the moduli space of stable bundles to this Quot scheme. That's a really interesting idea, because the Quot scheme, if you just look at it from far away, is just like bundles with some sections. It's a tricky thing to transfer the integrals, because whatever intersection problem you're doing, if you want to transfer it, you have to make sure, well, maybe there's a dimension shift, you have to shift the dimension of the problem, and also you have to make sure that all the action is happening in the locus of stable bundles with sections. So there's some analysis of how you have to throw out the pathologies, but they get it to work exactly there. And then you can use this to transfer integrals against this virtual class, and then they can actually compute the integrals. This gives a proof of the Verlinde formula. There are many proofs of the Verlinde formula, but this one uses the virtual class of the Quot scheme. So why is this useful? Because the point is, once you get to this Quot scheme, there's a C star action, and you can look at torus localization and the localization of the virtual class. I haven't discussed that here, but there are techniques. Localization is a basic technique of Atiyah-Bott that tells you how to calculate intersections or integrals on varieties with torus actions in terms of the fixed point data. And there's a version of that theory with respect to the virtual class that I developed with Tom Graber. And why that's useful is because, while these spaces are kind of complicated, the fixed points with respect to the C star action (the C star action scales the different coordinates of this complex vector space) are very simple: they're just symmetric products of the curve. And so this idea is used to transfer complicated integrals to maybe complicated integrals, and then localization transfers them to integrals on products of the curve, and there they can be solved. So that's something in dimension one. And I wanted to explain one more thing in dimension one because it's kind of fun.
So why you get the symmetric product of the curve: if I take the Quot scheme with just one copy of the trivial sheaf, quotients of the trivial sheaf, that's a symmetric product. But you can do something a little bit stranger, which is quotients of N copies of the trivial sheaf. That means you look at here: this middle fellow is still C^N, and I want to look at some quotients, but I ask for the rank to be zero. So it's some kind of generalization of the symmetric product in some strange way: I look at the rank zero quotients of the rank N trivial sheaf on curves. So this is some kind of punctual Quot scheme. I don't know exactly the name for this thing, but it's in some ways a generalization of the symmetric product. And the exercise here, the deformation theory exercise, is that this object, this punctual Quot scheme (if someone has a better name, you can tell me what that is), is non-singular of this dimension, and the virtual class is the usual fundamental class. So the virtual counting here agrees with what's actually, physically happening. And then there's a lot of stuff that happens on this space. For example, there are tautological bundles, and if you're familiar with the theory of the Hilbert scheme, none of this will surprise you. If you take a vector bundle of rank e on X, I can move it to a certain tautological bundle on this punctual Quot scheme, and the rank of the tautological bundle gets multiplied by d: it's the bundle whose fiber over a particular point of the Quot scheme is H0 of F tensor E. So this is a standard move in the study of Hilbert schemes of points. This has not been so well studied, I would say, and I'll give you an interesting property that came up in a paper with Dragos. It's some kind of exchange property. It says that if I want to calculate the integral over this punctual Quot scheme of the Segre class of the tautological bundle associated to a line bundle, that's the same thing, up to some sign, as doing it in rank one, over the symmetric product, but taking n-th powers of the same Segre class. The Segre class of a bundle is the inverse of the total Chern class. So there's some kind of interesting symmetry here. We proved it by doing some calculations, but I always wondered whether there should be a proof of this without doing any calculations. So that's the challenge: take what I've said and find a conceptual proof with no calculations. Because you're kind of swapping this n: this is some kind of higher rank punctual Quot scheme, and this property swaps it down to rank one. But of course there's a price to pay: you put the n here. All right, so that was dimension one. Dimension two. There's a lot of interesting stuff in dimension two, and I think a lot of Richard's lectures were about that. I will go into dimension two with a slightly different focus. So let X be a non-singular projective surface. I think the simplest theory in dimension two is again the Quot scheme, and precisely this Quot scheme. So X is a surface now, and I again look at quotients of the trivial sheaf, and this has the quotient F and the kernel G. And unlike the case for curves, where I didn't have to make any assumption (I could just take any Quot scheme for curves, the obstruction theory is two-term and I get a virtual class),
now for surfaces, because the dimension has increased, I have to pay for that in some way in the sheaf theory. As I said, in the sheaf theory you always pay for increasing dimension. And how I pay for it is that I ask for this quotient to be rank zero. That means the generic rank is zero, and what that means in practice for a surface is that it's supported on curves. So I don't take every possible quotient; I only consider quotients that are supported on curves. Such a quotient has a first Chern class and an Euler characteristic. It doesn't have a rank, because the rank is zero. And in papers with Marian and Oprea, we investigate the obstruction theory for these Quot schemes. This is motivated by several things, but in particular by their work in dimension one. The deformations are as before, given by the Hom. Then there are the obstructions, so what we have to worry about is killing the higher obstructions. And this Ext2 here is dual to a certain Hom, and it's precisely to kill this Hom that we have this rank zero condition: F is then a torsion sheaf, while the right-hand side is a subsheaf of a torsion-free sheaf. So this Hom is killed since F is a torsion sheaf. So in this generality, this Ext is killed, and every time you see such a Quot scheme, it has an obstruction theory and a virtual class and a full enumerative theory. You could say, can we ever remove this torsion hypothesis? And the answer is yes: if X is Fano, I think we can remove it, maybe with some other small assumptions. But this direction of removing the torsion hypothesis and paying for it with some additional geometry on X is an interesting direction that's mostly unexplored; I think we did a few calculations there. And you can calculate the virtual dimension of this Quot scheme, and there's a formula; it's essentially a Riemann-Roch calculation, which I leave to you. And the interesting thing is, what are the integrals here? This is going to be some kind of foreshadowing of what we do for three-folds and descendants. There are different ways to study this theory, you can study it in cohomology or in K-theory, but the way to get tautological objects on this Quot scheme is by a construction we in some sense already saw. You start with a K-theory class on X (that's what we did for a curve, the K-theory class of a line bundle or a vector bundle), you pull it back to X cross the Quot scheme, and X cross the Quot scheme has a universal quotient, drawn in this curly script. Then I can take R pi lower star and get a K-theory class on the Quot scheme. So it's a way of taking a K-theory class on X and getting a K-theory class on the Quot scheme. This is a rather familiar operation, and we're going to interpret it eventually as descendants. And if you want to write an integral, the most general kind of descendant integral you can write is: I take this Quot scheme, and I have to sum over something (I'm freezing the curve class and I sum over the Euler characteristic, that's this degree sum), and then I put in the Chern characters of these tautological K-theory classes I've constructed. As I said, we will later interpret these kinds of things as descendants that are parallel to our descendants in Gromov-Witten theory. And that parallel will be made rather precise. But here it's in some sense more just words.
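In symbols, the vanishing and the tautological construction just described are the following; the names pi, q for the two projections and the superscript bullet for the resulting class are my shorthand.
\[
0 \to G \to \mathbb{C}^{N}\otimes\mathcal{O}_{X} \to F \to 0, \qquad
\mathrm{Def} = \mathrm{Hom}(G,F), \qquad \mathrm{Obs} = \mathrm{Ext}^{1}(G,F),
\]
\[
\mathrm{Ext}^{2}(G,F) \cong \mathrm{Hom}(F, G\otimes K_X)^{\vee} = 0,
\]
since F is a torsion sheaf and G tensor K_X is torsion-free. For the tautological classes,
\[
\alpha \in K(X) \ \longmapsto \ \alpha^{[\bullet]} := R\pi_{*}\big(q^{*}\alpha \otimes [\mathcal{F}]\big) \in K(\mathrm{Quot}),
\]
where the curly F is the universal quotient on X times the Quot scheme, q is the projection to X and pi the projection to the Quot scheme.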
I view these as descendant insertions. And since I have a growing virtual dimension, I need to put some other class in. The standard thing to put here is the total Chern class of the virtual tangent bundle. The motivation for this comes from different ideas, but one of them is: if I just get rid of this, if I don't have any classes here, this gives me the virtual Euler characteristic. So that's the full theory for Quot schemes of surfaces, in the generality that you can do for any surface. And you could say, can you solve this theory? There are two basic ideas that come in when one tries to solve it. The first one is rationality. It says that this generating series in q, the one we've just defined here (maybe it looked crazy, and you thought these integrands were inserted haphazardly, but there's something that says this is definitely not the case), satisfies the first conjecture: it is always the Laurent expansion of a rational function in q. So it's born as some kind of infinite series; it can have some negative parts, only finitely many negative parts, but in fact we conjecture that this is always the Laurent expansion of a rational function in q. And in fact this conjecture is proven in many cases. I didn't list all the cases, but you can look in these various papers. There are four papers here, with Woonam Lim and Johnson, and the last paper with Noah Arbesfeld, which is in K-theory; they prove these kinds of rationality statements in many, many cases, but not exactly all cases yet. You have to go look if you want to find out where we don't know it's a rational function, but in most cases it is. This thing is actually true for the most part. So that already shows there's something magical going on with these kinds of insertions. Then you could say, can you actually calculate it? And that's one nice thing about these Quot scheme theories, which I indicated before, and in fact was what Alina and Dragos used: they're very computable, because there's this torus action that reduces to rank one. In some sense that's the nice thing about these Quot scheme theories: they're higher rank, because they allow these quotients to be higher rank objects, in this case higher rank on the curve, but on the other hand they have this torus action which reduces them to some kind of rank one object. And so you can actually do this calculation. I gave one example, but we've done many calculations in different configurations. The most general calculation that shows the theory is solvable is this exact solution for the case when X is a simply connected minimal surface of general type with a non-singular canonical curve. So you can forget all of this; if you have a surface of general type, that's a lot of surfaces. And to write this formula nicely (you don't really need all these hypotheses, I just wanted to write it nicely), it's a minimal surface with a non-singular canonical curve, and the non-singular canonical curve of course has a genus, so I'm going to write the formula in terms of the genus of that.
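To recall the shape of the object the formula computes, the general descendant series is schematically the following, with the bracketed alphas the tautological classes from before and the sum running over the Euler characteristic at fixed curve class; the solved case below is the one with no Chern character insertions, the virtual Euler characteristic series.
\[
Z_{X,\beta}(q) = \sum_{\chi} q^{\chi} \int_{[\mathrm{Quot}_X(\mathbb{C}^{N},\beta,\chi)]^{\mathrm{vir}}}
\mathrm{ch}_{k_1}\big(\alpha_1^{[\bullet]}\big)\cdots \mathrm{ch}_{k_\ell}\big(\alpha_\ell^{[\bullet]}\big)\, c\big(T^{\mathrm{vir}}\big),
\]
and the rationality conjecture says that each such series is the Laurent expansion of a rational function in q.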
And it says that this series (now I'm not putting any of the descendants in, this is just the series for the virtual Euler characteristic), the basic series, is given by some sign, which is an interesting number related to the Seiberg-Witten invariant, some q shift, and then some functions. And this should be a rational function; how are we going to get a rational function? It turns out you have to look at the roots in w of a certain polynomial, this polynomial equation. So this is a polynomial equation for w in terms of q, and it has n distinct roots, and to get the rational function you have to sum over all choices of so many of the distinct roots. And what function do you sum? Well, it's an interesting function, it's this function, a symmetric function. So this shows that the answer is a rational function, because it's a symmetric function of these roots, and it gives a complete solution to this problem for these virtual Euler characteristics in all of these simply connected minimal surfaces of general type. This is showing the series is really computable. And in the papers there are lots of other computations. But what are we after in these computations? Well, the first thing that comes out of this is that you see there's this factor minus one to the holomorphic Euler characteristic of O, and it turns out this very simple number is the Gromov-Witten invariant. This is an old relation that goes back to the Gromov-Witten/Seiberg-Witten theorems of Taubes: for such a surface, a surface of general type, there's one basic Gromov-Witten invariant, which is counting the genus g curves in the canonical class, and that's given by a sign. And actually that's the whole action in this formula; the rest is something universal. So what this says is that the enumerative geometry of this Quot scheme is some kind of complicated manifestation of the curve counting in Gromov-Witten theory. That already shows there's some connection between curve and sheaf counting, and what we're going to do for three-folds is a much richer exploration of that. Now, I wanted to make some comments about some of the topics that Richard talked about. That's the sheaf counting on surfaces. A more sophisticated theory, significantly more sophisticated, was proposed by Vafa and Witten and defined mathematically by Tanaka-Thomas, and it takes a lot more delicate work to define the virtual class there. One of the things that you get out of it, which is nicer, is that it's more closely related to the moduli of bundles on the surface. And the idea of the Vafa-Witten construction is that you should move to dimension three: you approach this sheaf counting on the surface by looking at sheaves not on the surface, but on the total space of the canonical bundle. So it's harder to define, and it's harder to calculate. And one of the things that comes out of it is that the rational functions found in the Quot scheme theory are, in some sense, replaced by modular forms in the Vafa-Witten theory. I refer you to various results and conjectures of Göttsche and Kool.
But one of the hopes I've always had is that there should be some way to transfer, maybe not the virtual Euler characteristic, but to transfer integrals from the moduli of bundles to this Quot scheme theory, as happened in dimension one. And the advantage of that would be that these are very computable. All right. So I leave you with some ideas there you can explore. This is to show that sheaf counting is already quite interesting in dimensions one and two. But we are primarily going to be interested in dimension three, and dimension three is the most interesting dimension for counting; it's like the perfect dimension for counting. There are many directions of study, and I'm not going to cover them because there are too many: you can think about this in terms of mirror symmetry, DT wall crossing, stability conditions, refined invariants; there's a long list, and it's an incredibly interesting subject that's developing in many different ways. But I'm going to start at the simplest place. So I want to count sheaves in dimension three. And the simplest moduli space of sheaves in dimension three, which is more or less familiar in some ways to most algebraic geometers, is the Hilbert scheme of curves. The Hilbert scheme of curves: I take X to be a non-singular projective threefold, and I use this notation here; the I is for ideal sheaves, because I'm going to view this Hilbert scheme as a moduli space of ideal sheaves. There are two discrete invariants, n and beta: n is the holomorphic Euler characteristic of the quotient curve, and beta is the fundamental class in homology of the curve. An element of the Hilbert scheme is a quotient; well, it has a quotient and an ideal sheaf. And it's an ideal sheaf because there's only one copy of O; it's not the Quot scheme, it's the Hilbert scheme. So we can consider the Hilbert scheme as a moduli space of ideal sheaves. This is kind of an important conceptual point. I would say that normally in algebraic geometry, when one looks at the Hilbert scheme, one looks at it as some parameter space of quotients. But you can also look at it as a parameter space of ideal sheaves. And not always, but in this case, it turns out that's a very profitable thing to do. So view Hilb as a moduli space of ideal sheaves. And you could say, well, this statement has some kind of theorem in it, which is to say that if I take this ideal sheaf and I look at its intrinsic moduli, I try to deform it, then this theorem is saying that in this case of curves on threefolds, the moduli space of ideal sheaves is actually the Hilbert scheme. Which is to say that when I deform this ideal sheaf, when I finish deforming it, it stays an ideal sheaf in a canonical way and gives me a quotient. So there's something to prove there. And of course you might object: you could say that I could change the first Chern class of the ideal sheaf, and then of course it won't. And that's true; that's why we have to say traceless deformations. In moduli of sheaves, there's always the issue of whether you're controlling the deformation of the determinant or not. Okay, so this is a technical point, but it is the case, it's true here. It's a little bit remarkable that the moduli space of ideal sheaves, the moduli space of this sheaf, coincides with the Hilbert scheme.
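In symbols, the moduli space in question is the following (this is the standard way of writing it, with the two discrete invariants as described):
\[
I_n(X,\beta) = \big\{\, \text{ideal sheaves } I_Z \subset \mathcal{O}_X \text{ of subschemes } Z \subset X \text{ of dimension} \le 1 \; : \; \chi(\mathcal{O}_Z) = n, \ [Z] = \beta \,\big\},
\qquad 0 \to I_Z \to \mathcal{O}_X \to \mathcal{O}_Z \to 0,
\]
viewed either as a Hilbert scheme of quotients of the structure sheaf or as a moduli space of the ideal sheaves themselves.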
And just to make sure we are on the same page: we really consider the entire Hilbert scheme. So if you take X and you look at the Hilbert scheme of curves, a lot of fuzzy stuff can happen. You can have some kind of smooth curve that looks pretty good, but then you could have some embedded points, or you could just have some non-reduced points running around by themselves, or you could have some monster where you have a very singular curve with a nilpotent structure generically and then additional nilpotent structure at the singularity. We're considering it all. And this was a topic developed in Richard Thomas' PhD thesis, which was to show, for the first part, that this object has a two-term obstruction theory. And there is some subtlety in that. So first of all, as I said (and if you go look in Richard's thesis, you won't find all these statements exactly as I'm saying them), the first thing is to consider this as the moduli space of ideal sheaves. He did that at the beginning; I don't think he wrote about it as the Hilbert scheme, but as the moduli space of ideal sheaves. And then, as I said, we have a problem with these scalars. There's Ext0; there are Ext1 and Ext2, which we're happy with; and there's Ext3, which we should kill. And how do you kill Ext3? Well, it turns out both Ext0 and Ext3 are killed with the same idea, which is that you have to look not at the deformation theory of ideal sheaves, but at the theory of traceless deformations. And this is explained very well in Richard's PhD thesis, so I encourage you to look at it. So there's the traceless deformation theory, deformations that preserve the determinant, and you want to express that in Ext's: just as the normal deformation theory of sheaves, which allows the determinant to wander however it wants, is given by Ext, if I want to fix the determinant it's given by traceless Ext. And this traceless Ext kills the C, the scalars, so it's going to kill this guy, and by Serre duality it turns out it also kills that one. So by going to the traceless deformation theory, there is a two-term perfect obstruction theory. The traceless Ext is defined using Ext and the kernel of the trace map. And as I said, I recommend looking at what Richard wrote; that's developed very nicely there. So the summary is that this Hilbert scheme of curves in X, which is a kind of classical object, something familiar to algebraic geometers, if I do the first move, which is to look at it as a moduli space of ideal sheaves, and then the second move, which is what Richard does in his thesis, to look at the traceless deformation theory of the moduli space of ideal sheaves, then this Hilbert scheme carries an obstruction theory and a virtual fundamental class. So it's a little different way of looking at it than what's traditional in algebraic geometry, I would say. It's the same object, but you look at it slightly differently. And that's a beautiful thing. And the first exercise with this deformation theory and virtual class is: calculate the virtual dimension. If you do, you'll find something kind of remarkable. This thing has two discrete invariants, the Euler characteristic n and the curve class. And if you calculate the virtual dimension, you'll find that it depends only on beta. It doesn't depend on n. It's independent of n.
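In symbols, and suppressing the details, the package I am describing from Richard's thesis is the following; the subscript 0 denotes the kernel of the trace map from Ext^i(I,I) to H^i(X, O_X), and the dimension formula is the standard one, which I believe is what is on the slide.
\[
\mathrm{Def} = \mathrm{Ext}^{1}_{X}(I,I)_{0}, \qquad
\mathrm{Obs} = \mathrm{Ext}^{2}_{X}(I,I)_{0}, \qquad
\mathrm{Ext}^{0}_{X}(I,I)_{0} = \mathrm{Ext}^{3}_{X}(I,I)_{0} = 0,
\]
\[
\mathrm{vd}\, I_n(X,\beta) = \int_{\beta} c_1(T_X),
\]
which is indeed independent of n; and, for comparison with what comes next, the same integral is the virtual dimension of the moduli space of stable maps to a threefold, independently of the genus.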
So this is a little bit strange. You might think that, as this n is the Euler characteristic, you're somehow twisting your sheaves by some point somewhere, but the virtual dimension doesn't care. It doesn't give you any more virtual dimension for doing that. Even though the moduli spaces might be growing a huge amount, virtually they don't. And if I want to integrate against this, I'm allowed to now, there's a virtual class: integration against this moduli space, this Hilbert scheme of ideal sheaves, is called Donaldson-Thomas theory. And what about Gromov-Witten theory? It turns out Gromov-Witten theory also has an independence property for the virtual dimension. If I look at the moduli space of stable maps to X, where X is a three-fold, it's also independent of one of the parameters; now it's independent of the genus, which is also a strange thing. Because normally you think that when you're mapping curves to varieties the genus is going to matter, since it controls how many line bundles there are and there's Brill-Noether theory, but virtually it does not matter. And moreover, these dimension formulas are the same: the dimension formula for the Hilbert scheme and the dimension formula for the moduli space of stable maps. They're the same in the sense that the right-hand side is the same. The left-hand side is different, and it's not even clear what I mean by saying they're the same, because there's a g here and there's an n there. But I would say they're strikingly similar. So the simplest place to do this now is for Calabi-Yau three-folds, but maybe I should stop; we can do that tomorrow. It's also clear from this (here we made no assumption of Calabi-Yau-ness of X) that the Calabi-Yau condition is going to help us, because it's going to make the virtual dimensions not only independent of n, it's going to make them all zero, independent of beta too. And this is some kind of sign about Calabi-Yau three-folds. So first of all, if you're doing enumerative geometry, dimension 3 is the best dimension, and if you're counting in dimension 3, Calabi-Yau is the perfect place. And roughly speaking, the reason for that, in one line, and I'll try to say it in an exaggerated way, is that all counting problems have dimension zero on Calabi-Yau three-folds. This statement is, to a first approximation, true, and that's why it's the perfect place for counting. So we'll start on that next time. I'll stop now. Any questions? We have one here. So the question is how the constraints depend on stability, on stability of the curve? So maybe let me just try to interpret it. I was talking about, for curves, the Deligne-Mumford stability. You could try to change stability conditions for curves; you have choices there. Actually, I'm hearing some kind of echo. Should I just ignore it or what? Anyway, so you can change stability conditions. Like, if you have M-bar g,n with marked points, you can put some weights on those points, and there's Hassett stability. And there has been work on how the cotangent lines, I mean the integrals of cotangent lines, change with these different Hassett stabilities. And I think even some work about the constraints; my memory is some paper of Y.P. Lee and some others, but I'd have to look.
But I think the answer somehow is that when you change the stability conditions in the way I was saying for M-bar g,n, the cotangent line integrals change in a more or less controlled way, and then you can propagate that through the constraints somehow. But I think Y.P. wrote something about it; I'll have to go look. I'll bring it to the lecture tomorrow. Okay. And sorry, if you do that for arbitrary targets, I would expect something similar. Maybe just to say one more thing about that: when you change these stability conditions in this way (maybe you have a more complicated way, but the way I'm talking about is changing genus zero stuff), you have some control over what's going on. We have one question in the chat and I think we're going to have to finish afterwards. The question is: what properties of the invariants do you expect for the Quot schemes of Fanos, for non-torsion sheaves? So I would guess the rationality would hold. In terms of exact formulas, well, the general claim I would make is that the rationality of that generating function is still true. I don't know if that's true or not; we have a little evidence, I don't think we have much evidence. Doing exact calculations in the rational and Fano cases turns out to be harder than in the general type cases, and one finds that also in the Vafa-Witten world. Somehow the philosophical reason for that is, well, my view is that the curve counting on rational surfaces is very rich and complicated, while the curve counting on surfaces of general type is very simple. So I don't expect there to be as easy formulas to find, or as universal formulas to find, in the Fano cases. But I would expect the rationality at least. All right. Thank you very much. Okay. Thank you.
|
The main topics will be the intersection theory of tautological classes on the moduli space of curves, the enumeration of stable maps via Gromov-Witten theory, and the enumeration of sheaves via Donaldson-Thomas theory. I will cover a mix of classical and modern results. My goal will be, by the end, to explain recent progress on the Virasoro constraints on the sheaf side.
|
10.5446/55051 (DOI)
|
Thank you very much. Thanks to the organizers. I'm very happy to be here and to give a talk. It's actually my first talk in person after one and a half years, as for most of the speakers, I guess. So my goal in this lecture is to give a very down to earth, very approachable, tourist guide type of introduction to these three subjects that are in the title. I will hide quite a few delicacies, but I want you to have some picture of these, some understanding of these, after the talk. Let me acknowledge my coauthors on different papers that are related to this story: Rozansky, Varchenko, Smirnov, Zhou, Weber and Shou. This picture is the plan of the talk. So first focus on that black arrow on the top left part, which says 3D mirror symmetry for characteristic classes. The explanation of that double arrow will be the first half of the talk. I'll try to explain what it means, what characteristic classes for certain singularities in a space are, and what it means that two different spaces (in this example I chose the Grassmannian of 2-planes in C^5 and another space, which is a Nakajima variety defined by that red quiver) are related in terms of characteristic classes of singularities. So that is the first part of the talk, the explanation of that black double arrow. The second half of the talk will be an explanation of the rest of the picture. I want to talk about Cherkis' bow varieties. They will turn out to be a very convenient pool of spaces, more general than Nakajima quiver varieties, and looking at spaces like bow varieties instead of quiver varieties or homogeneous spaces sort of explains why, for example, those two spaces are mirror symmetric, and it will also answer other questions. So this is the plan. Any questions? Then let me recall this notion that was already defined in the excellent talks of Joel Kamnitzer: the quiver variety. If you see a picture like that on the left, we call it a type A quiver. As a side remark, everything in this talk will be type A, just for simplicity. So this is a type A quiver, which means just a combinatorial object, which you see on the left. These W's and V's are non-negative integers. Associated to that picture, as we learned last week from Joel, there's a variety which I call script N of Q, the Nakajima quiver variety. I'm not going to repeat the definition, but here are some examples. The examples on the left are familiar spaces: they are total spaces of cotangent bundles over partial flag varieties. For example, the Grassmannian, or rather its cotangent bundle, you get from a quiver with just one node, and here is an example of a partial flag variety, or rather the cotangent bundle of a partial flag variety. Quiver varieties are of course more general than cotangent bundles of partial flag varieties. Here's an example: this quiver variety has dimension two, and it is the usual resolution of the Kleinian singularity C^2 over Z mod 3. More interesting than the actual definition, let's see some properties of quiver varieties. In this case, type A, they are smooth, they carry a holomorphic symplectic form, and there is a torus action on them. So let me spend a little time on the torus action, at least naming what the torus is.
At each of the framed vertices (the framed ones are the little squares), you imagine a torus of that dimension, of the dimension of W, and the product of all these tori acts on the quiver variety. Plus there is one more factor, one more C star, which we denote by C star h-bar: it comes from the fact that in the definition you consider the cotangent bundle of something before you cut it down by a group action, and in the cotangent direction you can multiply by an extra C star. In this setting the number of torus fixed points is finite, and we have some tautological bundles, one for each vertex on top of the quiver: of rank V1, V2, up to V capital N, there's a bundle over the space. This we are rather familiar with, since last week at least. So eventually I want to talk about the cohomology ring, the torus-equivariant cohomology ring of the Nakajima quiver variety; look at the top left part, that's what I want to explain, and how I will talk about an element of the equivariant cohomology ring. The way we will name elements in the equivariant cohomology ring is that we will name their images under the localization map. So consider this map on the top left, called loc. That is just restricting a cohomology class to the torus fixed points; it is just the most innocent map in cohomology, the restriction map. It turns out that in very general situations, for example here, the localization map is injective. So if I name the image under the localization map of an element of the cohomology ring, then I have named the element. This is an injective map. And why is the target a simple thing? Because we have finitely many fixed points, so the restriction to the torus fixed points is the sum of the restrictions to the cohomology of each fixed point, and the equivariant cohomology ring of a point is just a polynomial ring, so in this case in u1 through uN and h-bar. The picture on the right is of course an example. It is T star of the Grassmannian of 2-planes in C^4. It has six fixed points; these are the vertices of this graph. So to name an equivariant cohomology class in the cohomology of this space, I need to name a polynomial at each vertex. A cohomology class is named by a tuple: at each vertex I name a polynomial in the u's and h. But I cannot name just any tuple of polynomials; that's what I'm addressing in the bottom left. The image of the localization map is not the whole thing; there are constraints among the components. And this is what is explained, or indicated, by the edges of this picture. There are invariant curves in the space, and those are the edges of the picture, and they come with some decoration. In this case the decoration says 1, 3 on the leftmost edge. It says that the coordinate at the fixed point 2, 3 and the coordinate at the fixed point 1, 2 cannot be independent: they have to satisfy a constraint, and the constraint is that if you plug in u1 equals u3, then they have to be equal. So these kinds of constraints must hold for an image of the localization map. Here's an example. Everything that is in blue is a 6-tuple of polynomials, and because they satisfy these consistency conditions, they do represent an element in the equivariant cohomology of the Grassmannian. Let's check one constraint: the bottom right constraint, the edge decorated by 2 and 4. That means that the two polynomials associated to the two vertices have to be equal if you plug in u2 equals u4.
And indeed you will see that there's a u4 minus u2 factor in the polynomial, so they are indeed equal there. And you can check all the others. This particular 6-tuple is an element in the equivariant cohomology of the Grassmannian of 2, 4. This particular 6-tuple is actually something that will be called a stable envelope; it's an example of a stable envelope, later. This slide is just a warning. I said that the components are not independent, and I said that for every edge there is a constraint. Unfortunately, the edges are not always discrete. In this example, which is a Nakajima quiver variety, the edges come in moduli: there is a one-parameter family, a pencil of curves, which you see in the middle, actually in two places. And in those cases, the constraints on the components are deeper than just coincidence under some linear form: the coincidence has to happen not only for the polynomials, but for some higher derivatives as well. So I'm not giving the concrete statement, I'm just saying that the constraints are more involved than just the coincidences. So now that we have a way of thinking about the equivariant cohomology ring of the Grassmannian or of a Nakajima quiver variety, I want to name some special elements in it, which will be called stable envelopes, or cohomological stable envelope classes. There will be one for each torus fixed point; the notation is stab sub p. So that's what I want to define. The definition is not on this slide yet; it will be on the next slide. The definitions on this slide are just preparation for defining an element in the equivariant cohomology ring. The preparation is the following. I'm going to fix a one parameter torus subgroup, say, for example, u maps to u to the first, u to the second, u to the third, and so on; for h-bar I plug in one. If that is fixed, then we can talk about the so-called Bialynicki-Birula cells. For every fixed point p (I'm talking about the second pink line), I can define the leaf of it: the collection of those points that, under this one-parameter, one-dimensional torus, flow into the point p. That's what the definition says: the limit of sigma of z times x is equal to p. It's usually called a BB cell, but we call it the leaf of that point. Actually, I will come back to the rest of this slide, but I'm going to jump ahead by one slide and show you an example. Look only at the left. This is of course the moment graph, the skeleton of one of the Nakajima varieties. One of the fixed points is called 1,4. I don't know how well it's visible, but there is something shaded blue, or it looks like green to me here: all the points that flow into the fixed point 1,4. That's the leaf of 1,4. I'm going to go back. So if we have this notion of leaves, we also have a partial order, by just looking at which fixed points are in the closure of the leaf of another fixed point. So that way you get a partial order. And if you have a partial order, then we are ready to define the bottom line on this slide, the slope of a fixed point. You take the leaf of your fixed point, but then you also look at the fixed points which are in the closure of that leaf, and take the leaves of those, and then the fixed points in the closures of those leaves, and take their leaves, and so on. So it's not just the closure of the leaf. These leaves all have the same dimension for Nakajima quiver varieties.
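Since several definitions have now accumulated, let me record them in symbols; sigma below is the chosen one-parameter subgroup, and the edge condition is only the version for honest, discrete edges (the pencil case needs the extra derivative conditions mentioned above).
\[
\mathrm{loc} : H^{*}_{T}(X) \to \bigoplus_{p \in X^{T}} H^{*}_{T}(p) \cong \bigoplus_{p} \mathbb{C}[u_1,\dots,u_N,\hbar], \qquad \alpha \mapsto (\alpha|_p)_p ,
\]
\[
\text{edge labeled } (i,j) \text{ joining } p, q : \qquad f_p\big|_{u_i=u_j} = f_q\big|_{u_i=u_j},
\]
\[
\sigma(z) = (z, z^2, z^3, \dots), \ \hbar \mapsto 1, \qquad \mathrm{Leaf}_{\sigma}(p) = \{\, x \; : \; \lim_{z\to 0} \sigma(z)\cdot x = p \,\},
\]
and the slope of p is built by iteration: take the leaf of p, add the leaves of the fixed points lying in its closure, then the leaves of the fixed points in those closures, and so on.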
So if you are familiar with working with homogeneous spaces, think of the leaves as the conormal bundles of Schubert cells. And the conormal bundle of anything always has the same dimension. So here you take the leaf, which is the conormal bundle of your Schubert cell, but then on the boundary there are some other Schubert cells; take the conormal bundles of those as well, which have the same dimension, and then iteratively you do the same thing. So that's the slope. It is illustrated on the right hand side of this picture: the slope of 1,4 is the leaf of 1,4, of course, but on the boundary there are 1,5 and 2,4, and then you take the leaves of those, and so on. So whatever is painted blueish here is the slope of 1,4. This is the geometric part of the definition that is needed to define the stable envelope. This is the Maulik-Okounkov axiomatic definition of stable envelopes in cohomology. The stable envelope associated to a fixed point p is the unique class that satisfies three conditions. The first one is the support: it must be supported on the slope of that fixed point. The second axiom is a normalization: the stable envelope of p restricted to p itself has to be the expected, obvious thing, namely the Euler class of the normal bundle of the slope there. The slope is not a smooth manifold, but at p it is smooth, so it has a normal bundle in the ambient space, and you take the equivariant Euler class. And there is a boundary axiom: if you restrict the stab of p to anything else but p (I'm reading the bottom line, so anything else but p), then it will be divisible by h. The stable envelope will have degree half the dimension of the space, so all these restrictions have the same degree. Being divisible by h means that for some person who doesn't see h, who just plugs in h equals 1, the restriction has smaller degree than expected. So we like to call it a degree axiom: the stable envelope restricted to anywhere else but p collapses a little bit. It's a smallness condition; you always think of it as: the restrictions are small. Actually, now that I mentioned the h equals 1 substitution: in special cases, for example for G over P, these stable envelopes were known before, and they have a name, the Schwartz-MacPherson classes. So in special cases it recovers an old notion at h equals 1. Okay, so in this picture I'm going to further elaborate on the axioms for stab of 1, 4. Again, stab 1,4 is a cohomology class, so it has a component at every vertex. We know that these components cannot be arbitrary; they have to satisfy some consistencies. But let me see, besides these consistencies, what other constraints we get from the axioms. For example the one in yellow: it says that the stable envelope restricted to 1, 4 has to be some explicit thing, and you see (I hope the picture is rather intuitive) at 1, 4 there are two directions which point out of the slope, and the product of the Euler classes of those has to be the restriction there. So the yellow one says the restriction is equal to that. We also have the support axiom, and part of the support axiom means that the restrictions to 1, 2, 1, 3 and 2, 3 must be 0, because this class is supported on the slope.
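Let me record the three axioms in symbols before finishing this example; I am suppressing the sign and polarization conventions that the careful definition keeps track of.
\[
(1)\ \operatorname{supp} \mathrm{Stab}(p) \subseteq \mathrm{Slope}(p), \qquad
(2)\ \mathrm{Stab}(p)\big|_{p} = e\big(N_{p}\big), \qquad
(3)\ \mathrm{Stab}(p)\big|_{q} \in \hbar \cdot H^{*}_{T}(\mathrm{pt}) \ \text{ for } q \neq p,
\]
where N_p denotes the normal space at p to the slope (equivalently, to the leaf of p) and e is the equivariant Euler class.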
But more than that: for example, the support axiom tells me something about the restriction at 1, 5, because you see at 1, 5 there is a direction which is normal to the slope, the one pointing to the northwest, toward 1, 4. Since this direction is normal to the slope there, and the stable envelope is supported on the slope, the stable envelope restricted here has to be divisible by the weight of that direction, in this case u1 minus u3 plus h. So the support axiom is more than just a bunch of zeros outside; even on the boundary it requires some divisibilities. So I hope most of these should be clear. And of course there is the divisibility by h everywhere else, while at 1, 4 the restriction is pinned down by the normalization. So if you just look at this as it's presented on the slide, it looks very combinatorial: the claim is that there is only one tuple of polynomials which satisfies the consistencies (which I haven't really told you, the derivative constraints) together with these axioms. But it's true, and that is the stable envelope. Okay, let me check the time. Okay, I'm going to skip this part, I think. Okay, so let's just regroup. I might come back to geometric representation theory connections, but probably not. So far what we have is a way of thinking about the equivariant cohomology rings of Nakajima quiver varieties, and we defined the cohomological stable envelopes, one associated to every fixed point. The way I would like to think of it is that this is the class of the slope: if there is a big ambient space and you have a subvariety, then it's tempting to think about that as a cycle, and that represents a cohomology class in the ambient space as soon as you have some kind of Poincaré duality type of relation in the ambient space. So there's a cycle. That is one way of thinking about the class, the class of the slope, but it's not really that class, it's really some kind of h-bar deformation of it. If you take the coefficient of the highest h-power of this class, it has no h in it anymore; that is like a Schubert class type of object. So this is an h-bar deformation of 19th century, fundamental class type calculations. So that's what we have, and that's where we are so far. Okay, now we are fast forwarding. I'm just telling you that the stable envelopes have generalizations in K theory and in elliptic cohomology. There is no way I'm going to define those, not even the theories, and even less the actual stable envelopes, but I want to give you some feeling about them. So first of all, how do I think about a K theory element on a Nakajima quiver variety, or an elliptic cohomology element on a Nakajima quiver variety? The first line says that a stable envelope, or anything, as a cohomology element, restricted to a point is a polynomial in the equivariant parameters, right? Everybody agrees, that's what I said. A K theory element is pretty much like that, except the restriction is a Laurent polynomial, not a polynomial. And in the elliptic cohomology case it's not a Laurent polynomial, it's an elliptic function, a section over a product of elliptic curves in the same variables.
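So, schematically, the restriction of a class to a fixed point lives in the following places in the three theories; the elliptic line is only indicative, since the precise statement involves a line bundle on the relevant abelian variety.
\[
\alpha\big|_{p} \in
\begin{cases}
\mathbb{C}[u_1,\dots,u_N,\hbar] & \text{cohomology},\\[2pt]
\mathbb{C}[u_1^{\pm 1},\dots,u_N^{\pm 1},\hbar^{\pm 1}] & \text{K theory},\\[2pt]
\text{sections of a line bundle over a product of elliptic curves} & \text{elliptic cohomology},
\end{cases}
\]
with the elliptic restrictions written as theta functions in the same variables.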
But there's one more thing on this slide: in the elliptic line I added more parameters, the v's. And in the next few slides I want to give an intuitive feeling for why, in elliptic cohomology, when you want to talk about characteristic classes, you are forced to have some new parameters. These parameters will be called dynamical or Kähler parameters. I won't be able to give the precise mathematical statement, but I hope I will be able to give some intuition for why you are forced to have some new parameters. Okay, so first look at the theta function. This is a section of a line bundle over the elliptic curve. The way I want to look at it: it starts with x to the one half minus x to the negative one half, which in logarithmic variables is really, up to a constant, sine of x, and that would be just the K theory part of it. So the theta function is really just a q-decoration, a q-deformation, of the sine function. You can think of it as a q-deformation of K theory. And I will use this delta function: delta of a and b is a two parameter function you cook up from the theta function. Okay, these are just definitions, because the next slide will be the one which I hope gives an intuition, a feeling, for why in characteristic class theory in elliptic cohomology you need extra parameters. There is something which is not on the slide, so I just want to say it: if you want to define some kind of characteristic classes like these stable envelopes, there are many approaches, but one approach is that you resolve your singular subvariety, you define some obvious thing on the resolution, and then you push it forward. If you do this, you have to show that what you invented on the resolution was invented in the right way, so that your class does not depend on the resolution. So this notion, that you defined a good class, depends on some identities; for example, that two nearby resolutions give the same class downstairs. And these identities really always boil down to one identity. In the elliptic world, it boils down to the top identity, called Fay's trisecant identity. It says that if x1 x2 x3 is equal to y1 y2 y3 and they are both equal to one, then a certain sum of products of these delta functions is zero. It's a good exercise for your graduate students. So that was defined on the earlier slide, and it turns out that, for elliptic functions, this is the only identity: everything else follows from it, although that is highly non-trivial. Anyway, this is the identity which is behind the fact that in the elliptic world you can actually define characteristic classes. Now let's do the following: take that top identity and plug in q equals zero. So let's go to K theory. Then what you get is the middle identity, which, you see, is a trigonometric identity. Now you can give it to your calculus students: if x1 plus x2 plus x3 is equal to zero and y1 plus y2 plus y3 is equal to zero, then this identity holds. Ignore the purple part for a minute. So this is the q equals zero specialization of the top line. And you can further approximate sine of x by x, which, you know, we always do, and then you get cohomology, and the identity that you get is the bottom line. Okay, now look at the purple decorations. The point is that the identities in K theory and in the rational limit are easier, because they tell you more.
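To have the functions in front of us, here is my reconstruction of the conventions on the slide, so the exact normalizations may differ from the ones actually used:
\[
\vartheta(x) = \big(x^{1/2} - x^{-1/2}\big) \prod_{s=1}^{\infty} (1 - q^{s}x)(1 - q^{s}x^{-1}),
\qquad
\delta(a,b) = \frac{\vartheta(ab)}{\vartheta(a)\,\vartheta(b)} .
\]
At q = 0 the theta function degenerates to x^{1/2} - x^{-1/2}, which is the trigonometric, that is K-theoretic, case, and linearizing in the exponent gives the rational, cohomological, case.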
What the purple parts say is that if the x's add up to zero, then the left hand side is equal to one, and if the y's add up to zero, then the right hand side is equal to one. So you see that the identity splits into x variables and y variables. Because of that, in K theory and in cohomology, in what we did in the past 200 years, mathematicians were not forced to work with the other set of variables, because these identities, the governing identities behind characteristic classes, separate into x and y variables. So you can just take the left side of this slide and build up characteristic classes in cohomology and K theory. However, in elliptic cohomology the top identity does not split into an x part and a y part: you are forced to work with the y's. So this is something that elliptic cohomology taught us, that we should do cohomology and K theory as well with two sets of variables, with the Kähler parameters, but we need to go out of our way to introduce them. Okay, so this was the explanation of the bottom right corner of the picture, that stable envelopes are defined, but they depend on new variables, which I call v's. Any questions? All right, then we come to this fact, which I call 3D mirror symmetry for characteristic classes. It turns out that there are pairs of Nakajima quiver varieties, let's call them X and X', such that the pairing comes with a fixed bijection between the torus fixed points for which the stable envelopes on one are equal to the stable envelopes on the other, in the sense which is on the slide: you take the stable envelope of p restricted to q (p and q are fixed points on one), and you take the stable envelope of q restricted to p on the other one; these are polynomials, or Laurent polynomials, or elliptic functions, and the claim is that they will be equal if you switch equivariant and Kähler parameters and also invert h bar. So this is the 3D mirror duality for elliptic characteristic classes. Here's an example. Everything above the purple line is about one Nakajima quiver variety: you see the cotangent bundle of P2. Sorry, yes, we have a question. How does the elliptic stab transform with respect to the modular transformation? Yeah, there is something, and I never looked at it, so I won't be able to say. Of course, you should restrict it to a point so that you really have an elliptic function, but yeah. Oh, what should I, okay, yeah, maybe I shouldn't. Okay, so let me continue. So above the purple line, it's all about the quiver variety, the cotangent bundle of P2. You see that it has 3 fixed points (the skeleton of it is on the left side of the slide, it has 3 fixed points), and those are the constraints among the restrictions. And the table on the top is the elliptic stable envelopes, in the following sense: take the rows of that table. The first row is the stable envelope of F1, and that means the stable envelope restricted to F1 is that product of theta functions, restricted to F2 is 0, restricted to F3 is 0. The middle row is the stable envelope of F2, and so on. Now, the 3D mirror dual of that quiver variety is this one below the purple line, and it has a totally different looking moment graph.
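Before looking at the tables, here is the statement being checked, written in symbols; the primes and the bijection between p and p' are just my notation for the dual variety and the fixed-point correspondence.
\[
\mathrm{Stab}^{X}(p)\big|_{q}\,(u, v, \hbar)
=
\mathrm{Stab}^{X'}(q')\big|_{p'}\,\big(u \leftrightarrow v,\ \hbar \mapsto \hbar^{-1}\big),
\]
that is, the restriction matrix of elliptic stable envelopes on X equals the transpose of the one on X', after exchanging equivariant and Kähler parameters and inverting h-bar.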
But I think that's what I wanted to convince you of: as soon as you see the moment graph, you can write down the stable envelopes. At least in cohomology it's easy; in K theory it's much less easy; and in elliptic cohomology it's a lot of work, but for small examples you can do it. So as soon as you see that graph you can calculate the stable envelopes, and you will have this bottom table. Again, the rows are the stable envelopes. And the fact is that if you stare at these two tables, they are the same after transposing and switching the u and v variables and inverting h bar. Okay. So this is the baby example of 3D mirror symmetry for stable envelopes. Here are some other random examples. With green on the two sides I'm indicating the dimensions of these varieties. You see, different dimensional varieties have this amazing coincidence that characteristic classes of singularities in those varieties are equal in this sophisticated sense. Okay, the bottom line: the left hand side is of course a Nakajima variety, and I claim that there is no Nakajima variety which is 3D mirror dual to it. But we will fix that later, because now I'm starting part two of the lecture. So maybe it's a good point to ask questions if you have them. Yes. Do you expect the stable envelope formalism in cobordism? Okay, I guess yes. But, you know, having a group versus a formal group law has some advantages, an algebraic group. The three cohomology theories that I named, cohomology, K theory, elliptic cohomology, are the theories that correspond to one-dimensional algebraic groups, which give the formal group laws. So there are advantages of working with those. So indeed, maybe there is a formula for the most general cohomology theory, but I'm a formula person, so I certainly want to look at these three. We have a question: in what sense are all theta function identities derivable from the trisecant identity? Okay, that is in a sophisticated sense, and I won't be able to say; I can find the reference paper which I looked at, and I don't remember the details, but I just remember the intuitive statement. Yeah. So just at the very beginning we fixed this homomorphism from C star to T. Is that important? Yeah. So in the slides which I skipped, it is important. If you start playing with which one parameter subgroup you choose, then you recover representation theory. So after a while I might comment on that. The stable envelopes of course depend on it; not infinitely many choices, they depend on some chambers of choices, so there will be finitely many. And changing them you recover R-matrices and so on and so forth. So that's where representation theory starts to be built up. Okay. So from now on I want to define, or at least give a feeling about, what bow varieties are. And actually I want to advertise them: I think we should look at bow varieties instead of quiver varieties. They have some advantages. Okay. So these will be associated to some combinatorial pictures, combinatorial data; the combinatorial data will be called a brane diagram. Here's a brane diagram. So a brane diagram, combinatorially, is just a collection of horizontal segments called D3 branes.
They come with some non-negative integers, the dimension vector, and the consecutive ones are separated by either NS5 branes or D5 branes; I drew a blue or a red skew line. And so that you don't have to memorize this, it's on the board out here, because this picture will go away after a while. For future purposes I will also decorate the D5 branes with equivariant variables u_i and the NS5 branes with Kähler parameters v_i. So this is a combinatorial object for us; we can of course discuss some superstring theory afterwards. Okay. So what is the Cherkis bow variety? Start doing the same thing as you would do for quiver varieties, but only do it for the NS5 branes. Look at the left side of the picture: if you see an NS5 brane, a red brane, then you do the same thing as for a quiver variety: take Hom(C^n, C^m), where n and m are the numbers, the decorations, on the two sides, take its cotangent bundle, and that of course has an action of GL(n) x GL(m). You do almost the same thing for the other type of five-brane, but it will be a different space, not just the cotangent bundle of Hom(C^n, C^m). The left-hand side I call the arrow brane and this is the bow brane; actually this whole thing should be called a quiver into which you can put both an arrow and a bow. All right. So for the other type of brane you put the space I indicated roughly on the slide; of course a lot is skipped there, what acts how, but it is also just a Hamiltonian reduction or GIT quotient of rather obvious spaces. The key difference is that on this other space an extra group acts, a C*; that's in the bottom right of the slide. Because what you do after that, when you want to build up the bow variety, is take the product of all these T*Homs and B(n,m)'s and then do the reduction by all the GL's, one GL(n) x GL(m) factor at a time. You do that, but these C*'s survive. So on the space that you get there will be a C* action for every D5 brane. This is how it's realized that every D5 brane is decorated by an equivariant variable. So if you do this, then the Cherkis bow variety, script C of D will be its name, will be smooth, it will have a holomorphic symplectic structure, and it comes with tautological bundles coming from the D3 branes; those numbers are the ranks of the bundles. Later I will show you that it has finitely many fixed points, and it has a torus action: the torus action comes from the D3 branes, plus there is a C* action for each D5 brane, from the fact that there were lots of C*'s on the earlier slide. So everything that we like about Nakajima varieties is essentially true for this one. Everything is very combinatorial, and there is an advantage which will come very soon, I hope. Yes, you had a question: so the framing that is usually there, it's replaced by this? The D5 branes, correct. Some of the D5 branes collapse together to be the framing. Okay. Yep. So there is a dimension formula; of course you don't have to memorize it.
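Before the dimension formula, a minimal sketch in formulas of the construction just described; the symbol B(n,m) for the D5-brane building block and the name C(D) follow the talk, while the precise moment-map and stability conditions suppressed on the slide are also suppressed here:

\[
\mathcal{C}(\mathcal{D}) \;=\; \Big( \prod_{\text{NS5 branes}} T^{*}\mathrm{Hom}(\mathbb{C}^{n},\mathbb{C}^{m}) \;\times\; \prod_{\text{D5 branes}} B(n,m) \Big) \,/\!\!/\!\!/\, \prod_{\text{D3 branes}} GL(d_{i}),
\]

a Hamiltonian reduction (or GIT quotient) by one GL(d_i) for each D3 brane of multiplicity d_i. Each D5-brane factor B(n,m) carries an extra C*-action that survives the reduction, which is why every D5 brane ends up decorated by an equivariant variable u_i.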
You just imagine that if you see the brane diagram with those numbers, the dimension vector, then from that you calculate the sum, and that is the dimension of the Cherkis bow variety. For example, if you see the brane diagram on the bottom left of the slide, then you plug in the numbers and you will get four. It's not a surprise: this will be T*P2, the Cherkis bow variety associated with this brane diagram. So you might say that things are getting more complicated, because the quiver name of T*P2 was just one dot and one square, and now it's somewhat longer. But there will be things that we gain at the end. Okay. So, this dimension formula. Oh yeah, and now I think I'm going to answer your question: how are quiver varieties related as special cases? What I need to give you is a combinatorial recipe: if you see a quiver, how do you build up a brane diagram? The quiver has parts, these k-n parts; look at the top left. Whenever you see such a k-n part, just draw a segment ending with NS5 branes, the red ones, and put n D5 branes, blue ones, in between, where n is the dimension of the framing, and decorate the D3 branes with k's. Then glue these segments together. For example, look at the bottom example: whatever is shaded yellow goes to the part of the brane diagram on the right which is shaded yellow as well. The quiver has another part, and you just glue that part to the right of it. So you glue these segments together and you have a brane diagram. Yes? And this works only for type A quivers, type A Nakajima quiver varieties, or for any? Okay, I only know type A. But what about loops? Yeah, okay, I think that's the next slide. Okay, so I won't be able to do all kinds of loops, but one loop is fine: affine type A, A tilde. But otherwise, this is right. But make an observation here: the brane diagrams that we get on the right are special. They have this cobalanced condition, which is in the very bottom line of the slide: on the two sides of a D5 brane you will always have the same numbers. Okay. But here's the advantage that we do not have for quiver varieties: there are actually two extra operations. This operation is 3D mirror symmetry, the most innocent symmetry operation on these brane diagrams: just reflect it by the horizontal axis, or in other words change red to blue and blue to red, and so on. So let's call it 3D mirror symmetry for bow varieties. Let's see an example: let's find the 3D mirror dual of T*P2. The top line is the brane name of T*P2, and we just formally create its 3D mirror dual. Okay, I can calculate this dimension too. But then we get a little depressed, because this is not cobalanced, so I cannot recover it as a Nakajima quiver variety. However, I will be able to in a minute, so just wait, because I'm going to show another operation which exists on bow varieties, called the Hanany-Witten transition. Think of it as a Reidemeister move: you can locally rebuild your brane diagram without changing the space. The rebuilding is such that if you have a consecutive D5 and NS5 brane, you can switch them.
The price you pay is that the dimension vector in the middle changes, and it changes in the way shown on the right. And the theorem is that if you carry out such a change on your combinatorial model, then the associated bow variety doesn't change; it's the same. Actually the torus parametrization, the torus action, gets reparameterized a little bit, but for the purpose of this talk it's just an isomorphism. Okay. Then let's continue the example that we saw a few slides up. The first two lines are just what we did: we found the 3D mirror dual of T*P2. But now I'm going to play the game of carrying out Hanany-Witten transitions. For example, first I carry one out at the yellow part, you see it, and I hope you can follow that transition. Then I carry one out at the green part, I hope it's visible, and I get the brane diagram I'm pointing at, in the middle of the right column of the slide. And this one is cobalanced. Okay. So I was lucky enough to be able to carry out Hanany-Witten transitions to make my brane diagram cobalanced, and if it's cobalanced, then I can recover it as a Nakajima variety. So we recover this example, the baby example of 3D mirror symmetry, between two Nakajima quiver varieties. Any questions? Oh, okay. So I want to give you an affine type A example. Look on the left of the picture: there are quiver varieties. On the right, we have the same varieties, but in their bow names. And the translation between them all follows from earlier slides: if you see a quiver, there is a way of drawing a brane diagram. The 3D mirror dual is just switching D5 and NS5 branes, and in this case it's such a simple thing that it's accidentally already cobalanced, so I can rewrite it as a quiver variety. This is of course a very well known example, right? The Hilbert scheme and its dual. But you can play the game with more complicated type A or affine type A quiver varieties. Okay, any questions? Yes: is this correspondence unique, or does it have automorphisms? Certainly this is the natural one. So your question is whether there are quiver varieties with different combinatorial codes which are isomorphic to each other. That I doubt. Yeah, I doubt it. Oh, yeah, I'm sorry, of course I can present the one-point space in many different ways, so maybe I have to walk that back; I'm not sure. Okay. So now I think I have explained everything on this slide. This is just a repetition of the first slide: to find the mirror dual of Gr(2,5), you just have to write its bow picture, then formally take its 3D mirror, and then carry out Hanany-Witten moves if you can. And if you're lucky, and in this case you are, you will get a quiver variety. Right. What I want to talk about next is some other very important structure that bow varieties come with. One of them is brane charge. It will be an integer associated to every five-brane. For an NS5 brane, it is just the difference of the two numbers on the two sides of the NS5 brane, l minus k, plus the number of D5 branes to the left of it. So here you see that this is just type A, not affine type A; in affine type A there is a notion of local charge, but anyway, let it be just finite type A.
And to a D5 brane a very similar integer is associated: k minus l plus the number of the other type of five-brane to the right of it. Here's an example. Is everything visible? Not much. Take the leftmost NS5 brane. Its brane charge is 2 minus 0, because those are the numbers on the two sides, plus the number of D5 branes to the left of it, which is nothing. So that is the top red 2 on this diagram. For some reason I will collect these charges on top of and to the left of an empty table: the charges of NS5 branes to the left of this empty matrix, the charges of D5 branes on top of it. For the time being this is just a decoration. It's actually an easy theorem that the two charge vectors, the red and the blue charge vectors, are a complete invariant of the Hanany-Witten class of the brane diagram. The Hanany-Witten class is the equivalence class: if you switch two consecutive five-branes of different types you have a different diagram, but the charges will not change, and vice versa. So if I didn't want to define bow varieties themselves, but bow varieties up to Hanany-Witten transition, then I could have just told you that they are associated to a pair of vectors. This pair of vectors has one extra property: the sum of the red numbers is the same as the sum of the blue numbers. Okay. So, up to Hanany-Witten transitions, bow varieties are: next to this empty matrix you put arbitrary numbers on top and on the left, so that the sums agree. And among these, the ones for which at least one of the representatives is a quiver variety are the ones where the top vector is a partition, that is, those numbers are weakly decreasing. And if you put all ones there, then these are just the objects of Schubert calculus. So why do we care? Because if you're coming from representation theory, then, okay, I didn't say enough to support this, but it's a fact that in geometric representation theory you allow yourself to permute those numbers on top. But not here. So the representation theory is the same, but the underlying space is different. So if you want to look at the space, then bow varieties are more general than quiver varieties. And why do we like that both vectors are arbitrary? Because it comes with one extra operation, transpose, which is essentially three-dimensional mirror duality, and this was not complete for quiver varieties, as you can see. Oh, okay, since I keep mentioning geometric representation theory, I might say the following. If you see these two vectors, then the height of this matrix is a number, say n, and you take the Yangian of gl(n); that's a quantum group. Then you read the numbers on top: if the numbers are a, b, and so on, take Lambda^a of the vector representation tensor Lambda^b of the vector representation and so on. So you take fundamental representations and multiply them together; that's a representation of the quantum group. And now you read the numbers on the left and take that weight space of that representation. It turns out that this weight space is very naturally identified with the cohomology of the associated bow variety. Okay.
So again, this is why these spaces are important in geometric representation theory: on the equivariant cohomologies of these varieties, quantum groups act. So one bow variety is one weight space of such a quantum group representation. And this is true in K theory and in elliptic cohomology as well. Is this the analog of the Maulik-Okounkov Yangian? Yeah, yeah, this is exactly it: if these numbers on top were just a partition, this is the Maulik-Okounkov Yangian. Yeah. And now, okay, that was just an aside. I want to show you the beautiful combinatorics of torus fixed points. If you ever looked at the combinatorics of torus fixed points on Nakajima quiver varieties, it was tuples of partitions and somewhat messy. It is a different picture here, of course equivalent, and of course more general, because it is for bow varieties, and I find it fascinating. The claim is that fixed points are in bijection with tie diagrams. A tie diagram is on the picture; it consists of ties. A tie must connect a blue brane with a red brane, and my way of drawing them as skew lines gives a natural way of connecting them. And it's true that every D3 brane has to be covered by ties as many times as its multiplicity. So just imagine that you only see the brane diagram and the numbers, and it's your task to put ties there. There are lots of choices: for this particular brane diagram there are 123 tie diagrams. This is one of them. So the associated bow variety has 123 fixed points. Here are the fixed points of the Grassmannian Gr(2,4): you see there's a natural bijection with four-choose-two, just as in any of its other names. These tie diagrams transform beautifully under the Hanany-Witten transition, which looks like a Reidemeister III move, and they transform beautifully under 3D mirror symmetry: just take the reflection of the image. So that means not only that we have a bijection between fixed points of Hanany-Witten equivalent bow varieties, which there should be because the varieties are isomorphic, but there is a natural bijection between fixed points of 3D mirror dual bow varieties, the bottom line. That was needed for the 3D mirror symmetry statement. Let me also mention this other combinatorial gadget. This matrix used to be empty, but now try to put zeros and ones in it so that the row sums are the red numbers and the column sums are the blue numbers. If you manage to put zeros and ones in such a way, then you call it a binary contingency table. You know, contingency tables come from statistics, but here only zeros and ones are permitted. It's also an easy theorem that fixed points are in bijection with binary contingency tables. Maybe I want to draw a picture: if you come from Schubert calculus, then you learn to work with, for example, the full flag variety, and everything about the full flag variety is parameterized by permutations. This is a permutation.
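In symbols (my own notation, assembled from the definitions given above, not copied from a slide): writing k and l for the D3-brane multiplicities to the left and right of a five-brane,

\[
\mathrm{charge}(\mathrm{NS5}) = l - k + \#\{\text{D5 branes to its left}\}, \qquad
\mathrm{charge}(\mathrm{D5}) = k - l + \#\{\text{NS5 branes to its right}\},
\]

and a binary contingency table for NS5 charge vector r = (r_1, ..., r_m) and D5 charge vector c = (c_1, ..., c_n) is a matrix M with

\[
M_{ij} \in \{0,1\}, \qquad \sum_{j} M_{ij} = r_{i}, \qquad \sum_{i} M_{ij} = c_{j}.
\]

Two remarks that follow from the statements in the talk rather than appearing on the slides explicitly: first, requiring the charges to be unchanged under a Hanany-Witten transition, as asserted, forces the local rule that when a D5 and an adjacent NS5 brane with outer multiplicities a, b and middle multiplicity d are switched, the new middle multiplicity is d' = a + b + 1 - d; second, when all charges equal 1 the binary contingency tables are exactly permutation matrices, matching the remark that the all-ones case is Schubert calculus on the full flag variety.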
In this language, what I'm saying is, well, first what I want to say is that a partial flag variety would be parameterized by ordered sets of subsets, which can be identified with these kinds of bipartite graphs where the degree doesn't have to be one on the left. So it's like a permutation, but I permit coincidences on the left. For a partial flag variety, everything, the Schubert cells, the torus fixed points, is parameterized by these things. And with bow varieties, all we are doing is permitting higher degrees on the right as well. So bipartite graphs, which are essentially the same thing as binary contingency tables, are the objects that parameterize the cells, or the torus fixed points, or the stable envelopes; there will be a stable envelope for each of them. On the right side are the BCT codes of the torus fixed points on Gr(2,4). Okay, so at the beginning of this talk I wanted to convince you that to find characteristic classes for a space, you have to walk through the following path: from the space, first you find the torus fixed points; you also find the invariant curves, which I skipped, and there's combinatorics behind that as well; then you have the moment graph; and from the moment graph, the axioms give you the stable envelopes. So I gave you half of the story: the combinatorics of the torus fixed points. Oops. Anyway, if you see what's on the top of this slide, then you can create the torus fixed points, you will have the vertices of the thing on the left, and with some more combinatorics you find the edges and the decorations below. And then the stable envelopes are defined, and from this one can calculate. Okay, the same for this other one. I might mention at this point that, of course, that is how you define stable envelopes, but they are hopeless to calculate from the definition, so you don't really do that. As we are understanding better and better, there is some kind of cohomological Hall algebra type of structure among stable envelopes: you can, by convolution, multiply two of them together to get a third one, and there are some formulas as well in certain special cases. Okay. So then I'm just walking through something I already showed you: if you start with those two diagrams, you play the game, you get the torus fixed points, you get the invariant curves, you have these two moment graphs, and then you can calculate the stable envelopes and recover the slide you already saw. All I want to say is that everything follows from just the brane diagrams, nothing else. Okay. So this statement, that the stable envelopes match for 3D mirror dual bow varieties, is proved in certain special cases. For the Grassmannian and its dual it's proved in a paper with Smirnov and Varchenko; for the full flag variety in type A being self-dual it's proved in a different paper by the same authors, and in general type with A. Weber. The hypertoric variety being 3D mirror dual to its dual, in terms of stable envelopes, is proved by Smirnov and Zhou. And we calculated finitely many other cases.
Maybe I should emphasize "many". So it's quite a well-established conjecture that this should happen; of course, everybody thinks it should happen. I mean, you might remember the Higgs branch and Coulomb branch interpretation of this. Oh, that's, okay. It may not be that this is, I don't know. Yeah, I don't know. So I'm not sure that that is a special case of what I'm presenting. Certainly a general G/L, when we are outside type A, is not a quiver variety or bow variety of type A, so that's not a special case of what I'm saying; these are the cases for which the 3D mirror symmetry for characteristic classes is established, or proved. So here is a summary, and actually I'm on time. The summary of what we learned is that for certain nice spaces, for example bow varieties, with a torus action, there are these characteristic classes, the stable envelopes. They have relations to enumerative geometry, they have relations to the representation theory of quantum groups, and they are related to some very important q-difference equations, actually two sets of q-difference equations. So they are there, and I advise everybody to study them; they are very important notions. We also learned that the spaces come in pairs such that the stable envelopes on the two are related, and the natural pool of spaces for detecting this is bow varieties, which is closed under 3D mirror symmetry, and easy combinatorics governs their 3D mirror symmetry. The end. Thank you. Any questions? What's the relation, if any, between the bow varieties and the construction of Coulomb branches we saw last week? So that's the Nakajima-Takayama paper: they prove that these bow varieties are the Coulomb branches. Type A; probably more general, but I understand type A. There was a question in the Q&A: how does the stable envelope change under the Hanany-Witten move? Is this clear? Yeah, so that's an isomorphism, so it shouldn't change. But the way I set it up, you have to make some choices: the torus gets reparameterized, but that just means that in one of the variables, one of the equivariant parameters, instead of u1 you write u1 times h. So it's an isomorphism, there's no change. There was another one: the bow variety description of a given quiver variety includes some additional Kähler parameters that were not visible on the quiver variety side; do you know how these should be dealt with when computing stable envelopes of bow varieties? I didn't say this in the last sentence, but I did want to. So maybe my first remark is that those v's are also there in the quiver picture: they are at these top vertices. Of course there are not enough of them here: this is v1 over v2, and this is v2 over v3. So the top vertices are decorated by the Kähler parameters. They come from the Picard groups: there's a line bundle, the determinant bundle, here and here and here, and these are the Kähler parameters. Okay, and how to deal with them: in the definition of the elliptic stable envelope those v's play a role, and they are just at the right place.
And also, I don't know how much you appreciate it, but under 3D mirror symmetry the equivariant parameters and Kähler parameters just switch, beautifully, as they should. For me that's just another incarnation of the fact that this is a good way of looking at these varieties. For quiver varieties it's not clear how the equivariant and Kähler parameters switch, but in the bow picture it's rather nice. In the recent paper, there is a result you mentioned, some Lie superalgebra version? So I only did the gl(n) part; it is extended to gl(n|m) by Rozansky and myself. What happens is that for each of these five-branes, for some of them you put a star, and then you apply, in a sophisticated sense, a Legendre transform there, and there's a different associated space. And on its cohomology act not the Yangians of gl(n) but the Yangians of gl(n|m). Can you explain the construction? No, it's rather sophisticated. First of all, we don't do it in the GIT way, because we have to give up the holomorphic symplectic structure; maybe it can be saved, but certainly we have to give up the omega. So we rephrase the definition as a Lagrangian intersection definition, and this is quite involved for me. Each piece will be a generalized Lagrangian variety, and the original space would be their intersection in some sense. And where there is a star, first you apply a Legendre transform, and then you intersect. So that will be the new variety. This paper is actually on the arXiv already. Okay, I should add that many of the things I learned about the physics here are from Lev Rozansky, so he certainly deserves his name on it. Any further, yeah? Did you plan to use the combinatorics of these diagrams to calculate the stable envelopes? In some sense they define them, but it's just very complicated. Actually, there is a student, Tommaso Botta, who has a great way of doing this in cohomology, in the quiver setting, and it's some kind of cohomological Hall algebra multiplication. I have a paper from 10 years ago, but that's only in the Schubert calculus setting: you take the cohomological Hall algebra that Markus will define tomorrow, so I'm sorry to take it for granted, and then you take some natural elements called 1, there are many natural elements called 1 in it, and if you multiply them in the right way, you get a cohomology class, and that is the stable envelope. But that is in the Schubert calculus setting, when you take quotient bundles of partial flag varieties. But Markus will convince you that this cohomological Hall algebra multiplication is non-trivial; it's beautiful, but it's not just multiplication, it's non-commutative, for example. The computation that you are mentioning, is it done after localization, in some shuffle algebra way? Actually, that's correct; yeah, I'm cheating a little bit, so indeed that's a local calculation, so it's only for polynomials. So it's a shuffle algebra. But for elliptic, we are trying to set it up; there are lots of technical difficulties. Any further questions? Well, I must say that Tommaso Botta does it for elliptic as well, but for quiver varieties. Maybe then you can point me to it, please. Okay, let's thank the speaker again.
|
There are many bridges connecting geometry with representation theory. A key notion in one of these connections, defined by Maulik-Okounkov, Okounkov, Aganagic-Okounkov, is the "stable envelope (class)". The stable envelope fits into the story of characteristic classes of singularities as a 1-parameter deformation (ℏ) of the fundamental class of singularities. Special cases of the latter include Schubert classes on homogeneous spaces and Thom polynomials in singularity theory. While stable envelopes are traditionally defined for quiver varieties, we will present a larger pool of spaces called Cherkis bow varieties, and explore their geometry and combinatorics. There is a natural pairing among bow varieties called 3d mirror symmetry. One consequence is a 'coincidence' between elliptic stable envelopes on 3d mirror dual bow varieties (a work in progress). We will also discuss the Legendre-transform extension of bow varieties (joint work with L. Rozansky), the geometric counterpart of passing from Yangian R-matrices of Lie algebras gl(n) to Lie superalgebras gl(n|m).
|
10.5446/55054 (DOI)
|
I wanted to talk about some work in progress that I hope we will be finishing up very soon with Thomas Creutzig, Nick Garner, and Nathan Geer. It started out a few years ago as part of an NSF FRG collaboration. And it's related, but probably not in a way that will be obvious, to some previous work with some of these co-authors and Jennifer Brown, a grad student, and Stavros Garoufalidis about recursion relations for so-called ADO invariants. It's also closely related to a paper that appeared last year by Gukov, Hsin, Nakajima, Park, Pei, and Putrov that gives, in a way I will say a little bit more about later, sort of a 3D mirror of our construction. And there's also ongoing work in that direction by Gukov, Feigin, and Reshetikhin. So, to set up the story I want to discuss: there was fantastic progress in math and in physics that started about 30 years ago, in a bunch of papers, but in particular in work of Witten's and of Reshetikhin and Turaev, that connected ideas coming from representation theory of quantum groups, vertex algebras, WZW models, and Chern-Simons quantum field theory, one of the first examples of so-called topological quantum field theories whose partition functions give you topological invariants, in this case of three-manifolds with links inside. So I am hoping that many of you know at least part of this picture. One of the hugely exciting, powerful aspects of this is that there were at least three different perspectives on quantum invariants, and one could study each side and connect them. So a key object, sorry, there's a lot of feedback coming through; let me know if the audio is not okay at some point. Okay, so a key object in each of these constructions is a certain braided tensor category, and in the axiomatics of TQFT on the math side one can reproduce the entire TQFT from this braided tensor category. In Chern-Simons theory, on the physics side, this is the so-called category of line operators: its objects are extended operators that are localized on lines or curves in three-dimensional spacetime. In VOA land, the objects of this category are modules for the relevant VOA, which is WZW, and in terms of quantum groups, objects of this category are modules for a quantum group U_q(g) at a root of unity. Well, more precisely, they are objects of a massive semi-simplification of this category. The category at a root of unity, which is really what I want to talk about during most of this talk, is extremely complicated. It's not semisimple, and it was known a long time ago that it isn't, and only a tiny piece of it goes into the original Reshetikhin-Turaev construction. Right, so like I mentioned, in this old classic story the category involved is semisimple. In terms of physics, semisimple means there are no non-trivial junctions of line operators. And in math, I'm hoping you all know what semisimple means; you would say there are no non-trivial morphisms among objects, and morphisms are junctions on the physics side. Okay, so a lot of progress has been made since then, extending into the non-semisimple world. On the quantum group side, even in the early 90s, Lyubashenko and others started writing down partial TQFTs that started off with a non-semisimple category. And the Akutsu-Deguchi-Ohtsuki, or ADO, invariants of links are related to this.
They're related to a semisimple part of a bigger category that I'll mention later. Well, the problem there is not that it's not semisimple; the problem is that some representations have vanishing quantum dimensions that need to be regularized. These sorts of problems come up in general when looking at the big category of U_q(g) modules at a root of unity and need to be dealt with somehow. A systematic set of tools for dealing with the various problems that come up has been constructed much more recently, starting in work of Costantino, Geer, and Patureau-Mirand, and extended in a bunch of different ways since then. And so now, well, there are still extensions being developed, but there is at least one TQFT defined using these new techniques that involves the full representation category of U_q(g). I'll talk about this category in about 10 minutes. On the VOA side, there have been similar developments. Going non-semisimple there, in its simplest incarnation, means generalizing from rational VOAs to logarithmic VOAs. The simplest logarithmic vertex algebra is the so-called triplet model, and that's been generalized in many ways, but the generalization that's relevant here is in terms of what are called Feigin-Tipunin algebras; that's what FT stands for, so this is Feigin-Tipunin. And with suitable matching, the representation category, the module category, of these Feigin-Tipunin algebras is supposed to match modules for quantum groups at certain roots of unity. Well, it was conjectured a long time ago, and in recent work, I'll give you a few references here, the equivalence has been proven in certain cases and upgraded to an equivalence of actual braided tensor categories, even modular tensor categories, which is the data you need to build the TQFT out of this. Okay, so I said a lot has been done. Yeah, sorry, was there a question? Okay. So what hasn't been done is the physics side of this. Chern-Simons theory led to a lot of amazing computations and predictions, things like geometric quantization to produce Hilbert spaces, Kontsevich-type integrals, and so on. And the Chern-Simons part of this non-semisimple story does not exist yet. So part of what we're doing is to propose a quantum field theory that fits in this third, physics, perspective. Some non-semisimple aspects, I should say, have appeared: triplet algebras have started appearing in supersymmetric theories, in a paper on 3D modularity by Cheng, Chun, Ferrari, Gukov, and Harrison. So logarithmic things have started appearing, in particular in a supersymmetric context. Supergroup Chern-Simons theories have also been investigated, and, using physics language, I would say they fall into the same universality class: supergroup Chern-Simons theories have a lot of properties in common with the theories I will discuss today; they're just not exactly the direction we're going in. So the result I want to talk about, part proposal and part result, is that there exists a three-dimensional quantum field theory that is topological, whose category of line operators matches the big quantum-group-at-a-root-of-unity category. The case I'll focus on is an even root of unity, and I am certain about this in type A; there are obvious generalizations for groups of other types.
And the category of lines matches modules for the Feigin-Tipunin algebra; in fact, the way we actually get at this more naturally is through a sort of level-rank dual of the Feigin-Tipunin algebra. So there are actually two vertex algebras that appear which have equivalent categories of modules. So we have what I would call a physics proof that the vertex algebra categories are the categories of lines in the QFT I will talk about, certainly for the case of SL2, and I think for general type A. So, to make this statement I have to say what I mean by "physics proof", taken suitably liberally; I would say this makes sense for general type A. And Thomas Creutzig, in our paper, gives a proof that for SL2 the two vertex algebras I showed here are equivalent, and it's already known that the Feigin-Tipunin algebra is related to quantum SL2 modules at a root of unity. So the new bit is the left side of this picture. So whatever quantum field theory sits on the left here had better be labeled by a group or a Lie algebra and an integer k that tells you what root of unity we're working at. I changed the page on my screen but it's not coming through, so I may need to re-share. Just a sec. Oh, you guys can see that. Okay. Good. Right, so the theory starts out as a 3D N=4 theory, it's mostly a 3D N=4 theory, the kind of QFT that can be analyzed using all of the lovely algebraic techniques that you're hearing about in other parts of this workshop, taken in a particular topological twist. So it's sort of a mix of a 3D N=4 theory and a Chern-Simons theory; that's what that G at level k is doing there. And so it falls slightly outside the class of things that are easily analyzed algebraically right now, and one of the points of my talk is to motivate all of you to think about how to solve this problem. Sorry, and I see Sasha has a question in the chat: is it for all G? So, this theory that I'm talking about here makes sense for all G and all k. I was asking about the previous line, about the... No, there are subtleties. I would expect it to work for A to E type, but there are huge technical things that happen later, where some Langlands duals need to be involved, and there are natural guesses for what happens, but the story is not so simple. Okay, thank you. And it's sort of clear from each perspective why the story is not so simple; matching up the various subtleties is hard and is something we realized we should not attempt in this first shot at the project. So, one can write down a Lagrangian, an action, for this theory in type A using a twisted BV formalism, which is kind of nice. In that sense it's very much like Chern-Simons: it's a concrete theory. And using that Lagrangian one can write down a boundary vertex algebra, the analog of WZW, and there are in fact two of them that show up. In WZW there's also another algebra you could use, the level-rank dual of WZW, that has exactly the same category of modules, and here as well there are two natural algebras that show up which are level-rank duals of each other, in the appropriate generalization of level-rank. And it should be possible, it's sort of clear what to write down for other types from brane constructions, but there are lots of technical issues that show up. Right.
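Schematically, the claim being set up is the following chain of (derived) equivalences; the notation is mine, and the precise value of q in terms of k is my guess at the convention, since the talk only specifies an even root of unity:

\[
D^{b}\big(\mathrm{Lines}(\mathcal{T}_{\mathfrak{g},k})\big) \;\simeq\; D^{b}\big(U_{q}(\mathfrak{g})\text{-mod}\big)\big|_{q=e^{i\pi/k}} \;\simeq\; D^{b}\big(\mathcal{FT}\text{-mod}\big),
\]

where the middle term is the full (non-semisimple) category of quantum group modules at the even root of unity and the right-hand side can equally well be replaced by the level-rank dual vertex algebra with the equivalent module category.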
So, we write all this down, and then the natural question is what you gain from the physics and what you can compute to check that this guess is even correct. It's easy to compute, using supersymmetric localization techniques, the Grothendieck group of the category of line operators as well as Euler characters of Hilbert spaces, which, as I said, have a D^b sitting everywhere; there's a D^b sitting everywhere. So, the physics theory, because it's the sort of supersymmetric theory that needs to be topologically twisted to get something topological, is naturally a derived beast: the category of line operators is naturally a DG category. And so all of the equivalences that we get, particularly between the QFT and the VOAs, that first equivalence, are derived equivalences. There are many different ways of describing the category of line operators, and it shows up as a DG category, and it is equivalent to the derived category of modules for this VOA. The thing that's also relevant on the quantum group side is the derived category of quantum group modules. The things associated to surfaces are going to exist in many cohomological degrees, and their Euler characters are easy to compute as partition functions. And so what we can compute are things like the category itself from a QFT perspective, or the Hilbert space itself with its mapping class group action, and I'll indicate some techniques to try to go about that. But this is where fully generalizing the algebraic approaches to 3D N=4 theories, things like the BFM construction, would be really useful. That's the end of the long introduction. Let me know if there are any questions, and feel free to interrupt me at any time as well. I want to actually say a few more concrete things about the representation theory of quantum groups at a root of unity, and I'm just going to stick to the case of SL2 here for illustrative purposes. Great. So, quantum SL2 looks like standard SL2 except the Cartan generator has been exponentiated. And it looks like there is a question coming, but I'll wait till it actually comes. At an even root of unity, in fact at any root of unity, but even is the relevant situation for me, there is, in addition to the center that comes from Casimir operators, the Harish-Chandra center, an extra bit of the center that just comes from k-th powers of the E and the F and the K. The extra central elements act as constants on any indecomposable representation, and so the representation theory decomposes into little pieces based on what values the central elements take. The fancy way to say that is that it ends up fibering over Spec of this other part of the center, and this other part of the center parameterizes an open cell in the group PGL2. The reason PGL2 is coming up here is because it's the Langlands dual of SL2. At an even root of unity, the fancy way to say this is that the category of modules fibers over PGL2; at an odd root of unity it fibers over the group itself. I think technically I should also say this is a coherent sheaf of categories, which is also something that shows up in the field theory. But I think that's all I want to say for now.
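As a reminder of the algebra being discussed, here is a standard presentation, not copied from the slides: quantum SL2 has generators E, F, K^{±1} with

\[
K E K^{-1} = q^{2} E, \qquad K F K^{-1} = q^{-2} F, \qquad [E,F] = \frac{K - K^{-1}}{q - q^{-1}},
\]

and at an even root of unity (take q = e^{i\pi/k}, a primitive 2k-th root) the elements E^{k}, F^{k}, and K^{\pm 2k} are central, on top of the Harish-Chandra center. The extra central subalgebra they generate is the one whose Spec gives the open cell in PGL_2 over which the category of modules fibers.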
A really simple example of this: if you take a diagonal element of PGL2 with some eigenvalue e^alpha, one associates to it the subcategory of modules on which E^k and F^k act as zero, coming from those off-diagonal zeros, and K^{2k} acts as e^alpha. I should also say that the relevant thing about these different fibers, or different stalks of the sheaf, is that there are no Homs between them, so they're each full subcategories, and the total category is a direct sum of all of these stalks. And, yeah. Right. So, terrible transition: it was proven a long time ago, by De Concini and Kac, and in the even root of unity case by Beck, that what the stalk or fiber over a particular element of PGL2 looks like only depends, up to equivalence, on the conjugacy class of this element in PGL2. And then what was realized a little later, initially by Kashaev and Reshetikhin, is that this fibering of the category over PGL2 leads not just to invariants of links and three-manifolds, but more generally to invariants of links in three-manifolds with flat connections in the complements of the links. This was roughly, but not entirely, correct at the time, and it has been made very precise in the case of links in S3 in a recent paper by Blanchet, Geer, Patureau-Mirand, and Reshetikhin. So they build invariants of links in S3 with flat PGL2 connections in the complement of the link, and there's work in progress trying to promote that to a full TQFT that gives invariants of three-manifolds with flat PGL2 connections. And the picture I always have in mind when trying to understand why this should be true is the following. The category has all these different blocks or stalks labeled by elements of PGL2. If you try to translate this into some sort of physics or topology picture that involves, say, links whose strands are labeled by objects of this category, you should think that a strand labeled by an object from the piece of the category labeled by some element g has the property that the holonomy of your background flat connection, the flat connection that you've enriched the three-manifold with, around a small loop encircling this line is g. If you just have a single line, only the conjugacy class of the holonomy should matter, which is consistent with the old theorem of De Concini, Kac, and Procesi that a fiber of this category only depends on the conjugacy class. Now, if you want to start building a TQFT out of this, you need to make sure this is compatible with the tensor structure in the category. You would expect, if you're looking at base-pointed holonomies, that when two lines collide, one gets the tensor product of those representations using the Hopf algebra structure of the quantum group, and that had better be compatible with multiplying the base-pointed holonomies. It's literally true when all of your holonomies are diagonal, and it is almost true when the holonomies are general non-abelian things; the "almost" was described precisely in the work of Kashaev and Reshetikhin and in the later paper by Blanchet, Geer, Patureau-Mirand, and Reshetikhin. I'll just focus on the abelian case. Okay.
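In symbols, the compatibility being described (my notation, a sketch under the stated assumptions): if C_g denotes the stalk of the category over g in PGL_2, then a strand labeled by an object of C_g carries meridian holonomy (conjugate to) g, and fusion should respect base-pointed holonomies,

\[
\otimes \;:\; \mathcal{C}_{g} \times \mathcal{C}_{h} \;\longrightarrow\; \mathcal{C}_{gh},
\]

which holds on the nose for diagonal holonomies and only up to the corrections worked out by Kashaev-Reshetikhin and Blanchet-Geer-Patureau-Mirand-Reshetikhin in the general non-abelian case.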
So, this sort of setup, where we're looking at three-manifolds decorated by flat connections, appears in physics when we have a quantum field theory with a global symmetry, as opposed to a gauged symmetry. Quantum field theories with global symmetries can couple to connections that are not fields one integrates over in the path integral, but fields that are just put in by hand and fixed for all time. And so the sort of theory we're looking for had better have some sort of global symmetry. Theories in the background of a holonomy defect like this were described really nicely a few years ago in a paper of Victor Mikhaylov, in the context of supergroup Chern-Simons, but the story is very similar here. Okay. Let me try to describe what a few of these fibers look like, just so I can say some concrete things later on. If we take a generic diagonal element of PGL2, then we look at representations of U_q(sl2) on which K^{2k} acts as that eigenvalue and E^k and F^k act as zero. The category ends up being semisimple with exactly 2k simple objects, which look sort of like standard highest-weight modules, except the weights are eigenvalues of K rather than of an H, so they're e^alpha times powers of q, something like that. They all have dimension k, and the weights wrap around the unit circle, which gives rise to vanishing quantum dimensions, one of the things that needs to be regularized in order to build a TQFT out of this. But that was done: the ADO invariant uses precisely these sorts of representations, Murakami also worked on that in the 90s, and then Costantino, Geer, and Patureau-Mirand systematized the data you need so that your link invariants are not all zero. Otherwise the vanishing quantum dimensions would naively tell you that even the Hopf link, even the unknot, has expectation value zero. These generic stalks of the category have an extremely simple tensor product, which looks like what you would expect for an abelian theory, like for quantum gl(1). It leads to very easy calculations of dimensions of putative Hilbert spaces, or spaces of states, on various surfaces. So: the space of states on the torus. You can generate states by filling in the torus to a solid torus and putting objects, colored by all the different possible representations, along its core. Now, since we're dealing with a TQFT enriched by flat connections, we shouldn't just say "the Hilbert space of the torus"; we should say the Hilbert space of the torus together with a flat PGL2 connection on it. Putting an abelian connection on there, with a specified holonomy around the meridian of the torus, tells us what piece of the category to choose objects from on the core. And so we just take the 2k different objects in that piece of the category along the core; that gives us 2k states in that Hilbert space. And I'm saying "Hilbert space" because in physics we always say Hilbert; these are not necessarily Hilbert spaces in the mathematical sense. In the sorts of TQFTs I'll be discussing they are just vector spaces of states: they have duals, but they're not necessarily isomorphic to their duals in a natural way. In genus g,
one can use the very simple monoidal structure to also draw all trivalent networks of the 2k line operators inside the core of a handlebody whose boundary is a particular genus g surface and count what the options are, and one gets 2^g times k^(3g-3) different states. That's a very easy combinatorics problem. Okay, so that's the generic setting, when you're looking at a surface with a generic flat connection on it. There's also the most interesting, most non-generic case, when the flat connection is trivial, the connection which is zero everywhere. In this case, looking at what lines should be labeled by in the presence of trivial holonomy, the answer is representations of what is called the small, or restricted, quantum group: modules on which K^{2k} acts as one and E^k and F^k act as zero. This is an extremely well-studied category in representation theory. It is not semisimple. It has 2k simple objects, which can form interesting extensions with one another. The 2k simple objects roughly look like two copies of the ordinary representations of sl2 of dimensions 1, 2, 3, up to k. Taking the projective covers of the simple objects, one gets projectives with diamond structures in their Loewy diagrams, and there are 2k indecomposable projectives in this piece of the category as well. The semisimplification that was used by Reshetikhin and Turaev uses a single copy of what look like ordinary sl2 representations of dimensions 1 up to k-1. If one quotients out by everything else, setting it to zero, and another way of saying "everything else" is everything that has vanishing quantum dimension, one gets the semisimplified category that leads to the old story. The invariant that's involved in the volume conjecture is defined using that last representation of dimension k, which is both simple and projective. And finally, the entire category is what Lyubashenko considered when starting to define non-semisimple TQFTs in the 90s. Okay. So, I mentioned at the beginning that in general our spaces of states on surfaces will be homological beasts: they have multiple cohomological degrees. In terms of the category of lines, mathematically one would calculate the space of states in general by taking the Hochschild homology of the appropriate piece of the category, depending on what flat connection we've chosen. So on a torus with zero flat connection we have the small quantum group category, and we should find that its space of states is the Hochschild homology of the small quantum group category. What that amounts to is taking into account the fact that when there are morphisms among the objects in your category, you can't just wrap single lines around the core; you have to consider junctions of multiple lines. Those give you the zeroth Hochschild homology, and higher degrees in Hochschild homology come from integrating descendants of junctions around paths in this core; that is a statement for anyone in the audience who knows about descendants. So it's a totally sensible thing to do, both physically and mathematically. In order to compute the Hochschild homology of this category it's useful to have a geometric description of it.
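Before the geometric description, a minimal sketch of that small quantum group in formulas; these are standard definitions, with conventions (even root of unity q = e^{i\pi/k}) that I am assuming match the talk:

\[
\bar{u}_{q}(\mathfrak{sl}_{2}) \;=\; U_{q}(\mathfrak{sl}_{2}) \,\big/\, \big(E^{k},\; F^{k},\; K^{2k}-1\big),
\]

with 2k simple modules (two families of highest-weight modules of dimensions 1 through k, distinguished by a sign of the highest weight), 2k indecomposable projective covers, and a semisimplification, obtained by killing everything of vanishing quantum dimension, that retains only the k-1 simples of dimensions 1 through k-1 used in the Reshetikhin-Turaev construction.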
Arkhipov, Bezrukavnikov, and Ginzburg, and subsequent authors, gave such a geometric description, which at an even root of unity amounts to saying that, as a category, not as a monoidal category, the derived category of representations of the small quantum group has two semisimple pieces, coming from the two simples that are also projective, together with a bunch of copies, k minus one copies, of the derived category of coherent sheaves on T* of the flag manifold. That gives a geometric way to compute Hochschild homology, which ends up looking like, I want to say, total Dolbeault cohomology: the total cohomology of these T* flag varieties, computed in an algebraic way. And then there's an answer: the relevant space of states becomes infinite dimensional, with non-negative cohomological degrees, and it's finite in each cohomological degree. The part that Lyubashenko could have used back in the 90s, I say that on the next slide, the part that Lyubashenko would have used, is just degree zero, HH_0. But there ought to be a derived generalization of all of the earlier work, and also of the current work of CGP and collaborators. Okay. So I've written what the different pieces look like in terms of representations of the symmetry group PGL2 that acts on T* flag. There are also nice things to say about deforming from the trivial connection to a nonzero generic connection. Which, there we go, that's the slide I actually want to be on; I'll just say this very briefly. At generic connection, the space of states had dimension 2k; at zero connection the space of states is this infinite dimensional thing with infinitely many graded components. And there's a differential that one can turn on to deform the infinite dimensional thing to something that exists only in degree zero and has dimension 2k. The Euler character is invariant under this deformation; the Euler character doesn't care what flat connection you put on. And at a category level, the field theory interpretation that I'm about to get to also suggests that, well, this is a coherent sheaf of categories, and very close to the identity element in PGL2, coherent sheaves get deformed to matrix factorizations with a superpotential, and that superpotential involves the complex moment map paired with the particular element of PGL2 that defines the stalk we're looking at. Okay. I said that Euler characters don't change under this deformation. Okay, sorry, I probably said too much about that. I have really enjoyed, during this project, learning and trying to put together more of the structure of this big category of representations of the quantum group at a root of unity; you can probably tell. Okay, so I'm going to take all of that information and translate it quickly to physics, and then try to explain what one still has to do on the physics side. We're looking for a quantum field theory that is labeled by a Lie algebra and a level, an integer k. That's Chern-Simons-like.
And it sort of looks very close to the Reshetikhin-Turaev setup: the category should have some finiteness, in each piece of the category a finite number of objects; it should have something that looks like colorings of lines. And in this previous paper on recursion relations for ADO invariants we found structure that was very similar to the semisimple story: ADO invariants are different from the colored Jones polynomials, but they obey the same recursion relations. So there are a lot of things that look the same; you should see something Chern-Simons-like here. However, the line operators in this theory should have non-trivial junctions; we should not be getting a semisimple category. A very easy signal that a quantum field theory gives rise to a non-semisimple category of lines is that there are non-trivial local operators. This is the sort of thing that the BFM construction computes: BFM computes an algebra, the algebra of local operators in a 3D field theory. So that construction applied to this story had better give you something non-trivial; if you apply it to Chern-Simons it just gives you the identity and that's it. And there should be a global symmetry around that leads to flat connections. Those criteria lead to an essentially unique answer, which, if you're not familiar with the physics side of the story, is going to look strange and weird and not unique at all. But anyway, the easiest thing one can do to satisfy these properties is to start with a 3D N=4 supersymmetric theory that Gaiotto and Witten introduced, called T[G]. And even though I write a group here, it secretly only depends on a Lie algebra. Roughly speaking, it has symmetry G times the Langlands dual of G. Sorry, is this literally true? I thought it only had the G symmetry and its mirror dual had the G-check symmetry. No, I mean, it has both: one acts on the Higgs branch and one acts on the Coulomb branch. But the one acting on the Coulomb branch is not a symmetry of the theory. No, of course it is; it's exactly on the same footing. If you write down a Lagrangian for this theory in type A, all you see is the maximal torus, but otherwise, once you pin down the theory in terms of fields, what you call the Higgs branch and what you call the Coulomb branch is arbitrary at the level of the quantum field theory. So it really does have both of these symmetries. So you mean before you twist it? Yes, exactly. Okay, and they act in different ways. Right, of course: after you twist, depending on what sort of twist you use, only one of them will appear as a symmetry of the twisted theory, or they act in very different ways. Okay, yeah, now I see what you mean. Okay. So then we take the G symmetry and gauge it. But this gauging is different from the standard 3D N=4 gauging; it's something one can do in theories with less supersymmetry, in physics terms using an N=2 vector multiplet. And this less supersymmetric gauging allows the introduction of a non-trivial Chern-Simons level. So one sort of writes down, if you're not worried about supersymmetry, the Chern-Simons Lagrangian for G and couples it to the rest of this theory, to T[G]. And the result of this gauging, does it still have some supersymmetry left? Yes, but this is subtle.
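To summarize the construction up to this point, in my own shorthand rather than a formula from the slides:

\[
\mathcal{T}_{G,k} \;:=\; \Big( T[G] \times \mathrm{CS}_{k}[G] \Big) \big/ G,
\]

meaning one couples the G global symmetry of T[G] to a dynamical G gauge field, using an N=2 vector multiplet carrying Chern-Simons level k, and the Langlands dual symmetry is what survives as the global symmetry whose background flat connections decorate the three-manifolds.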
[Question:] Does the result of this gauging still have some supersymmetry left? [Answer:] Yes, but this is subtle. In physics terms, if you write down a Lagrangian and try to do this in terms of fields, you can only see N=2 supersymmetry; in the infrared, or with a suitable twist, it should have full N=4. Before getting to the topological twist, maybe I'll also say: using the BV formalism one can actually write down a Lagrangian that does not look like it has N=2 or N=4 supersymmetry, but that has the single supercharge you need in order to perform what would otherwise be the twist. So in the infrared this thing regains full N=4 supersymmetry and you can topologically twist, but there are subtleties around this. If you believe that this has N=4 supersymmetry, there are two twists available: one focused on the Coulomb branch and one focused on the Higgs branch, the A twist and the B twist. The dual symmetry behaves in different ways with respect to the two choices. The one that leaves you with the ability to turn on flat dual-group connections is the A twist in this case, the one focused on the Coulomb branch. The B twist of the same theory is something that Kapustin and Saulina studied ten years ago, something they called Chern-Simons-Rozansky-Witten theory. So the B twist is Chern-Simons-Rozansky-Witten, but it is a completely different theory, or at least it behaves very differently, and it's not the thing that's relevant for quantum groups at roots of unity. The other thing to say is that this is a quiver gauge theory. Sorry: when the group is type A, when we're looking at SU(N), this is a quiver gauge theory. Most of the quiver is the quiver you would write down for T* of the flag variety, the Nakajima quiver variety for T*(flag). However, in T*(flag) there's a framing node, and for GL(N), or for SU(N), that final framing node is the thing that gets gauged with an extra Chern-Simons level. So the gauge group of the theory is a product of GL(1) through, not GL(N-1), but all the way through GL(N), with the Chern-Simons level for GL(N). Without this extra Chern-Simons gauging, both the Higgs and the Coulomb branch look like T* of the flag manifold. The extra gauging destroys the Higgs branch, and it actually seems to do nothing at all to the Coulomb branch, except, well, saying it is the same T*(flag) is maybe incorrect; I should rather say it's the nilpotent cone. This extra Chern-Simons gauging seems to introduce some extra singularities at the origin of the nilpotent cone, which I can't discuss any more precisely than that. [Question:] What do you mean when you say it destroys the Higgs branch? [Answer:] Right. What it does is take a Kähler quotient of the Higgs branch, not a hyperkähler quotient, just an ordinary one. If there weren't a Chern-Simons level around, you would expect the Higgs branch to just be quotiented by GL(N). [Question:] So with the less supersymmetric gauging, the Higgs branch is no longer holomorphic symplectic. [Answer:] Exactly. The maximal torus of the flavour symmetry, which in this case is PGL(2) or PGL(N), shows up in terms of complexified resolution parameters for the Coulomb branch; sorry, they're deformation parameters for the Coulomb branch. Right, I'm short on time.
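For G = SU(N), the quiver data just described can be summarized as follows; this is a paraphrase of the spoken description rather than a formula from the slides, and the notation is chosen here:

\[
  \text{gauge group:}\quad GL(1)\times GL(2)\times\cdots\times GL(N-1)\times GL(N)_k \,,
\]

a linear quiver whose first N-1 nodes form the usual T*(flag) quiver, with the would-be framing node GL(N) gauged as well and carrying the Chern-Simons level k.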
I think that's all I'll say; there's been a lot about 3d N=4 theories at this conference. The Wilson line operators that we wanted to see show up because of the Chern-Simons factor. It's surprising in general that one would have Wilson lines in this A twist that is focused on the Coulomb branch, but it can happen here precisely because of the Chern-Simons level; otherwise it would not happen. Okay, so then there's a black box. This, I claim, is a theory for which one can write down a Lagrangian, a Lagrangian in the BV formalism. There are lots of localization techniques that apply to it, or, if you just think of it as an N=2 supersymmetric theory, there are localization techniques that will compute partition functions and expectation values of some line operators in this theory. This is the work of Nekrasov and Shatashvili and recent work on twisted indices. In about half a day one can code up a computation that spits out Euler characters for spaces of states, and I remind you that Euler characters don't care about what flat connection is turned on. One gets, on the nose, the right answer for SU(2), and we've checked SU(3) and SU(4) as well. Also, using the Bethe roots analysis of Nekrasov and Shatashvili, which gives you expectation values of line operators, one can get the Grothendieck ring of the small quantum group. That's with zero flat connection, so it tells you something about the small quantum group, but it doesn't tell you anything about the interesting non-semisimple behaviour. To actually see more of the non-semisimple structure directly from the field theory, one should try to apply a lot of the more modern methods developed in the last five years or so, including by BFM and Webster, work of mine with Hilburn and company, and recent work of Mat Bullimore and co-authors on writing down Hilbert spaces. So there are algebraic tools one can use; one has to adapt the current techniques to this hybrid case, which is mostly 3d N=4 in the A twist with a bit of 3d N=2 with a Chern-Simons level. The sort of thing one expects for the category of line operators looks as follows, and I want to compare on this slide what happens in ordinary Chern-Simons theory to the quantum field theories I'm discussing. You can regard ordinary Chern-Simons theory as a 3d N=2 theory and use modern techniques to say what the category of line operators should be, and the answer is loop-group-equivariant coherent sheaves on a point, where of course it's the loop group at level k, which is the correct thing to match WZW, the semisimplified quantum group, and the whole Chern-Simons story. In these new theories, what one gets instead should look something like loop-group-equivariant coherent sheaves on a deformation of the tangent bundle of the loop space of the Higgs branch of this quiver. If I didn't say loop-group-equivariant, and I did say deformation, it's the deformation that deforms coherent sheaves to D-modules; and D-modules on the loop space of the Higgs branch, due to a bunch of the references above, is the modern description of what the category of line operators in the A twist of a 3d N=4 theory should be.
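Schematically, and with all derived and equivariance subtleties suppressed, the comparison just made can be written as follows; the notation is chosen for this transcript, not taken from the talk:

\[
  \mathcal{C}^{\mathrm{CS}_k}_{\text{lines}} \;\simeq\; \mathrm{Coh}^{LG_k}(\mathrm{pt})\,,
  \qquad
  \mathcal{C}^{\text{new}}_{\text{lines}} \;\simeq\; \text{``}D\text{-mod}^{LG}\!\big(L\,\mathcal{M}_{\mathrm{Higgs}}\big)\text{''}\,,
\]

where the right-hand side stands, heuristically, for the loop-group-equivariant deformation of coherent sheaves on the loop space of the Higgs branch mentioned above.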
One needs to somehow combine that D-module deformation with the loop-group equivariance. Maybe there's an obvious way to do that mathematically; I have not thought about it enough, except to make this heuristic statement. A very concrete way to do this, which looks very close to the sort of category I've written down, is to go to vertex algebras, which I'll get to in the last minus-one minutes of the talk. One of the vertex algebras I will write down looks very much like functions, the derived algebra of functions, on the space that we're taking coherent sheaves over. Okay, Hilbert spaces. In Chern-Simons theory, in geometric quantization, they show up as sections of the k-th power of some line bundle over Bun_G. In this new setting, they should show up as sections of the same line bundle tensored with a very complicated sheaf which, if you don't turn on any flat connection, is infinite rank but of finite rank in each homological degree. It's a sheaf that Gaiotto initially described for the T[G] theories. So it's again something that we can heuristically write down. I have not done any explicit calculations with it yet, except in the case where the surface is genus zero. There one is supposed to get local operators in the QFT, and one gets a one-dimensional space associated to S^2 in Chern-Simons and functions on T*(flag) in this other theory; and functions on T*(flag) is the thing you would get from the derived category of small quantum group representations I mentioned earlier as well. That's functions on the Coulomb branch, which is why I said the Coulomb branch as a variety has not changed, though extra stackiness is involved. For higher genus surfaces, this sheaf description suggests that you should get something roughly looking like the Chern-Simons Hilbert space times a factor involving the cohomology of T*(flag). That's actually what we got by computing Hochschild cohomology of the small quantum group representation category: it was almost of this form, with a few extra factors. So at least things look reasonable, but it would be nice to actually do the computation more precisely in this geometric quantization language. Okay, and I'll finish up quickly. First, there's a 4d construction of this quantum field theory, which I mentioned, that is closely related to geometric Langlands and 4d S-duality with different boundary conditions; I can answer questions about it at the end if anyone is interested. This 4d setup seems to be closely related to the work of Gukov et al. from last year that I mentioned in the introduction. And finally, there are vertex algebras involved, and the vertex algebras come from either putting boundary conditions on the 3d quantum field theory, or from working in this 4d construction and considering not just a sandwich of boundary conditions but a corner, with yet a third boundary condition, using the work on vertex algebras at the corner that Sasha must have talked about yesterday. Anyway, one can extract vertex algebras from these brane constructions and from the field theory, and there are actually two natural ways to do it. I mentioned level-rank duality before: in the classic Chern-Simons story there are two WZW models that play a role, and they are cosets of each other inside some number of free fermions.
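For orientation, the standard level-rank statement being alluded to (recalled here from the general literature rather than quoted from the talk) is, for SU(N) at level k,

\[
  \widehat{\mathfrak{u}}(Nk)_1 \;\supset\; \widehat{\mathfrak{su}}(N)_k \,\oplus\, \widehat{\mathfrak{su}}(k)_N \,\oplus\, \widehat{\mathfrak{u}}(1)\,,
\]

so that each of the two WZW algebras is (up to the abelian factor) the commutant, i.e. the coset, of the other inside a system of Nk free fermions.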
In the new story there are also two algebras that play a role: one is the Feigin-Tipunin algebra, and one is something new. The something new is the thing that looks like functions on that strange space on which I was trying to take loop-group-equivariant D-modules. The Feigin-Tipunin side of this directly relates to the small quantum group; and I haven't turned on any flat connections when making these statements, but the vertex algebras can also be deformed by flat connections. As for this new vertex algebra: one gathers some number of copies of a beta-gamma system for each edge in the quiver, then takes a BRST quotient by GL(1) up through GL(N-1) for the nodes in the quiver, and then, not a BRST quotient but just derived invariants, for a final copy of SL(N) at level... So that's the new vertex algebra, and Thomas Creutzig showed that this is actually dual to Feigin-Tipunin for SL(2). Okay, but sorry, I'm horribly out of time. All I'll say is that there's a really amazing vertex algebra story here; Thomas could give a one-plus-hour talk on the vertex algebra side of this. So let me say: I've indicated that it would be really awesome to generalize the BFM-like and D-modules-on-the-loop-space constructions for line operators (I think that's the best bet), and Webster's work, to include Chern-Simons levels; that should land, in this case, on the small quantum group module category, and I think that's just all correct. It would be great to implement this geometric quantization perspective, to start looking at other surfaces, and even more to get actions of the modular group or the mapping class group on spaces of states, which are currently kind of hard to implement. There's been work for the torus, by Lachowska and Qi, looking at the action of the modular group on Hochschild cohomology of that small quantum group category, but I don't have anything in higher genus. So, yeah, these should fit beautifully into some derived version of this CGP-like TQFT. Okay, thanks.
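A schematic rendering of the vertex algebra construction described just above; the level on the final node is not recoverable from the recording (it is written as an unspecified kappa below), and the node ranges are a best reading of the spoken words:

\[
  V \;\simeq\;
  \Big(\bigotimes_{e\,\in\,\text{quiver edges}} \beta\gamma \Big)
  \Big/\!\!\Big/_{\mathrm{BRST}}\; GL(1)\times\cdots\times GL(N-1)\,,
  \qquad\text{followed by derived invariants for } \widehat{\mathfrak{sl}}(N)_{\kappa}\,.
\]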
[Chair:] If you have a question, now you're able to speak. [Question:] Yeah, can you analytically continue your generalized Chern-Simons theory with respect to the level, and is it any different from ordinary Chern-Simons? [Answer:] I'll answer the technical one first. Yes, the answer is certainly yes in some cases. Sergei Gukov and co-authors already described the analytic continuation of the ADO invariants. The analytic continuation uses a generalization of this 4d setup that I wrote down, with different boundary conditions. It's always easier to discuss the analytic continuation than what happens at the integer level, and it's very similar to what Witten did with ordinary Chern-Simons: the different choices there are all about choosing boundary conditions carefully in this 4d story, and one needs to move to a half-space. [Question:] I got it. And the second question: you mentioned the work of Gaiotto and Rapčák on the vertex algebras at the corner. I understand that it has a kind of geometry, a certain derived category of coherent sheaves on, say, C^3, supported on some fat divisors given by coordinate planes, which define the corner. So this is a simple case of that? [Answer:] Yeah. [Question:] Okay, but just looking at this simple case, how do you see this geometry? Do you see it at all? [Answer:] Yes. Let me see if I can actually draw on here. The thing you're talking about involves the full vertex algebra at the corner, and here we're looking at (N,0,0) branes. So that's trying to describe what happens here, and one also needs to tilt the branes relative to each other a bit, according to the Chern-Simons level; so what I drew was probably not quite right, and instead one needs some sort of tilted corner. One can extract from the corner the pieces that show up here. In one of these corner constructions, the one that gives rise to Feigin-Tipunin, one starts off with a corner that, after the simplification, spits out a W-algebra. The general corner gives these Y_{L,M,N} algebras, which are massive generalizations of W-algebras. [Question:] In your story you see only (N,0,0). [Answer:] Exactly. And of course it would be beautiful to start generalizing this and relating it to other quantum groups, but yeah, my story is a very, very small part of that. So one collides two corners: they both have (N,0,0), but with slightly different decorations on them. One gives rise to a W-algebra and the other gives rise to Kac-Moody, rather to WZW, and colliding those and taking an infinite-level limit gives Feigin-Tipunin. [Question:] Okay, all right, thank you. [Chair:] Any other questions? [Question:] I have a question. Maybe let me summarize what you did at the end; I just want to check whether I understood it correctly. You take this theory T[G], and then you gauge it in a Chern-Simons sense, and you get some N=2 supersymmetric theory which still has a Higgs branch and a Coulomb branch. But, okay, it's kind of magical that it still has A and B twists. [Answer:] It's not magical, but... [Question:] I mean, generic N=2 theories only have a holomorphic twist. [Answer:] They have a holomorphic-topological twist. The way we actually analyze this is by starting with the holomorphic-topological twist and observing that there is an extra differential that can still be turned on to deform it to a topological twist. So it's not something general, it's quite specific; it's very special. [Question:] Okay, yes. Now, what is the statement (sorry, this is probably one of the main points of your talk, which I missed): what is the statement of the relationship between this theory and representations of the quantum group? [Answer:] So, the A twist of this theory. This is the theory that still has a symmetry, the symmetry that acts on the Coulomb branch; the Coulomb branch is still the nilpotent cone, so the symmetry is the Langlands dual group. One can, if one wants, turn on a background flat connection for that symmetry, and work on three-manifolds with flat connections. If you don't do that, and you just ask, without any deformation, what the category of line operators is: the category of line operators in that theory is representations of the small quantum group. It should be the derived category of representations of the small quantum group. [Question:] So the category of line operators is... [Answer:] For the A twist, for the A twist. [Question:] Okay. And this is something that you can more or less prove? [Answer:] Yes.
But the only way we can actually prove it is by introducing a holomorphic boundary condition that supports the vertex algebra, and assuming that the category of lines is the same as modules for that vertex algebra, which is what you would expect for a sufficiently rich boundary condition. Then we take that vertex algebra and prove that it's dual to Feigin-Tipunin, which is known to be the same as modules for the small quantum group. [Question:] I'm wondering, you know, for the small quantum group business: the category of representations of the small quantum group has this description by Bezrukavnikov, Finkelberg, and Schechtman in terms of factorizable sheaves, and I'm wondering if you have some feeling that it should be relevant for what you're doing. [Answer:] I have exactly the same feeling. I don't understand that work well enough, but that story also, in principle, should give derived spaces of conformal blocks. [Question:] Yes, for these. [Answer:] Exactly. So I think that should be highly relevant for all of this. I guess I also think that Gaitsgory has been doing things along the same lines. [Question:] Maybe one small question: you mentioned Chern-Simons theory for supergroups. Did you mention it just because it's analogous, or does it play any role in what you're doing? [Answer:] There should be an analogous construction that involves quantum supergroups. As you know, in this world of 4d Yang-Mills with different boundary conditions and interfaces, one can engineer things that look like supergroup Chern-Simons as well, and so there should be generalizations of this that involve supergroups. Part of the supergroup story, an underived version of it, has also been discussed: Rozansky and Saleur wrote that Chern-Simons theory down, I guess in the simplest case, for GL(1|1), and started talking about what GL(1|1) WZW should look like, and found non-semisimple categories. I think if one approaches this from the supersymmetric side, one would end up with the derived category of everything in sight. So yeah, I think the supergroup story is very close. [Chair:] Okay, we can thank you again.
|
Topological twists of 3d N=4 gauge theories naturally give rise to non-semisimple 3d TQFT's. In mathematics, prototypical examples of the latter were constructed in the 90's (by Lyubashenko and others) from representation categories of small quantum groups at roots of unity; they were recently generalized in work of Costantino-Geer-Patureau-Mirand and collaborators. I will introduce a family of physical 3d quantum field theories that (conjecturally) reproduce these classic non-semisimple TQFT's. The physical theories combine Chern-Simons-like and 3d N=4-like sectors. They are also related to Feigin-Tipunin vertex algebras, much the same way that Chern-Simons theory is related to WZW vertex algebras. (Based on work with T. Creutzig, N. Garner, and N. Geer.)
|
10.5446/55218 (DOI)
|
Hi everyone, welcome to my talk. My name is Sandra Sinoni and I'm giving a talk titled Implementing Systems-Level Reform: Institutional Change Towards Transparency. I'd like to start by mentioning that over the past 10 years, the scientific landscape has completely changed. If we compare ourselves to where we were 10 years ago, it would be unrecognizable. We've had so many new ideas, so many new ways of looking at how we do science, and so many new suggestions on how this is going to work; we're designing a credible research system for all of us. The thing is, we have to remember that we cannot stop at ideation. We can talk around the table for as long as we'd like and come up with many different ideas, but if the end goal is to actually implement these systems and to have the research system produce credible research that will actually benefit society, then we have to remember that the end goal is implementation, not just ideation. It's difficult, because determining the vision for a credible research system and implementing it are altogether different challenges. In the same way, research doesn't exist in a vacuum: whenever we start talking about implementation, there are all these factors and issues that we need to consider. For example, there are existing research cultures. Whatever we implement, whatever initiative we have, isn't going to sit on top of our research culture; it's going to interact and integrate with it. So we have to remember things such as centralization of authority, because a lot of research cultures outside of North America, Australia and Europe have more centralization of resources and authority with the government rather than at an institutional or researcher level. In fact, there are bigger power gaps, more power scaling as you go up in the ranks, and that will affect who develops and determines which policies as well. On that note, we have to remember that not everyone has the same autonomy. If you're in a research culture that pays very little and you're relying on publications to survive or to progress your career, then you may not have as much capacity to change to research practices that may yield more credible research but might take you away from the KPIs. Number three is research integrity, where we have to remember how any well-meaning research system or initiative interacts with research integrity: if we don't have the underlying research values or integrity, it's not going to turn out well. For example, take the notion of preprints. In Indonesia, if you upload something to a preprint server, it gets indexed as part of your KPI. Now, there are several cases where researchers and lecturers would have their students upload each and every one of their assignments with the lecturer as a co-author, in order to bolster their research portfolio. So we need to also assess: can these well-meaning initiatives go wrong where there isn't the underlying research integrity? There is also the lability of research culture, where in a lot of places they've only recently implemented research policies and are updating them quite quickly. China is an example of this, where they're constantly updating their research policies, with quite drastic measures in effect, in fact. And so how do researchers react to this? If the culture is always labile and never settling, how is a new initiative coming in going to behave? Could it be easier to implement because they're already implementing new things as it is?
Or in fact, even if it's easy to take up, is it going to be more difficult to actually settle and be sustainable in the long run, because none of the research cultures are actually settling due to the lability of the policies? We have to consider all of this. For example, there were the transparency audits that happened some time ago, and a lot of people, especially from research cultures in North America, Australia and Europe, said this isn't going to work for us. Actually, they didn't even say this isn't going to work for us; they said this isn't going to work, this is not the way to do things. But consider a research culture where there's very little space for researchers to be able to move and have autonomy, and where in fact everything is guided by top-down policy and there's more centralization of authority. In fact, the transparency audit could be critical to developing a credible research system there. And so policymakers' priorities are another important thing to consider, because there could be severe pushback. A couple of years ago, there was this huge controversy where the Indonesian government, behind closed doors of course, was pushing back against open science. Why? Because turning out journals was part of their KPI at the time, and they saw open access as an enemy to this, thinking: well, if you can publish things and put them up without being peer reviewed, then why do we need all these journals? And so they pushed back against open access as well as open science, thinking that open science only consisted of open access. I'm not saying that they're right; I'm saying we need to be cognizant of what their priorities are and how they're likely to react to new incoming initiatives. There's also a large disconnect between umbrella-level networks and grassroots-level researchers. For example, UNESCO recently released their open science recommendations, and some of the feedback from different countries was: this is really difficult for us to implement. Very often there are lots and lots of networks now, everywhere, working to improve transparency and credibility, but there's this big disconnect with the typical researcher, who may not have an audience or access to these networks and yet is told to implement the things that they're recommending. And remember that the goal is that every researcher, every institution, every system can produce credible science, regardless of the journey there; they might take different pathways, but that's the end goal. So what can we do to implement strategic change? Number one is that we have to navigate competing priorities. For example, in 2019, some colleagues and I wanted to create a conference, an event that could reform Indonesia's science. But we knew that there was pushback from the government. So what did we do? We had an open science conference without mentioning the words open science even once in any of the marketing materials. Instead, we knew that their priority was to move science forward in the region, and so we framed it in terms of: okay, we're going to create a conference to upskill all the researchers. But we know the way to get there is through open science, and so we had all the sessions talk about transparency and research integrity. As a result, we developed good networks with parts of the government, and we were even invited to lead science policy briefs for them.
So by understanding what the different priorities are, we'll be able to implement these initiatives well. We also have to understand the priorities on each level, institutionally and for researchers: are they able to dedicate thought to creating good science, or are they just trying to survive? That allows us to work within the system a bit better. Number two is: be sensitive to culture. Did you know that a lot of cultures in Asia won't tell you if they disagree with you? That's why a lot of researchers from outside find it frustrating: they're trying to communicate, but across completely different cultures. So we have to know the people, because science is a social enterprise, and in the end it's not about the system, it's about the people in the system. By understanding the culture, what does the country's culture prioritize? Which aspects of open science, which aspects of transparency, which aspects of integrity? What language would they use? Then we'll be able to implement these initiatives a lot better. Number three: we have to co-design, not transfer. A lot of the time, initiatives are designed and implemented by one group for a completely different group, whether ideated in the Global North and then implemented in the Global South, or ideated and implemented by UN-level organizations, or at government or institutional level, while the grassroots researchers are living in a completely different world. It doesn't work that way. And we wonder why a lot of aspects of open science and research integrity have had slow uptake in areas outside of North America, Australia and Europe: because they weren't co-designed or ideated there. The way to do this is that we need to bring everyone to the table to design these policies and ideas together, because very often we don't realize that the ideas we're coming up with are perfect for our own research context, without understanding how different it is out there. There are many different ways we can do this. For example, one of the events that I'm leading is called Advancing Science in Southeast Asia, in October this year. In one of the sessions, we're bringing together umbrella-level organizations such as INXA and the International Science Council, and we're having talks with UNESCO, along with institutions in each country and grassroots researchers, to come together and design policy documents together, because if we can integrate the ideation and the implementation, then the whole process becomes a lot smoother. So there are many different things we need to consider, and I'd like to leave everyone with this thought: for every idea we have pertaining to science as a whole, we must consider in which context it will work, because the goal isn't ideation; the goal is implementation. Thanks so much for coming to the talk. I'm looking forward to hearing the discussion and the questions. Thanks everyone.
|
In recent years, the academic community has evaluated the research ecosystem and identified key issues which undermine the trustworthiness of its output. With it, myriad suggestions and solutions. Despite this, change is slow and well-meaning initiatives often have adverse reactions. This is because the process of determining a vision for an ideal research system and implementing it are altogether different challenges. Furthermore, research systems in Asia, Latin America, and Africa are substantially more heterogeneous than in North America, Europe, and Australia. For example, in many countries in the Global South, research culture is still labile due to the sudden introduction of policies and unique incentive structures which champion research quantity. Thus many ideas generated in one research system do not translate well and vice versa. The question becomes: given each region's unique state, how do we champion process and institutional transparency? In this talk I discuss the different factors that affect how we approach each country, including but not limited to existing policies, centralisation of research authority, and inherent cultural beliefs. Further, I outline strategies for reform using Indonesia as an example, from a grassroots movement to influencing national infrastructure and policy.
|
10.5446/55212 (DOI)
|
Hi everyone, my name is Lucy Barnes and I'm an editor and outreach coordinator at Open Book Publishers, which is an open access, scholar-led, not-for-profit book publisher based in Cambridge in the UK. As part of my work with OBP, I'm also part of COPIM, the Community-led Open Publication Infrastructures for Monographs project, and I'm here today to talk to you about COPIM and about our work on scaling small. Open access books are in an interesting place at the moment. More presses are starting to publish open access books and chapters, and we're seeing increased willingness on the part of funders to implement open access mandates that include long-form scholarship. We've seen an example of that recently in the UK, with UKRI, one of the major funding bodies in the country, releasing its open access policy, which includes books, and we're expecting to hear from cOAlition S later in the year about how their principles can be applied to the publication of open access books. Publishers are responding to this trend. In some cases we're seeing increasing use of book processing charges and transformative agreements: for example, in March earlier this year Springer Nature signed its first ever institutional open access book agreement, with the University of California Berkeley Library. Running for at least three years, this will enable academics at UC Berkeley to publish a Springer Nature open access book. These sorts of agreements favour the activities and the expansion of single presses, and they benefit a limited pool of authors, in this case the authors at UC Berkeley. Book processing charges are another such model, seemingly entrenched at many presses, and they can run very high, especially at the more prestigious outlets. Again taking some examples from the UK, Cambridge University Press will charge a single author or their funder £9,500 to publish an open access book with them, Manchester University Press currently charge I think £9,850, and Palgrave Macmillan charge £11,000. But we're also seeing the growth of something different: alternative models pioneered largely by scholar-led initiatives and smaller university presses such as Open Book Publishers. We introduced in 2015 a library membership programme. Under this model, libraries support us with a small amount of money each year, at the moment £300 a year from each library, and collectively that money enables us to publish open access books without charging the author a book processing charge. Punctum Books is another scholar-led open access press that has picked up this model and is using it as part of its own operations. At COPIM we've built a model called Opening the Future, which I'll talk about a bit more, which enables a press to use subscriptions to its closed access backlist to fund an open access frontlist. There are also examples in the US: we have, for example, Lever Press, which is a group of universities that have come together to fund an open access press which does not charge book processing charges, and there are examples such as Language Science Press, which has a library funding model of its own and which employs a kind of collaborative editing model, sharing the labour of the press with the communities that it serves. So these organisations have been willing to experiment and to try new things, thanks in part to their size and often to their not-for-profit status.
They can be much less conservative than larger presses, and they're generally extremely transparent about their business models and their operating costs, which enables them to share these models widely and in quite a lot of detail. And there's some evidence that these approaches are having an influence on larger presses. For example, MIT Press's Direct to Open was released earlier this year and it has some similarities with COPIM's Opening the Future approach, which was launched last year; in turn, COPIM's Opening the Future approach was informed by Open Book Publishers' and punctum's work with libraries. In recent years we've seen a variety of these smaller open access presses and open access university presses emerge. But there is a problem. While their size, and also the not-for-profit status that a lot of these presses share, can be positive, as I've explained (it can enable experimentation and a certain kind of responsiveness), it also imposes structural constraints on those presses. There might be a lack of skill sets, particularly if the press or the project is new. There might be a lack of experience in certain areas. There often is a lack of resources. And these presses individually have insufficient market leverage to create significant change within the industry. The capacity of the individual presses might be small, and that can lead to these presses being dismissed as niche or as irrelevant. Certainly at Open Book Publishers I sometimes hear us described as doing very worthy work, but doing it in a small way which doesn't necessarily make big changes. Instead there's a demand for scalability. Scalability is seen as necessary for any scholarly communication project or practice to really succeed, and you often hear demands for an individual press to scale, or at least to show that its model is scalable; again, this is something that I've experienced at Open Book Publishers. What people mean when they say this is: can you grow bigger and publish more books yourselves, or at least can your model be adopted by another organisation which could grow bigger and publish more books? But what we're interested in at COPIM is: what if we asked a different question? Is there a different way to achieve resilience and to make an impact in the scholarly communications landscape other than scaling up? How can we expand the multiplicity of smaller-scale, scholar-led efforts in a landscape that often tends to favour the growth of single, competitive and large entities? And how can we do this while at the same time preserving the advantages of the smaller presses: the high degree of responsiveness to their authors and their readers, who might operate in a particular scholarly niche; their own particular character and editorial approach, rooted in their particular size and discipline; their appetite for innovation and experimentation; their focus on scholarly mission over profit-making? How do we preserve the qualities of smallness while at the same time achieving a kind of scale? Our answer to this question is scaling small. Scaling small involves generating scale through collaborative relationships between a large number of community-driven projects, and this creates a platform for them all to flourish and, crucially, for more organisations to join them.
This allows individual presses and projects to retain their independence, their own editorial and working practices, and to make their own decisions about how they operate and how they experiment; but by sharing knowledge and best practices, by offering mutual support, by collaborating on larger projects and by building a shared infrastructure, we can develop a framework within which a diverse range of small-scale and not-for-profit initiatives can flourish. Each one can become more resilient and more secure, and they can also lower the barriers to entry for others. This has the advantage of preserving, rather than flattening, bibliodiversity in the process of achieving a different type of scale. There are already groups working in this way, such as the Radical Open Access Collective and the ScholarLed consortium. ScholarLed includes Open Book Publishers, punctum books, Open Humanities Press, Mattering Press, and meson press. We're a group of not-for-profit scholar-led presses that prioritise cooperation over competition, and we've initiated practical collaborations that include the sharing of resources and best practices, a shared conference presence, a shared website and joint funding bids. Individually we have our own different ceilings; we're working at different scales. Mattering and meson produce three to four books per year, Open Humanities Press produces around 30, Open Book Publishers currently produces 30 to 40 books a year, and punctum produces around 50 books a year. Collectively we've produced over 600 books, and our shared catalogue is currently the third largest collection in the OAPEN Library, which gives a nice indication of the kind of concept I'm trying to describe. Another ambitious expression of the scaling small philosophy is the Community-led Open Publication Infrastructures for Monographs project, the COPIM project, in which the ScholarLed presses are key partners. COPIM is an international partnership that involves horizontal collaborations, so collaborations between presses, but also vertical collaborations that involve presses, libraries, universities and infrastructure providers. What we're doing at COPIM is developing infrastructure to address the key technological, structural and organisational barriers that can inhibit the funding, production, dissemination, discovery, reuse and archiving of open access books; we're trying to look at the process of producing open access books as a whole. COPIM is a £3.6 million project funded by Research England and by the Arcadia Fund, and the infrastructure that we're producing is created, owned and governed by the communities that use it. It's not commercial and it's not going to be dominated by a single organisation; instead, it will be situated within, and led by, the academic communities that it serves. COPIM is building the structures and the systems that can sustain a diverse, scholar-led, not-for-profit open access publishing ecosystem of the kind that I've described, according to the principle of scaling small. The infrastructure we're building is open, modular and interoperable, which means that if you want to use a particular aspect of it, you aren't locked in to using all of it. And it includes a revenue infrastructure and management platform.
This is the Open Book Collective, which brings together open access publishers, service providers, university librarians and academic researchers around a hub for open access book publishing that will enable the flow of flexible revenue streams and of information. This will support the publication and dissemination of open access books financially, and it will make those books, and the infrastructures that support them, better integrated with and more legible to library systems. Knowledge exchange and the piloting of alternative business models has resulted in the creation of a business model called Opening the Future, which enables presses to flip from closed access to open access publishing by using library subscriptions to a closed access backlist to support the publication of an open access frontlist. Two presses, Central European University Press and Liverpool University Press, are already implementing this model, and it has already funded its first open access books. There will also be a toolkit that will make this model reusable and implementable by more presses. Next, best practices for the collective governance of open publication infrastructures, guided by community values and involving all of the key stakeholders in the academic publishing process; these best practices have been developed in the process of devising the governance models for the infrastructures that COPIM is building. Then there is an open dissemination system called Thoth that enhances the discovery of open access books using open metadata. The open dissemination system has already been fully implemented by Open Book Publishers, by punctum books and by mediastudies.press, and it's in the process of being made ready for adoption by more presses. There is also a pilot of representative experimental books, built on top of existing open source tools and platforms, together with technologies and cultural strategies to promote the discovery and reuse of open access books, showcasing and supporting the continually emerging diversity of long-form academic publishing. And finally, technical methods that can effectively archive complex digital research publications, thus supporting publishers to implement long-term preservation of the books that they create; there will be a pilot case archiving a subset of publications at a number of locations, including the British Library.
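Since Thoth's purpose is open, machine-readable book metadata, the short sketch below illustrates what consuming that kind of open metadata might look like. This is not taken from the talk: the endpoint URL, the GraphQL field names and the query shape are assumptions about Thoth's public API and would need to be checked against its current schema.

```python
# Minimal sketch: querying an open GraphQL metadata service such as Thoth's
# for open-access book records. Endpoint and field names are assumptions,
# not verified against the live schema.
import json
import urllib.request

ENDPOINT = "https://api.thoth.pub/graphql"  # assumed public endpoint

query = """
{
  works(limit: 5) {
    fullTitle
    doi
  }
}
"""

payload = json.dumps({"query": query}).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    data = json.load(response)

# Print each title with its DOI, assuming the response has the expected shape.
for work in data.get("data", {}).get("works", []):
    print(work.get("fullTitle"), "-", work.get("doi"))
```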
If you're interested in finding out more about COPIM, the best place is our open documentation site, copim.pubpub.org. You can also get in touch with us via Twitter at COPIM project or via email at info@copim.ac.uk. Thank you very much for listening.
|
Open access (OA) book publishing is undergoing a period of transition. While scholar-led presses have long been at the forefront of OA book publishing, developing innovative business models and publication workflows and advocating for a broader shift to OA, larger commercial and university presses are now beginning to take OA books seriously. Community-led approaches such as the ScholarLed consortium and the Radical Open Access Collective may be threatened by the emergent trend towards 'big deals' and 'transformative' agreements in the OA book world, through which institutions and authors are encouraged to support only the ‘big players’ with money or manuscripts, potentially leaving smaller and academic-led presses out in the cold (e.g. see https://group.springernature.com/gp/group/media/press-releases/new-open-access-book-partnership-with-uc-berkeley-library/18993926). The ‘scaling small’ approach (see Adema & Moore, 2021, https://doi.org/10.16997/wpcc.918) offers one alternative to this monopolistic vision, focusing on collaboration between smaller, academic-led and non-profit entities to build systems and infrastructures that provide mutual support at multiple scales. This ‘scaling small’ philosophy is being put powerfully to work by the Community-led Open Publication Infrastructures for Monographs (COPIM) project, a major three-year international project bringing together libraries, scholar-led OA publishers, researchers, and infrastructure providers to build open, non-profit, community-governed infrastructures to expand the publication of OA books. COPIM, which includes members of both ScholarLed and the Radical Open Access Collective, is developing platforms and partnerships to address key technological, structural, and organisational hurdles around the funding, production, dissemination, discovery, reuse, and archiving of OA books. The project thus aims to build the structures that can sustain a diverse, scholar-led, not-for-profit OA publishing ecosystem according to the principle of ‘scaling small’. We are approaching the halfway point of our project and this paper will share insights into our progress so far, together with our plans for the next phase of our work, outlining how COPIM is putting ‘scaling small’ into action. This includes: 1) a non-profit, community-governed platform to facilitate the exchange of information and funding between libraries, OA book publishers, researchers and the wider public; 2) Opening the Future, a business model enabling the transition of legacy publishers to a non-BPC (book processing charge) OA business model; 3) the study and development of appropriate and robust governance models for non-profit, community-owned infrastructures; 4) Thoth, an open-source OA book metadata creation and dissemination system and service; 5) a report, toolset and use cases exploring the field of experimental book publishing practices, including a review of open-source tools and platforms; 6) technical and legal solutions to effectively archive and preserve complex digital research publications. This paper will lay out these developments and the philosophy of the project as a whole, giving attendees at OAI 2021 valuable insight into a major new initiative supporting scholar-led OA for books. As Adema and Moore (2021) argue (building on the work of Anna Tsing): ‘scaling small’ can ‘be perceived “as a way to reconceptualize the world – and perhaps rebuild it”’.
|
10.5446/55211 (DOI)
|
Allow me to start by saying that I'm extremely happy to be opening the digital research data session of the Geneva workshop on innovation in scholarly communication, this entirely online edition of 2021. I would rather be somewhere with you all, talking about research data, open science and all the things that I'm very passionate about, but hopefully this will be the last time we'll have to do this via the computer. So I'll be talking to you today about data: how data have changed the way that we do science, the way that we publish, and hopefully the way that we evaluate science as well, in the last few years. Now, if there is a message that I would like you to take home, or wherever you are, to take with you at the end of this presentation, I would break this message down into three parts. First, we cannot talk about research the way we did years and years ago: we now conduct research in a much different way than 10 or 20 years ago, so the way we talk about it also needs to be different. Second, given that this is the workshop on innovation in scholarly communication, we can only innovate once we feel free to experiment, and I believe that this is absolutely crucial for the challenges ahead of the entire scientific process. And the last part of this message, which is also a little bit the pillar, the theme of this edition of the workshop, is the future of open science. The future of open science, I believe, is already happening. Could it be happening faster? I think so. But if I look back at the last five or six years, a lot of things have happened, and the future is already present. Speaking of open science, of course, I cannot start talking about research data without putting them underneath the big umbrella of open science. I could be talking for hours about open science; people who have listened to me talk before know that that would absolutely be the case. A lot of things can be said, but the thing that I want to highlight here is that the open science movement really fights for and encourages the sharing of research outputs beyond the contents of what we call an article or a published paper. I will take this as, let's say, the starting point for the rest of this presentation. What I would like to address with you during the next 25 or 30 minutes is that data need to start being treated like first-class citizens, and hopefully in a few minutes you will understand what I mean by that. I will also talk a little bit about why there are so many barriers to data sharing and what we can actually do to overcome this, but I have to tell you already that I don't have a solution for everything. And last but not least, because I'm not a traditional researcher (I'm an independent researcher affiliated with IGDORE, the Institute for Globally Distributed Open Research and Education, and I'm also a full-time professional data scientist in a for-profit organization in industry), I would also like to talk to you about the potential and the benefits of sharing data outside of the academic bubble, so really within society. But let's start with the first point. Science has certainly evolved over the last centuries, perhaps even the last thousands of years. A long time ago we talked about science from an empirical point of view, driven by experiments. A hundred years ago, we finally became capable of writing down most of the laws that now regulate almost all the theory that we know about science.
In the last few decades, thanks to a lot of computational resources, we've also been able to look at science from a simulation point of view, using computational science. And in the last few years we have definitely moved to a completely new paradigm of doing science, which some people would say is driven by data. I prefer to talk about science that is informed by data. I'm a data scientist: I collect, analyze and have to interpret a lot of data every day. I don't like to think that these data drive the business that I work for and with, but rather that they inform the decisions that we're going to make. Of course, the fact that we collect all these data, and that we are now capable of doing what is called science 2.0 or eScience, also means that we have to face new challenges. These new challenges are, first and foremost, that it's very difficult to actually produce data that are not only human- but also machine-readable, and then to use these data as a substrate for knowledge discovery. Because, let's be honest, we don't produce data just for the sake of it; we produce data because we want to understand how the world works. Of course, reproducibility also comes with it, and all the issues that we know about that; we'll get to that in a moment. And research assessment: given that we're now producing more data, analyzing more data, and also expanding the portfolio of digital skills that every researcher needs to have these days, research assessment should be changing and shifting in the same direction too. Let's think for a moment about the research method; we don't really need to spend too much time here. This will of course be slightly different depending on the discipline that you're researching in, but generally we start from a hypothesis, we design our study, then we run the study, which mostly means collecting some sort of data, which we then analyze, and finally we write a report to tell the world, and all our peers and colleagues at different universities and research institutions, whatever it is that we have found. This also means that research produces much more than just a PDF, and sometimes we tend to forget this. What I have sketched here is that I like to think about the final paper, where we write our findings, as the paper that does the advertising for our research. And I don't mean this necessarily in a bad way: we need to have a means to tell society and our peers that we did an experiment, we did a study, and these are the conclusions, and these are our doubts and the next questions that we have. So we need to tell the story, absolutely. But the science, the research behind the story, is an entire package that is not only made of this narrative, this text, but is also made of all the data, the code, the versions, every digital object that we are capable of producing. I say digital objects because sharing physical objects is something different, and so when I talk about data in this context, I really refer to every single unit of digital information that we produce in our research cycle, as I have just highlighted. This also means that we have a certain reproducibility spectrum that goes from a very, very low level to almost 100%, depending on how much of what we work on we share, and how well, of course.
And just to give you a little bit of background here (though I'm pretty sure everybody is by now familiar with the concept), reproducibility is really the minimum standard for research validity. At a minimum it means that if we take the same data sets and rerun the same analysis, we should be able to reproduce the study, the workflow, and this workflow should give us the same results. And really, if you think about it, if you can't replicate an experiment, how are you going to trust its results? So we have all these objects, they are all scattered, but somehow all these objects that belong together should be linked to each other, and should also be linked to other objects, so that they can be discovered on the web and so that they can produce this substrate on which we can enable the knowledge discovery and everything that we need to progress science and discoveries. But how do we do that? It's rather complicated. We came up with this beautiful concept of the FAIR principles, which really provide guidance for data stewardship. Now, I'm not going to go into details, especially given that the rest of the talks during the session on digital research data are going to look at the FAIR principles from different angles and across different research disciplines. The only thing that I want to mention here is that their goal is really to assist the discovery and the reuse of all these objects through the web. We talk about findable, accessible, interoperable, and reusable. That doesn't necessarily mean open data. The FAIR principles come in degrees; this is something very important for me, and I'm sure you will hear about it later during the day. And they are completely agnostic of technical implementation. So they're out there, somebody has come up with this beautiful idea, and they are really a nice set of principles to make sure that we can achieve the discovery and the reuse of all these research objects through the web. But let's go back for a moment to the research cycle, or the research method if you prefer, that I quickly highlighted before: between steps three and four, we run the study, we collect our data and we analyze the data. Of course, the way I'm depicting it is a little bit simplified. If you were to zoom in on just that part, we would actually see that the data have a life cycle of their own. We collect them, we process them, then we go through data analysis, then we need to make sure that we publish the data and enable access to the data. In the long term, data also need to be preserved so that their reuse can trigger new research ideas, which in turn become new research data planning and design, and the cycle goes on, and so forth. So you might already start to think that getting into this entire big world of research data also means that we have to deal with very different, specific aspects, and that a lot of skills are needed: what does it mean to preserve a dataset, what does it mean to access it, and even for the data analysis, how do I make sure that it's reproducible? I need to be able to code, I need to be able to do version control. So it's really not that simple, and that's something that I would like you to keep in mind, because I will come back to it in a moment. And this brings me to why I believe that data sharing is still not so common and the uptake is not what we would like it to be.
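As a small, purely illustrative aside on the reproducibility standard described above (this toy example is not from the talk; the data and the analysis are invented), the idea is simply that the same data plus the same pinned-down analysis should give the same results:

```python
# Minimal sketch of computational reproducibility: the same data plus the same
# analysis (with randomness pinned by a seed) should yield identical results.
import hashlib
import json
import random


def analyse(data, seed=42):
    """Toy 'analysis': a bootstrap-style mean with a fixed random seed."""
    rng = random.Random(seed)
    resamples = [sum(rng.choices(data, k=len(data))) / len(data) for _ in range(1000)]
    return {"bootstrap_mean": round(sum(resamples) / len(resamples), 6)}


data = [2.1, 3.4, 2.9, 3.8, 3.1]

run_1 = analyse(data)
run_2 = analyse(data)

# Hash the serialized results: identical hashes mean the workflow reproduced itself.
h1 = hashlib.sha256(json.dumps(run_1, sort_keys=True).encode()).hexdigest()
h2 = hashlib.sha256(json.dumps(run_2, sort_keys=True).encode()).hexdigest()
print("reproducible:", h1 == h2)
```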
So there are certainly some barriers, but as I said at the beginning of this presentation, I won't address all of them; I would rather stress and point out the possible solutions that are already out there, and try to put a positive spin on the subject. When I talk about data sharing, especially within universities and research institutions, the things that always come back to me are these. First, it requires additional skills, and that is correct. Then people say: we are already so very busy, we have so many things to do, we really don't have the time; it is time we would need to take away from other things. It's not rewarding, or perhaps not rewarding enough. And the last but not least reason, I believe, is that we tend to hold ourselves to a higher standard than we hold other people; I'll come back to that in a moment. Let's start with the skills. I said before that if we want to make sure a dataset is FAIR, not only freely available on the web but actually accessible so that somebody can reuse it, we need to be able to check all sorts of items on a checklist, starting from the data management plan, then annotating the metadata, and reporting all the tools that we used. So we can really talk about a FAIR journey, where we start from our data, we annotate our code, we also share whenever possible the tools that we used, and in the end we are capable of saying: look, these are my results, if you take this nice data package you can reuse it as you wish, with this specific open license, for example. In practice this might not be so simple or easy to do, and that is a bit what happens every time a new innovation diffuses through a field. With innovation diffusion, at the very beginning you need to overcome a set of technical obstacles or limitations, because not everybody has the skills or knows how to do specific things. That is why I believe investing in infrastructure, in tools and in communities of good practice is crucial to help everybody through their own FAIR journey. In this sense I cannot avoid bringing up this fantastic project, the FAIR Cookbook, which really aims at turning FAIR into practice. It is completely open source, developed with collaborative methods, and it was created by researchers and by professional data managers. Right now it is targeted at life scientists, but I believe it is easily extendable to other disciplines as well, and it really is a cookbook with recipes that you follow the same way you would follow recipes to make a specific dish, a specific meal. It puts things together to tell you how you can actually be successful in your FAIR journey, or, as they say, the FAIRification journey, because, as I said before, FAIR comes in degrees: there is no zero and no hundred percent; even twenty percent is good, even five percent is good. So this FAIR Cookbook is full of example recipes on the four principles, F, A, I and R, as you see on this slide, and basically the idea is that it makes it too easy not to do it.
So every time somebody tells you "I don't care about FAIR because it requires too many skills and I'm really not good at it", you can tell them: look out to the community, look online, there are a lot of resources, and I am a hundred percent sure there will be some resource that, even if it doesn't fit your use case a hundred percent, comes very close, so that half of the job is already done for you, for free. You will see recipes, for example, about unique persistent identifiers, which are crucial to make sure that people can actually find your dataset; about how to access it and what it means to transfer data with specific protocols; about how to make sure the data is machine-readable, and so interoperable between different systems; about what it means to take care of metadata annotation, controlled vocabularies or ontologies, not only theoretically but practically, with specific use cases and specific examples; and of course, last but not least, reusability: what it means to release a dataset with an open license, what uses are permitted, and so on. So it really is made just too easy not to do it; there is no excuse. The other aspect I believe we really need to start focusing on more is investment. I know it is not very elegant to talk about money, and some people don't want to, but I believe it is crucial, and we are now at a stage where we need to choose how we spend our money, and by "our" I mostly mean public money. I guess you have all read (and if not, please do read it) the report from the European Commission of a couple of years ago that investigated the estimated cost of not having FAIR research data. That cost was estimated at 10 billion euros each year in the academic sector alone, plus another 16 billion euros in lost innovation opportunities. So it is a lot of money we are wasting if we do not have FAIR research data, and there is a movement pushing for investing research funds in ensuring that data are reusable; I absolutely agree with it. It is a shame that we care so much about transformative agreements, which by now mostly make specific companies in the publishing sector richer and richer, profiting from public money, while we still don't care enough about research data reusability and FAIRness. It is irresponsible to support research but not data stewardship. And what does that mean? It means that most of the obstacles, in my personal opinion, are right now really of a cultural type. I believe that TU Delft is one of the few universities out there, at least on the European scene, that is really showing the way to go when it comes to research data management.
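To make the contrast with data locked in a PDF concrete, here is a purely illustrative sketch (not taken from the FAIR Cookbook itself) of what a minimal, machine-readable description of a dataset might look like; every field name, value and the license choice below are assumptions made up for the example, not a prescribed standard.

# Minimal, machine-readable dataset description (illustrative only).
# All field names and values are hypothetical.
dataset_description = {
    "identifier": "doi:10.1234/example-dataset",   # persistent identifier (made-up DOI)
    "title": "Example survey measurements",
    "creators": [{"name": "Jane Doe", "orcid": "0000-0000-0000-0000"}],
    "license": "CC-BY-4.0",                        # explicit reuse conditions
    "format": "text/csv",                          # open, machine-readable format
    "keywords": ["open science", "FAIR"],          # ideally drawn from a controlled vocabulary
    "access_protocol": "https",                    # how the data can be retrieved
}

import json
print(json.dumps(dataset_description, indent=2))   # serialized so machines, not only humans, can read it

A description like this is what lets a harvester or another researcher's script discover and reuse the dataset without a human opening a PDF first.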
Think for a moment about that cycle I showed you for the data: you collect the data, you analyze them, you need to make sure that you publish them, that they are reusable, that they stay there in the long term, so long-term preservation, and then you need to make sure that the data can be reused by other people, which in turn will generate new ideas; you need to tell people how you shared the data and how you ran specific analyses. You need to do a lot of things, and I really believe that expecting every researcher these days to know everything there is to know about digital skills, data preservation and data management is absolutely crazy. I'll say it the way I think it: it's crazy. What I think needs to happen for a FAIR data world to become a reality is not only to make it easy, that is, to establish the tools, the infrastructure, the good practices and the communities built on those good practices, as I showed with the FAIR Cookbook, but first and foremost to invest in people. Every research institute and every university these days should have data stewards and data managers on campus who, across the different faculties, help researchers address their data management needs. They also need software developers, because to some extent, in some faculties more than others, there is a need for coding tasks and for moving into software development, and skilled professionals need to be able to assist the researchers with this. And of course open champions: if you have people who are already engaging in open research practices around digital research data, these champions are crucial. Their voice is very important because they can teach, support and inspire other researchers to embrace the same practices. And of course the head of research data services; it all goes back to people who believe in people, and who believe that specific budget and funding needs to be invested in making sure these human resources are available and are there to help the community grow. I want to focus for a moment on the open champions. How do you become a champion of open data or FAIR data? First because you think it's good, maybe because the technical barrier is not so high for you to overcome, maybe because you feel a responsibility for your data to be as open as possible, and definitely FAIR. Of course, it would also be nice if you were assessed on these open research practices, and credited and rewarded for them. So what still needs to happen is a shift where we start to value reproducibility, transparency and open research practices as much as we have always valued the number of papers we publish and, unfortunately, the impact factor of the journals where we publish them, which is becoming less important in the common conversations, I would say, but is still present. It is really time for this culture to change, because it is only with systemic practices of research assessment and rewarding metrics that we will incentivize researchers to shift towards more data reproducibility and sharing. The other thing, of course, is data publishing.
This is something where a lot of work has been invested in the last few years. What's the idea here? The idea is that credit needs to be given where credit is due. We are too used to thinking of the currency in science as the PDF, but it is what I said at the very beginning: when we do research we produce so much more than the final story, so why should we receive credit just for the PDF, only for the final part? There is an entire process, an entire list of struggles we had to go through to produce the paper, and those should be recognized as well. So ideally what would happen, and this is already happening in some journals that allow the publication of data, is the following: you identify yourself with an ORCID, a digital identifier for you as an open researcher; you deposit the dataset you produced with its own persistent identifier; and from that moment on, forever, the two objects, you as a digital identity on the web and your dataset, are linked to each other. Then the dataset is cited in a paper through its digital object identifier, and this citation is tracked in article publications, and so on. So you actually know that this dataset is being used by other people, who are perhaps building extra analyses or repurposing the data. And of course this opens up the possibility of data reuse: this dataset can generate an additional dataset, perhaps with some extra dimensions complementary to the first data collection, which can be analyzed in another paper, and another, and another, and by then it has already acquired four or five citations, which means that credit is given where it is due. This is a mechanism of recognition for you as a researcher, which in turn will make sure that you stick to this type of approach, to this type of data publishing and citation. Sometimes I have heard this (and wherever I say data here, you should also read code): "I don't want to share my data because I'm afraid people will find mistakes"; "I don't want to open up the source code I used for my analysis because I'm afraid people will think I'm stupid and that I made mistakes." Human mistakes, sincere, honest mistakes, are okay. I feel a little silly having to say this, because sometimes I think: shouldn't this be natural, shouldn't this be obvious to everybody? But unfortunately I don't think it is. It's okay to be human and it's okay to make honest mistakes. That's why the peer community is there, that's why other researchers are there, to tell you: hey, I think this is a mistake, perhaps we should look at it and make it better. Actually, a reason you should open up your data is precisely so that other people can correct you, because making mistakes is absolutely fine and possible. Which, I also believe, goes back to the fact that we should make it okay to be slower, to go a bit slower. I'm a big fan of the slow science movement that has emerged in the last few years, especially now in the pandemic era, where we know that what we are asked to do within traditional academic paths and careers is way too much.
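As an illustration of the linking mechanism just described (researcher ORCID, dataset DOI, citing article DOI), the relations could be recorded in a small structure like the following sketch; all identifiers are fictitious and the relation names are only loosely inspired by common metadata schemas, not an exact specification.

# Illustrative sketch of dataset-to-person and dataset-to-article links.
# Every identifier below is made up.
dataset_record = {
    "dataset_doi": "10.1234/repo.999999",
    "creator_orcid": "0000-0000-0000-0000",
    "related_identifiers": [
        {"relation": "IsCitedBy", "identifier": "10.5678/journal.article.42"},  # an article citing the dataset
        {"relation": "IsSourceOf", "identifier": "10.1234/repo.1000001"},       # a derived dataset
    ],
}

def count_citations(record):
    """Count how many linked objects cite this dataset."""
    return sum(1 for r in record["related_identifiers"] if r["relation"] == "IsCitedBy")

print(count_citations(dataset_record))  # -> 1

Once such links exist, citation counts for the dataset itself, not only for the paper, can be computed and credited to the researcher.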
Researchers are expected to excel and to take care of everything on their plate, to produce as much as they can, while I actually believe that science needs to slow down a little. So maybe, if we started recognizing and giving credit to researchers for all the other research outputs they produce, this space would change and the future of academia would look a bit brighter. And last but not least, I think I'm good on my time, I want to spend the last five minutes on why I advocate for open data, why I think it is so important, and what its value is even outside the academic bubble. I want to start with a message to all of you. Sometimes, when I read about research discoveries, or articles that talk about society and science, or when I see citizens being interviewed on television about science (and we have seen a lot of that, especially in the last year, with COVID-19, the vaccination, the fear of the vaccination, and I could go on for hours, unfortunately), I believe that science seems to have lost its connection with society and with the needs of society. But this connection is crucial to address the challenges we face and to build better realities, and it is made of dialogue and collaboration. When I look at science and society, I believe one of the first things we really need to invest more in is the availability of data and digital products. This synergy, where science goes back to working with and for society, can happen through collaboration and transparency. Open access, in the sense of open access publishing of research papers and articles, is absolutely crucial, because citizens have every right to access the scientific literature that is paid for with their taxes; and on top of that comes the availability of every digital product that is produced on both sides, on one side within, for example, governmental institutions, and on the other side within research institutions. The first story I want to tell you is one that is very close to my heart. It is about a campaign launched by onData, an Italian association that promotes the opening and publishing of government public data, of which I am a member, together with Transparency International Italia and ActionAid Italia. This campaign, which you can find at datibenecomune.it (it translates into English as "data as a common good"), asks the Italian government to open up all the data around the pandemic, the epidemiological crisis, and to release this data according to the FAIR principles. Why is this so crucial? Not because we want the data for its own sake, as if who knows what we would do with it.
This is so crucial because public trust in this type of data is absolutely key to ensuring, first, a correct perception of risk, which is then followed by proper behaviour from citizens, and these two things together are fundamental to getting out of this situation as fast as we can. To have had this data a year ago would have made a huge difference, in my view, and the sharing of and access to this data is absolutely key to building that public trust. And the last point of this campaign, and of all these conversations around opening up the public data that is produced, is this: when people ask for data and are told "go to this website", and what they find is a list of PDF files and even some interactive dashboards, those are not data, and they most certainly are not FAIR data. So we also need to start educating, and educating, and educating institutional actors about what data means, and especially about what human- and machine-readable data means. If you are curious, this is something that, as I said, is very close to my heart, and I invite you to have a look. The second story, which again comes back to the fact that data are not PDFs and PDFs are not data, is a story of civic hacking, or citizen science if you like, from the region where I was born and grew up, Sicily, in the south of Italy. A group of activists has opened up Sicily's public data: they implemented a system that automatically scrapes the PDFs released on the official site of the Sicilian regional government, so that the information contained in those PDFs is no longer trapped there but is made available as datasets in an open, machine-readable format, which can then be used, for example, to make a map like this one. In this specific case we are talking about the PDFs in which the regional institutional office would list the cities placed under red-zone COVID-19 lockdown measures. There was a delay in the publication of these documents, but the most important thing is that these documents were not even accessible to people with specific disabilities. This is really not okay, and making this data actually available opens up a whole list of possibilities for understanding what is going on and, fundamentally, for living better. The last thing I want to mention before I conclude is the FAIR toolkit for the life science industry. I would like to point this out because, as I said, I am an independent researcher and a civic hacker, but I am also a person who works in the data industry, and there the most relevant benefit of having FAIR data is really the reusability beyond the original and primary purpose. When people tell me "I don't care about FAIR because I cannot make my data open", I really try to convince them of the opposite: FAIR is not about opening data. It's okay if you cannot open your data; not all data can be opened up. The goal of FAIR is to be capable of reusing your own data beyond what you had conceived in the first place, so it is really a benefit for every organization out there. I recommend you have a look at all the use cases and stories reported in the FAIR toolkit of the Pistoia Alliance, because they are really beautiful. And one more thing: when you have linked metadata on top of data, even if the data itself has been deleted because of its life cycle, the linked metadata is still there, and it is still a permanent
scientific record that will stay forever, and that has really great value. With this I conclude what I wanted to talk about; I think I did well on time. Will open data change the world? Will FAIR data change the world, will it make a better reality? No, I think that is still up to us, up to us as a bunch of, hopefully good, human beings. Can we do that through data, data that is open, FAIR, accessible and reusable? I think the answer is yes, I'm pretty sure we can. And with this I thank you very much for your attention, and I look forward to the Q&A session. Thank you.
|
The latest science paradigm has marked the beginning of e-Science, or Science 2.0: we are immersed in an enormous amount of data and are equipped with the computational resources and infrastructure needed to make sense of these data. However, the processes of scholarly communication, and especially of research evaluation, still need to shift their focus from the traditional research output (i.e., the paper) to data. In this talk, I will make the case that 21st-century academic production can no longer be PDF-centric, but needs to treat data as first-class citizens of science, recognizing that the publishing system, as well as the assessment criteria, need to move towards dataset publication, citation and evaluation.
|
10.5446/55214 (DOI)
|
Hello everybody. Today, Dr. Zuo Qidin from China Pharmaceutical University will talk about China's ambitious plan to establish world-class STM journals. At the end of 2019, the China Association for Science and Technology (later on I will use CAST for short), together with the Ministry of Finance, the Ministry of Education, the Ministry of Science and Technology, the National Press and Publication Administration, the Chinese Academy of Sciences and the Chinese Academy of Engineering, launched a new and large plan, the China STM Journal Excellence Action Plan. The major purposes of this plan are to further enhance the international impact of China-based journals, to build groups of journals with diverse capabilities and levels, and to establish digital platforms for journal operation. This big plan includes seven sub-projects. Number one: 20 leading journals have been selected in prioritized fields of research. These journals are charged with the goal of ranking among the world's top STM journals within five years, and will be given the support to do so. Number two: 30 scientific journals with a good foundation and great development potential were selected as key journals, forming a competitive dynamic with the leading journals. Number three: 200 journals were selected as emerging journals for their focus on basic research, engineering and technology, and their potential to expand the communication and popularization of science. Number four is for new titles, new launches, selected using criteria based on priority research fields. Taking a forward-looking approach and seeking outstanding leadership, China hopes to establish new, high-potential English-language scientific journals in the fields of China's traditional research strengths, emerging interdisciplinary research areas, strategic research frontiers and some key common technologies. From 2019 to 2023, in each year, China will launch 30 new titles as high-potential journals. Sub-project number five is the journal-cluster pilot project, which aims to build groups of journals. Five publishers, such as Higher Education Press, Science Press and the Chinese Medical Association Publishing House, have been chosen to implement this sub-project. Number six is to build international digital publishing service platforms. Chinese digital companies, CNKI and Founder Group, as well as Tsinghua University Press, have been chosen for this sub-project. Number seven is to train high-level talent for running journals. For example, in 2020 the Society of China University Journals cooperated with some international publishers to hold a series of training programs. Training high-level talent for operating and running journals is a big challenge. Encouraging China-owned, China-based journals to be hosted on Chinese publishing platforms is also a big challenge. How many Chinese-owned journals will be moved to those journal publishing platforms in the coming years? I hope as many as possible. After almost two years of implementation, some goals have been achieved. This year, during the meeting to push forward the Excellence Action Plan, CAST announced that by now 29 journals in this heavily invested plan have risen into the top 10% of their subject categories, and 12 are in the top 5%. More significantly, eight journals are already in the top 5 of their categories. These are some journals ranked in the top 5% and top 10%: leading journals like Light: Science & Applications and Molecular Plant are in the top 5%, and some key journals, and even emerging journals, are already in the top 10% or top 5%. These are the eight journals in the top 5.
Very remarkably, you can see that Horticulture Research has been ranked number 1 in its subject category. There are some discussions, or even debates, about how we can define "world class". Can ranking in the top 5, or even the top 3, of journal indexing systems be called world class? I want to hear your opinions, your ideas or your suggestions. 2021 is the third year of this Excellence Action Plan, and, as is the tradition in China, the achievements of the past three years will be evaluated. China will research new policies to support the development of journal groups, and will encourage China-owned, China-based journals to use Chinese digital publishing platforms; this year, the hope is that at least 200 journals will be moved to, or start using, these platforms. China will also explore the path of open access, which is a hot topic in China as well. Finally, this year China will hold four forums on world STM journals. I hope I can meet you, in person or online, during this conference. Thank you for listening to our talk. We welcome your questions, either after this session or later by e-mail. Thank you very much.
|
Two years ago, China triggered an “Excellence Action Plan” to help China-based STM journals further enhance their international impact. In 2019, 20 journals were selected as "leading journals" and will be charged with the goal of ranking among the world’s top STM journals within 5 years. Besides those, 30 "key journals" and 200 "emerging journals" were selected to improve international editorial practices, service capabilities, international communication, etc. Furthermore, for each year between 2019 and 2023, China will choose and launch 30 new titles as “high potential journals” based on China’s priority research fields. This presentation will introduce more details and some achievements of this ambitious plan.
|
10.5446/55213 (DOI)
|
Hello friends, my name is Fama Dian Sen and I am the director of the central library of the Université Alioune Diop de Bambey. It is a pleasure to share with you a study I carried out for my Master's in information and documentation sciences on the role of Senegalese university presses in access to open access resources in Senegal. In this work we will talk about the development of Senegalese university presses, and the challenges and prospects for open access advocacy in Senegal. The plan is as follows: a first part, the introduction, where we will talk about Senegal and its university landscape; a second part on the dynamics of open access, on the African open access platform that we have put in place, and on the first data repository currently available in Senegal; and a third part on university presses and open access, the challenges we have to meet, the perspectives, and finally the conclusion. For the introduction, I will speak about Senegal, our country, located on the Atlantic coast of West Africa. Its capital is Dakar and it covers an area of 196,712 km². Its population is about 16.8 million inhabitants (2019 estimate), made up of Wolof, Lebou, Peul, Toucouleur, Serer, Diola and, finally, Bassari people, who live together in harmony. We are a francophone country, with French as the official language. On the map, Senegal faces the Atlantic Ocean; we are the gateway to Africa. As for the Senegalese university landscape, we stand out as a country with a long tradition of public higher education. Our first university was created in Dakar in 1957, the Université Cheikh Anta Diop, which is now 64 years old. The first experiment in decentralizing higher education took place in 1975 with the creation of the university of Saint-Louis, the Université Gaston Berger. Then, in 2007, other universities were created, notably the Université Alioune Diop de Bambey, where I work, the Université de Thiès, and the Université Assane Seck de Ziguinchor. A second wave of decentralization followed, with the Université Amadou Mahtar Mbow in 2012 and two further universities in 2013, including one devoted to agriculture. There are also private universities: the Université Amadou Hampâté Bâ, the Université Dakar-Bourguiba, the Université Catholique de l'Afrique de l'Ouest, and others such as ISM, IAM, etc. So today we are a large community of nearly 138,000 people across our higher education institutions, with a student population of 128,119 in 2019, around 2,330 permanent teaching staff, about 3,000 adjunct lecturers, and roughly 2,500 administrative, technical and service staff. As for the dynamics of open access in Senegal, as a brief reminder, these dynamics began in the 1990s with the Internet, which was warmly welcomed in Africa and in Senegal in particular. Moreover, open access appears to all professionals as an open door towards shared science.
It allows the widest possible dissemination of scientific information without economic barriers, so as to facilitate access to knowledge. Thus in Senegal there are many initiatives aimed at making available in open access the world's scientific literature, together with the data and the software that made it possible to produce this knowledge, and this has allowed Senegal to achieve many results. In 2010, the consortium of Senegalese higher education libraries, COBESS, organized a first national workshop on open access, the first such workshop organized in Senegal. That meeting made it possible to carry out national advocacy on access to scientific publications and allowed all the university libraries to update their open access offerings. This is how researchers gained access through Research4Life, with the platforms AGORA, OARE, ARDI and GOALI; we have DOAJ, we have Persée, we have OpenEdition journals, and we also have the digital library set up by the Agence Universitaire de la Francophonie, which makes research outputs and online courses available. All the Senegalese libraries have since updated their portals; ours is available at this link, and we have gathered all the links I have just mentioned. We are aware that this is still modest, but we are working on it. Our work on open access dissemination continued with the institutional repository set up by the Université Cheikh Anta Diop in Dakar, which allows researchers to put journal articles and theses online in open access; all Senegalese universities can take part in this important project. We also have DICAMES, set up by the Conseil Africain et Malgache pour l'Enseignement Supérieur (CAMES), which likewise publishes works in open access; the site has some technical problems and is somewhat inaccessible at the moment, but work is ongoing and it will be an important site. There is also the data repository set up by the Institut Fondamental d'Afrique Noire (IFAN), which puts online photographs, slide collections, ethnographic films, manuscripts and documentary files. Yet despite all this, we feel that scientific research in Senegal is held back for lack of resources. We do not have enough means to set up all the projects we would like to. We still have to pay to access information and services, whether produced by us or by other countries. African researchers have to pay for their own research, because its results are not accessible in open access. But it now seems clear to us that university presses can play an important role in open access to Senegalese scientific publications: first because they sit inside the universities, with the mission of editing and disseminating the scientific production of researchers, and because they are services attached to the rectorate, with competence over all matters in which the university is legally recognized to act as a publisher. Yet we note that in Senegal, of all the existing public universities, only four have set up a university press.
The Université Cheikh Anta Diop in Dakar got its press in 1991, the Université Gaston Berger in 2007, Bambey in 2015 and the Université du Sahel in 2000. So we realize that, out of all these universities, only four have a university press, which is really very few. The rectors we interviewed gave two main reasons why they had not set up a press. The first is the heaviness of the structure in terms of staff, equipment, budget and premises: it is extremely heavy for a university to put in place. The second reason relates to the fundamental question of the return on the resources that would be allocated to the press: will the press be profitable or not? In Bambey, however, with the governance and financing project for higher education (PGF-Sup), a total of 50.6 billion CFA francs was received for all the public universities; the State of Senegal allocated 21.5 billion of it, and we used part of that sum to create the university press, since the rectors had found the burden too heavy. Drawing on the experience already gained in the public universities, the three universities of Gaston Berger, Dakar and Sahel, we opted for a press with an online website where researchers can directly access the publications and publish in open access. We held training sessions for teacher-researchers on the dissemination of research: we taught them how to use the publishing software and how to publish online, and we received proposals for research monographs, with four authors put forward for publication. These works included titles on coordination chemistry, analysis, governance and administration, and an introduction to environmental issues, among others. In all, 14 publications were proposed by these four teacher-researchers and accepted by the press. We would then have had to pay the authors' royalties and start publishing in open access; the only possibility is to buy the rights, at around 300,000 CFA francs (roughly 460 euros) each. All the researchers are willing, but they absolutely want to receive their royalties, which is not possible at present because the project has been interrupted. It is difficult today to obtain the funding to pay the royalties owed to the researchers. University presses will need to obtain sufficient financial means to carry out this advocacy with researchers. We must also put in place mechanisms for validating open access publications, so that citations of open access publications are as high as possible and researchers realize that publishing in open access has advantages: their work will be read by teachers, by students, by the Senegalese research community, and will be cited in other works, and that is more important than being paid. We also need to move copyright forward and keep pushing for open access in Senegal, which is not at all easy, but we must keep moving forward; that is how it is. Then, in 2019, we had in Senegal the declaration of the participants in the conference "Open Science in the South: challenges and prospects for a new dynamic", organized from 23 to 25 October 2019.
So Senegal will be the first country in Africa to put in place an open access policy, and this would be one of the important pillars of the national open access strategy. Senegalese university presses are pursuing a global role in this field. It is a matter of continuing the efforts, securing the funding necessary for the development of research, and strengthening ICTs to promote digital publishing and online publication formats; that is important. We need to build publishing platforms together with the other presses and to promote open access. Two presses have already answered this call to create a national platform of Senegalese university presses; we are working on it, and we will necessarily need a common portal. This is a beginning, with the hope of becoming a fundamental pillar of open access in the world. So we are moving forward. There you are, dear friends: it is a pleasure for me to share this Senegalese experience of open access, and it was a great pleasure to speak with you. I remain at your disposal for your questions and answers later on the platform. Thank you very much.
|
Higher education institutions play a primary role in socio-economic development due to their three-fold mission: (1) providing advanced training and education to an increasing proportion of the population, (2) promoting scientific research, (3) providing services to the wider community. In this context, in the year 2000 Senegal decided to grant significant means to education in general, and higher education in particular, through public financing close to 11% of the national budget, compared to an average across Africa of around 3.8%. This was reinforced by the results-based Project for Governance and Financing of Higher Education (PGF-Sup), signed between the Senegalese government and the World Bank in May 2011. The PGF-Sup was supported financially with 50.6 billion FCFA (101.3 million USD). The Government of Senegal allocated the sum of CFAF 21.5 billion (USD 43 million) to the "financing based on performance contracts" sub-component. However, at the end of the four-year performance contract, Senegalese universities had not achieved the aim of "publication of textbooks and booklets" by teaching and research staff (PER). Teaching therefore seems to have a higher priority on university campuses than research and scientific publication, despite the emphasis placed by the LMD (Licence-Master-Doctorat) reform on the importance of scientific documentation in course syllabi and the work of students (TPE). Meanwhile, on the editorial side: the Dakar PUD university press was set up by rectoral decree No. 626 of September 2, 1991. The University Press of Saint-Louis (PUS) was created in 2007, and that of SAHEL University in 2000. The creation and mission of the PUB (Bambey) were established in March 2015. In other words, 4 presses in 12 universities: 33 percent. The PUB provides institutional support for the promotion of research and for the publication and dissemination of scientific and didactic work; internal regulations specify its functions. A call for applications received nearly 25 requests which are ready for publication; in Bambey, for example, we have retained 14 publications. Despite these efforts and a visit to request support from the president of the scientific council and the administrative secretary, things have remained at this point, and the university does not have the means to publish the teachers' books. This is very problematic, and the national agencies to which requests have been made have not been able to respond. What can we do about this lack of resources? Should we let teachers publish elsewhere, in Europe for example, and what kind of strategies can we put in place for a publishing operation that favors research in Senegal and promotes open access?
|
10.5446/55217 (DOI)
|
Hello everyone, my name is Deniz Özdemir, and in this lightning talk I would like to speak about the concepts I am currently working on during my research at the Czech Technical University and my PhD studies at Charles University in Prague. Originally I am an OSCP- and GPEN-certified ethical hacker, and today I am keen to address the interconnection between secure software development and open science principles, and its impactful applications to our digitalized society. To start with, one can only ponder how research in these ever-growing technological fields, ranging from social robotics to machine learning and artificial intelligence, can be traced back to research fundamentals, since open science principles also cover a vast spectrum of research components. It is vital to draw the boundaries of the definition: how can the data obtained from these new and upcoming smart technologies be shared, with whom, at what point and with what intensity, and in which derivative forms; not to mention what can legitimately be withheld, as well as the actual results and the expected outcomes planned during implementation. There exist several definitions of openness with respect to various aspects of science. The Open Definition puts it thus: open data and content can be freely used, modified and shared by anyone for any purpose. Open science encompasses a variety of practices, usually including areas like open access to publications, open research data, open source software tools, open workflows, citizen science, open educational resources and alternative methods for research evaluation, including open peer review. As a matter of fact, for some researchers "open" refers directly to the absence of legal restrictions on reuse, while for others the meaning adheres to the social and technical components of accessibility and reusability. Clearly, a comprehensive and detailed framework for adhering to open science incentives is crucial to consider at this point, so that we can actually leverage open science to accelerate research on new technology developments. With the launch of these advancements, which have yielded new technologies, researchers clearly illustrate the value of trustworthy scientific infrastructure. Building this type of reliability in science, namely in robotics, which is still in its infancy, will require a wide array of approaches, from single systems to widely connected domains. In the meantime, in our fast-paced digital world it is becoming clear that global service automation is changing its core structure from high touch to high tech. More and more, robotic assistance will be adopted in a plethora of industries, and the roles of these technologies will vary from small pet-like companions to fully equipped, emotionally intelligent robotic systems, aiming to revolutionize user and consumer experience through robust, trustworthy, facilitating and even influential design. Researchers state that social intelligence is not merely intelligence plus interaction, but should allow for individual relationships to develop between agents. From here on, I would like to emphasize that the integration of a secure software development lifecycle into this digital infrastructure is very much needed. As a matter of fact, the collection of data to be fed into the algorithms of these technologies leads the design towards a further integration of intelligent structures within the multi-faceted nature of complex principles.
We are moving towards a brand-new paradigm of open-science-based human-robot research, which must be privacy-oriented and respect ethical guidelines and standards. For example, in order to fully benefit from these research initiatives, software developers, policymakers, and the information-risk and project-management disciplines should all work in cooperation to share data from the trials and, most importantly, to make replication of the data possible. To this end, sharing open-source research software is paramount, so that others can build on the code and systematically reproduce the computational results. Hence, at this stage, open science should be integrated so that preliminary results can be shared and disseminated to the data-collection specialists of the next stage. This transparency will eliminate obstacles that might occur in the later phases of software development. Multidisciplinary collaboration between robotics and the social sciences will bring incentives to adopt the norms of open science. This transformative agenda aims not only to enhance the integration of science and society, but also to eliminate discrepancies in the secure software development lifecycle. In the light of my own scientific research and my previous corporate experience with the Open Web Application Security Project (OWASP), an open, scientific emphasis on the software development lifecycle not only addresses the importance of transparency in secure-coding best practices with respect to infrastructure frameworks; this dynamic initiative has also emerged to identify and analyze, with cutting-edge attention, normative exposures for research and innovation systems. The software development lifecycle is a framework that defines the steps involved in the development of software at each phase; it covers the detailed plan for building, deploying and maintaining the software. To sum up, greater emphasis should be put on next-generation technologies such as robotics, which will surely have a great impact on our everyday life, together with open source, which will effectively harmonize scientific principles in a reliable, transparent and systematic manner. This is the end of my talk. I would like to thank you for your time and attention.
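As a purely illustrative sketch of the idea of attaching security activities to each phase of the software development lifecycle: the phase names and activities below are generic assumptions, not a quotation of any particular standard or of OWASP guidance.

# Generic mapping of SDLC phases to example security activities (illustrative only).
secure_sdlc = {
    "requirements":   ["define privacy and data-sharing requirements", "threat modelling"],
    "design":         ["architecture risk analysis", "choose open, auditable components"],
    "implementation": ["secure coding guidelines", "peer code review"],
    "testing":        ["static analysis", "penetration testing"],
    "deployment":     ["hardened configuration", "dependency and license checks"],
    "maintenance":    ["patching", "monitoring and incident response"],
}

for phase, activities in secure_sdlc.items():
    print(f"{phase}: {', '.join(activities)}")

The point of such a mapping is simply that security (and, in an open science setting, transparency about data handling) is planned per phase rather than bolted on at the end.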
|
Given the adoption of technological advancements in our 21st-century digital society, one may ponder how open science will be meaningful and applicable to humans through information technology components, ranging widely from wearable smart technologies to robotics, which constantly share information with their end users. For the future of open science, contributions derived from artificial intelligence may provide a promising, if unconventional, road to distributing scientific knowledge, connecting research data across continents in a transparent knowledge process.
|
10.5446/54169 (DOI)
|
[The opening of the recording is unintelligible.] So I will concentrate on a certain type of mean field games that I will try to describe at the beginning of the talk. As in all mean field games, the question is to find an equilibrium for the continuous motion of a certain density of a crowd, which evolves in time; but it will be a specific type of mean field game, where the equilibria can also be obtained by minimizing a global energy. [A long stretch of the recording is unintelligible.] And so I'm lazy and just concentrating on the quadratic case; that is, my costs are kinetic energies. We could consider different functions of the velocity. Some of what I will present is really specific to this framework, because this quadratic cost will be related to the quadratic Wasserstein distance that I will use in a moment, but some of it can be adapted. So, in this case, the Hamilton-Jacobi equation is just a relation between the time derivative of phi and the gradient of phi, equal to the running cost that you have in your optimization problem, with a final datum. [Another stretch of the recording is unintelligible; the audible fragments concern the optimal trajectories of the agents.]
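To fix notation for what follows, here is one standard way to write the system being described; this is a sketch reconstructed from the spoken description, so sign conventions may differ from the speaker's slides. With a running congestion cost g(x, rho) and a final cost Psi, the quadratic mean field game couples a backward Hamilton-Jacobi equation (with final datum) to a forward continuity equation (with initial datum), and the equilibrium velocity is driven by the value function:

\[
\begin{cases}
-\,\partial_t \varphi + \tfrac{1}{2}|\nabla \varphi|^2 = g(x,\rho_t), & \varphi(T,\cdot)=\Psi,\\
\partial_t \rho_t - \nabla\cdot\big(\rho_t\,\nabla\varphi\big) = 0, & \rho_0 \ \text{given},
\end{cases}
\qquad v_t = -\nabla\varphi(t,\cdot),
\]

where each agent minimizes \(\int_0^T \big(\tfrac12|\gamma'(t)|^2 + g(\gamma(t),\rho_t(\gamma(t)))\big)\,dt + \Psi(\gamma(T))\) over its own trajectory \(\gamma\), taking the evolution of \(\rho_t\) as given.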
[A stretch of the recording is unintelligible; the speaker introduces the variational problem, in which one minimizes an energy over pairs of a density rho and a velocity field v subject to the continuity equation, d_t rho plus the divergence of rho v equal to zero.] And what I want to say is that if you take a solution of this minimization problem, then you have an equilibrium. First observation: this cost is not the total cost for all the agents, in the sense that this is the total final cost, this is the total kinetic energy, but this is not the total congestion cost, which would rather be, instead of the integral of capital G, the integral of rho times g of x and rho; that is, I should take the congestion cost for each agent and integrate it against rho. And in general it is not the same to take the antiderivative or to multiply by the density. This is something which is well known in game theory: the fact that you could have a potential game, which means a game where the equilibrium is obtained as a solution of a minimization problem, but the energy that you minimize is not the total energy, which means that at the equilibrium maybe you are not minimizing the total cost. This is what we call the price of anarchy. The equilibrium is anarchy in the sense that you let everybody choose what they want, and they will arrange to choose optimally according to what the others choose; but what they will collectively choose will not be the solution which minimizes the total social cost, and this is the price for letting people choose whatever they want. Now, this is a convex optimization problem, in the sense that capital G is convex, because it is the antiderivative of an increasing function; this term is linear, and this quantity is convex in the variables rho and rho v. As a convex optimization problem, it has a dual problem that I can write, and the Legendre transform appears; the new variables that I will write are phi and h, where phi is a function which solves a Hamilton-Jacobi equation with a datum h. And what you can prove is that if you take a solution (rho, v) of the primal problem and a solution (phi, h) of the dual problem, then, as a primal-dual optimality condition, you need to have v equal to minus grad phi and h equal to g of x and rho, which means that you have a solution of the Hamilton-Jacobi equation with g of x and rho here, and a solution of the continuity equation with minus grad phi. So this is why you get an equilibrium in the PDE sense. The existence of a solution of the dual problem is a delicate matter; it also depends on the functional space that you choose. So there is this variational interpretation. Just a word about the fact that not all mean field games are variational.
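A compact way to write the variational problem and the primal-dual optimality conditions just described (again a sketch, with G(x, .) an antiderivative of g(x, .) in its second argument):

\[
\min_{(\rho,v)}\ \int_0^T\!\!\int_\Omega \tfrac12 |v_t|^2\, d\rho_t\,dt \;+\; \int_0^T\!\!\int_\Omega G(x,\rho_t(x))\,dx\,dt \;+\; \int_\Omega \Psi\, d\rho_T
\quad \text{subject to}\quad \partial_t\rho_t + \nabla\cdot(\rho_t v_t)=0,\ \ \rho_0 \ \text{given},
\]

with optimality conditions linking the primal pair \((\rho,v)\) to the dual pair \((\varphi,h)\):

\[
v_t = -\nabla\varphi(t,\cdot), \qquad h = g(x,\rho_t),
\]

so that \((\rho,\varphi)\) solves the mean field game system. Note that the congestion term in the minimized energy is \(\int G(x,\rho)\,dx\) and not \(\int \rho\, g(x,\rho)\,dx\): the equilibrium minimizes a potential rather than the total social cost, which is exactly the price-of-anarchy phenomenon mentioned above.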
I'm concentrating on variational mean field games, but we already saw, even in this conference, in the session about crowds, some examples of non-variational mean field games: like the one with congestion, a multiplicative cost of congestion, presented by Eve, or the minimal-time game presented by Guillermo, where instead of putting a penalization in terms of velocity and congestion, you have a constraint that the velocity cannot go beyond a certain threshold depending on the congestion. And also, for instance, some classes of mean field games of controls, where the interaction between players is not a cost depending on the density but one depending on the choice of control. And by the way, I presented some years ago a model, which is actually a mean field game of controls, that tries to take into account the constraint rho less than or equal to 1 by adding a drift depending on a pressure; this is also a non-variational mean field game. On the positive side, if you have a variational formulation which is a convex optimization problem, it is very easy to do efficient numerical simulations. This is a very simple one, realized by Jean-David Benamou, that we presented in a survey about variational mean field games and the related techniques. So I have this optimization problem; let me rewrite it. Instead of using the Eulerian formalism, that is, rho and v, quantities which are defined at every point and every instant of time, let's use some Lagrangian formalism, because if we want to find an equilibrium we want to find trajectories, and in the Eulerian formalism that I used, with rho and v, there were no trajectories. So a possibility is to write the same optimization problem using as a variable a measure on the space of trajectories. Let's use as the space of trajectories the space of H1 curves valued in the given domain Omega. Why H1? Because there are kinetic energies everywhere, so they have to be finite: the curves must have finite H1 norm. And then we look for a measure on the space of trajectories. My optimization problem will be to find a probability measure on the space of trajectories with a given initial datum, a given initial evaluation: the image at time zero should be given. And then what I minimize is the following: for every curve I have a kinetic energy, and I integrate the kinetic energy; for every curve I have a final cost, and I integrate the final cost; and then, for every instant of time, I take the distribution rho t, which is the image of the probability measure Q through the evaluation map at time t, so I get a measure on Omega, and I apply this function, which is the integrated congestion functional built from capital G. This is just a rewriting of what I had before in terms of the measure Q. This is also a convex optimization problem, because now this term is linear, this term is linear, and this one is convex, for the same reason of G being convex. You can prove existence of a solution. What are the optimality conditions? The techniques to find the optimality conditions in convex optimization are quite standard: you take an optimizer Q bar, you take another competitor Q tilde, you take a convex combination of them, and you differentiate with respect to the parameter epsilon. And the final result is always the same: if you minimize a convex functional on a convex set, then the minimizer also minimizes the linearization of the same convex functional around itself.
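The Lagrangian version of the problem, as just described, can be sketched as an optimization over probability measures Q on the space of H1 curves, where e_t denotes the evaluation map (a sketch of the formulation, not a verbatim copy of the slides):

\[
\min\Big\{\ \int \Big(\int_0^T \tfrac12|\gamma'(t)|^2\,dt + \Psi(\gamma(T))\Big)\,dQ(\gamma) \;+\; \int_0^T \mathcal G\big((e_t)_\#Q\big)\,dt \ :\ Q\in\mathcal P\big(H^1([0,T];\Omega)\big),\ (e_0)_\#Q=\rho_0 \Big\},
\]

with \(e_t(\gamma)=\gamma(t)\), \(\rho_t=(e_t)_\#Q\) and \(\mathcal G(\rho)=\int_\Omega G(x,\rho(x))\,dx\). The first integral is linear in \(Q\) and \(\mathcal G\) is convex, which is why the problem is convex.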
So if you minimize this complicated function, which is nonlinear because of capital G, then if you define a function small h, which is just small g of x rho, where rho is the one which comes from the optimizer, this is just the linearization of this functional here. So if you minimize this, if q bar minimizes this, then the same q bar also minimizes the functional jh, which is written here, and it is just the linearization of this functional around q bar. Now the important point in this functional is this term. This is: I take a measure q on curves, I evaluate at time t, so I get a measure on omega, and then I integrate, let's say, I integrate according to this measure, a certain function h, which is this one, which is the one which comes from the q bar. Now this can be written — so this is well defined: you are taking a function h, imagine it is positive and measurable, you integrate according to a certain measure, this is well defined. But suppose for a while that this is a very nice function; what you could do is to use this image measure to put it instead of x, so you get h of t, gamma t, because you are evaluating the curves at time t. So what you get is that this quantity is equal to the integral of h of t, gamma t on each curve, and then you integrate over curves. Now the problem is that this quantity is not well defined if the function h is just defined almost everywhere. So here it is well defined because I integrate a function h according to a density on the space, here it is not, because I integrate a function h on a curve, and typically curves are negligible. But forget about this difficulty, so you can rewrite the cost as this kinetic energy plus the running cost plus the final cost, and you integrate according to q, and I was saying that q bar minimizes this, so essentially this means that q bar is concentrated on curves which minimize this quantity. But this means that q bar has been obtained as the optimizer of something, it's q bar which defines h, h depends on q bar because it was g of x rho, and almost all the curves of q bar are minimizers for this cost. So this means that I found input equals output, it means that I found an equilibrium: I found a measure on curves q bar, which gives rise to a certain congestion rho, which gives a certain running cost function h, and my curves composing q bar are all optimal. Now how to deal with the case where h is not, let's say, a continuous function, but is only defined almost everywhere: there is a way to do this, this comes from an idea that Ambrosio and Figalli used for fluid mechanics, there was no game there, and the idea is the following. You have to decide the precise representative of the function h, because you have to integrate on negligible sets, so you have to choose one, and they say: take the average on balls, and then take the lim sup when the radius tends to zero, and never think of taking the lim inf instead of the lim sup — it would not work, in the sense that in their proof they really need the lim sup. This is the representative of h, because in all Lebesgue points this coincides with h. So let me be precise about the result, which is the following: suppose that q bar is optimal, then almost every curve gamma is an optimal trajectory in the sense that it minimizes this cost, which is the cost, this one, defined by taking h hat here, and taking the final cost psi, if you want.
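For reference, the precise representative just mentioned, in the spirit of Ambrosio and Figalli, is the lim sup of the averages on balls; a minimal sketch of the definition:

\[
\hat h(t,x) := \limsup_{r\to 0}\ \frac{1}{|B_r(x)|}\int_{B_r(x)} h(t,y)\,dy,
\]

which coincides with h at every Lebesgue point, so it is an admissible pointwise representative.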
It minimizes this when you compare against any other possible curve gamma tilde, under the condition that the maximal function of h is finite, is integrable on the curve. Where does it come from? It just comes from that: to prove this, you first write some inequality with this average, then you pass to the limit, and to justify the limit you need Fatou's lemma, you need to prove that something is integrable, you need to bound this from above. And the best possible bound for the average on a ball is to use the maximal function, which is just defined as follows: the maximal function of a function h at a point x is the maximal possible value of the average. So the maximal function is just the sup of this. So it's a trivial way of bounding this quantity. So you need this assumption because otherwise you're not able to send r to zero in this inequality. The question is how many curves gamma tilde satisfy the condition that the maximal function is integrable. Now there is a trivial case. If h by chance is L infinity, then the average is also bounded, then everything is fine. I mean if h is L infinity, this is a bounded number. So for sure it is integrable. If h is not, then it's not clear and you have to prove that there are many curves which satisfy this condition, otherwise this result is useless. You would like to say that almost every curve satisfies this condition, but almost every according to which measure on curves? So they prove that if you take an arbitrary measure on curves with some integrability condition on the image at time t, then you can prove that if you have sufficient integrability on rho, sufficient integrability on the maximal function, then you can prove that almost all the curves satisfy this condition. So this comes back to proving integrability of the maximal function. And luckily there is a theorem in harmonic analysis which says that if h is in Lp, then Mh is in Lp, provided p is strictly larger than 1. So finally they insist on the importance of proving that h is summable better than L1. So this is why we would like in some cases to prove some integrability result on the function h, which was, let me recall, h was g of x rho. So essentially you want some Lp bounds on rho. Anyway, I insist on the fact that it would be much better to have L infinity bounds on rho. So how can we obtain L infinity bounds on rho? So let me translate one last time the optimization problem. First it was an Eulerian problem, then it was a Lagrangian problem. So in the Eulerian case I was looking for rho and v. Now the rho v square that I had here, actually, for those who know optimal transport, this is just the speed of the curve in the Wasserstein space. So I can write everything in rho. It is finding a curve rho of measures with a given initial point, a final cost, a running cost and the velocity, the penalization of the velocity. So this is really a problem in a metric space: finding an optimal curve which minimizes speed square plus something. Now for this, to make the proofs rigorous, the best way that we found is to discretize in time, because you can fix a time step tau, let's say capital T over n, and then you write this integral as a sum of distances square over tau, and then you write this again as a sum and then the final sum. And what you look for, instead of looking for a curve, you look for a sequence of measures, with rho zero, which is given, and then you look for rho 1, rho 2, up to rho n, and they have to solve a minimization problem where you minimize a sum of distances square plus penalization.
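Two formulas are worth recording here, sketched with indicative constants: the maximal function together with the harmonic-analysis bound invoked above, and the time-discretized problem just described (the one-half factor and the final-cost term Psi are assumptions):

\[
Mh(x):=\sup_{r>0}\frac{1}{|B_r(x)|}\int_{B_r(x)}|h(y)|\,dy,\qquad \|Mh\|_{L^p}\le C(p,d)\,\|h\|_{L^p}\ \text{ for } p>1,
\]
\[
\min_{\rho_1,\dots,\rho_n}\ \sum_{k=0}^{n-1}\frac{W_2^2(\rho_k,\rho_{k+1})}{2\tau}\;+\;\tau\sum_{k=1}^{n}\int_\Omega G(x,\rho_k)\,dx\;+\;\int_\Omega \Psi\,d\rho_n,\qquad \tau=T/n,\quad \rho_0 \text{ given}.
\]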
Now if you find a minimizer, then each rho k minimizes a problem where only the terms involving rho k appear, which means tau G of rho k and the distances to rho k plus 1 and rho k minus 1. Now for those who know gradient flows in the Wasserstein space, this is very, very similar to what happens when you do the JKO scheme, the Jordan-Kinderlehrer-Otto scheme for gradient flows, where instead of having distances to the previous and to the next measure, you only have distances to the previous one and you have a different scaling in terms of tau. But the techniques that we could use are similar, so essentially from these ideas, it's possible to use this technique, which is called the flow interchange technique, which works in this way. That is, you take the rho k which minimizes this sum, so you have your rho k minus 1, rho k is the optimal one, rho k plus 1. You say that if this is optimal, if I let it evolve according to any kind of evolution that I want, and I differentiate the quantity that I was minimizing, the derivative should be positive, because it's a minimum. Now I let it evolve, choosing to use the gradient flow equation of a certain functional, integral of f of rho, and this is the equation that we have. So I call s the variable, in the sense that it is not the variable t in time, it's another variable that I just use for the perturbation. Now there is a fact, which comes from the theory developed by Ambrosio, Gigli, Savaré, that if the functional that I use is geodesically convex, then the derivative in time of the distance square from my curve, which is the gradient flow, to something, can be bounded by the difference of the values of the functional. Now I start from here, I follow this gradient flow, I have the derivative of the distance square; let me also compute the derivative of the G term, and this I do with the equation, because if I differentiate the integral of G of x, rho s, I differentiate, so here I get the derivative times ds rho s, but then I use the equation, ds rho s is a divergence, so I get the gradient here and I get the expression that I wrote here, and I use the fact that the sum of the derivatives has to be positive and I get this bound. Now I do it for a very particular choice of f, that is the power m. In this way I get a quantity which involves the integral of rho to the m here, and here there is this integral here. Now let's take g, which is of the form a cost depending on the point plus a cost depending on the congestion; let's start from the case where there is no cost depending on the point. What I'm proving is this, which means that, since here I can compute this gradient that I had before, there is gradient rho dot gradient rho, so in the end I have something positive, this ratio is positive, but this is the discrete second derivative of the integral of rho to the m. So what I'm proving is that the integral of rho to the m is a convex function in time. So if by chance I suppose that rho is in Lm at the beginning and at the end, then I have rho in Lm all over time, and this also works for m equals infinity. So if I have an L infinity bound at the beginning and at the end, then I am, all over the time, L infinity. This is something, by the way, that also Lions proved in a very different way, with PDE methods, with the maximum principle. Now it can also be adapted if, instead of supposing that the final datum is Lm — usually in mean field games the final datum is not given. It's penalized by a certain function psi.
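Before turning to the penalized final datum, the two facts just described can be recorded in symbols (coefficients are indicative and the one-half factor is an assumption): the single-step problem solved by each rho k, and the conclusion of the flow interchange with f equal to the power m,

\[
\rho_k\in\arg\min_{\rho}\ \frac{W_2^2(\rho,\rho_{k-1})+W_2^2(\rho,\rho_{k+1})}{2\tau}+\tau\int_\Omega G(x,\rho)\,dx,
\]
\[
\int_\Omega\rho_{k+1}^m+\int_\Omega\rho_{k-1}^m-2\int_\Omega\rho_k^m\ \ge\ 0,
\]

that is, k maps to the integral of rho k to the m is discretely convex in time, which propagates Lm bounds (and, letting m go to infinity, L infinity bounds) from the initial and final times to all intermediate times.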
If you give some assumptions on psi, then you can prove something on the relation between the final measure rho n and the previous one. So essentially you're saying that this function is convex in time with a certain bound on the derivative at the end, which is enough to prove that you have some bounds here. So you can prove that you have some uniform rho m estimates, and then you send m to infinity. So you can get some Lm and L infinity results also if, instead of fixing the last measure, you penalize it. Now, if you don't want to suppose that the initial measure is Lm, and you don't want to suppose anything about the initial and the final data, then you would like to prove that you are Lm or L infinity in between. So there is a sort of instantaneous regularization: you start from rho zero, which is bad, you don't suppose anything on rho T, but you want to prove that you really are L infinity in between. And this means that instead of just using that this is positive, you do something better. Now you give some assumption on g — we said that small g was increasing, so g prime is larger than zero, let's suppose it is larger than something — and you do some bounds here, and you recognize that you have the H1 norm of something, of a certain power of rho, then you use a Sobolev injection, which lets you increase the exponent, and use some Moser iteration. So the important point is that here you have some rho to the m, and here you get some rho to a larger exponent. So you can bound a larger Lm norm with a smaller one, and you iterate and you have to take care of all the coefficients, and you can arrive to prove this, something that we did with my student Hugo Lavenant: that you get L infinity. It means that you get local L infinity bounds independently of what you have at the beginning or the end. This local result was not present in the proofs by Lions, by the way. Then if you want to complicate, you add the presence of V, you have an extra term, you have to take care of it and to prove that it is less important than the other, because it's lower order and so on. So let me summarize the result. If you have a g of x rho, which is a potential cost in x plus a congestion cost in rho, under some assumptions on g and some assumptions on V you can get the following. So the easiest case is that this bound is true everywhere, starting from zero, and alpha is not so bad; then you get L infinity loc. If the bound is only true starting from a certain s zero, then you add some extra assumption on V and you get the same. If you also have an assumption on the final cost psi, you can arrive till the final time. If alpha is too small, the point is that here you get this exponent, m plus one plus alpha, which is smaller than m, and then you multiply it by something, which is two star over two. So you need to start from a certain m to be sure that when you subtract something and then you multiply by something larger than one, then you go beyond m. So you need, in this case, to have at least a certain L m zero summability at the beginning. So essentially you can prove L infinity results by means of the variational structure and by using this technique, which comes from the JKO world, let's say, from ideas by Matthes, McCann, Savaré, and in some sense also Jordan, Kinderlehrer and Otto; you can prove L infinity on rho.
If you prove L infinity on rho, it means that the function h, the running cost that you had, is L infinity, so for sure the maximal function is L infinity, so you don't have to take care of this strange restriction that you only compare to curves such that the maximal function is integrable. So it's a technical requirement, and it's good to prove that you have rho in L infinity in order to prove that the minimizers are an equilibrium. So this is what we did for a mean field game where there is a penalization on the congestion. So you have a penalization on the position plus a penalization on the congestion. Now I want to present a variant, and this will be the final part of the talk. This is the variant where, instead of penalizing the congestion, penalizing the density, I put a constraint on the density. So how can I define a mean field game if I replace the fact that people pay for large densities with a constraint that the density must be smaller than one? And there is an idea which is just: OK, let's do what we do usually. I give rho, I take into account my constraint, so everybody chooses a trajectory, and he has to keep in mind that along the trajectory he cannot go beyond the density one, and he chooses only admissible trajectories which do not go beyond density one. The problem is that if I give a rho which already satisfies rho less than one, and I am a single player, which is completely negligible compared to everybody else, and I choose whatever trajectory I choose, I will not change the density, so I will not violate the constraint. So this constraint becomes empty, and so in the end I will forget about the constraint, so this is not the good way to define an equilibrium. So let's forget for a while about the equilibrium, and let's look at the optimization problem, because this is well defined. I can take the minimization of the kinetic energy, plus I have a cost depending on the point and the constraint, which means considering a convex function G which is linear if rho is less than one, and plus infinity otherwise. I can also write the dual problem by computing the G star in this case. I can see that, for instance, this problem is the limit where, instead of taking the constraint of less than one, I add an Lp penalization, with p which tends to infinity, for instance, which means that in the cost of every player I add a power of rho, which will tend to zero if I satisfy the constraint, otherwise it will explode. Now, if I look at this dual problem, I see that I only have an L1 penalization on h, so it means that I cannot easily find the minimizer, and most likely this h will be just a measure. But if I forget about regularity conditions, let me recall that I said that at optimality I should have that h is equal to g of x rho, but g was capital G prime. Now, in my case, this will be v of x, because if I differentiate this I just get v of x, plus I have the constraint, that is plus something which is in the subdifferential of the indicator function, which is zero and plus infinity. So, essentially, I'm saying that h is equal to v of x plus something, which I call p, which has to be positive and zero unless I am saturating the constraint. And so I can write this system. The system is the Hamilton-Jacobi equation with h, which is v plus p, the continuity equation with grad phi, and the p must be something which is positive and only lives where the constraint is saturated. Now, this p, from the fluid mechanics point of view, it's a pressure.
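The constrained system just described can be sketched as follows; the quadratic Hamiltonian and the factor one half are assumptions made to fix notation, and the last line encodes the fact that p only lives where the constraint is saturated, so p acts exactly as the pressure just mentioned:

\[
\begin{cases}
-\partial_t\varphi + \tfrac12|\nabla\varphi|^2 = V(x) + p,\\[2pt]
\partial_t\rho - \nabla\cdot(\rho\,\nabla\varphi) = 0,\\[2pt]
\rho\le 1,\qquad p\ge 0,\qquad p\,(1-\rho)=0.
\end{cases}
\]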
From the economic point of view, this is a price. It's a price for passing through a saturated region. So, the equilibrium condition will be enforced by the fact that there will be a price to pay to go where too many people want to go. It's completely natural in economics that you only pay a price for a good which is already chosen by the exact number of people which exactly fulfills the availability of this good. And this good is the capacity, rho less than one. Now, I have the same problem: this is the Hamilton-Jacobi equation corresponding to a control problem with running cost h. But what is h? I said that h comes from an L1 minimization. H is only a measure, so it is not even a function. So, I don't only have the problem that h is not well defined at every point: h is not even a function. So, I really need to prove some regularity. Now, this is something that we did. So, we studied this problem with Pierre Cardaliaguet and Alpár Mészáros and we were stuck at this point. And in the end, we found a technique coming from a paper by Brenier, again devoted to fluid mechanics and the incompressible equation. In the incompressible equation rho is equal to one instead of less than one, but the techniques are similar. And we are able to prove, through many pages of computation, that actually this p, in some cases, you can prove that it is L2 in time valued in BV in space, so at least it is a function. And it is a little bit better than L1, so you can have summability of the maximal function, so it's OK. But still, I would have liked to have p in L infinity. Now, just a formal computation about the Laplacian of p. Come back to the Hamilton-Jacobi equation, you take the Laplacian everywhere and you get this: minus dt of the Laplacian of phi plus grad phi dot grad of the Laplacian of phi plus a positive term equals the Laplacian of h. Now, use this notation. This is the material derivative, the derivative along the trajectories. The trajectories of the continuity equation are the trajectories with velocity minus grad phi, so you take the derivative in time minus grad phi dot the derivative in space. If you compute, from the continuity equation, the material derivative of log rho, you get Laplacian phi. This is easy because the continuity equation can be written as dt rho minus grad phi dot grad rho minus rho Laplacian phi equals 0. So this says that the material derivative of rho equals rho Laplacian phi. I divide by rho and I get that the material derivative of log rho is the Laplacian of phi, and you can recognize here that the first term includes the material derivative of the Laplacian of phi. So if you compute the second-order material derivative in time of the log, you get this, then you add this positive term and you get this. Now, look at a point where the pressure is positive. The pressure is positive, rho is 1, rho is maximal, log rho is maximal, and so the second derivative is negative. So essentially what we are able to prove — but this is just a heuristic computation — is that if p is positive, then the Laplacian of p is bounded from below by something, by minus the Laplacian of V. This can be made rigorous in the time-discrete formulation. You write the optimality conditions of these two problems, you have the Kantorovich potentials which appear, then you use the Monge-Ampère equation, which tells you that the determinant of the Jacobian of the optimal transport map, which is identity minus grad phi, should be equal to the ratio between the densities.
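The heuristic conclusion to keep in mind, which the discrete argument that continues below recovers rigorously, can be sketched as follows (the sign convention, with h = V + p, is an assumption):

\[
\Delta p \ \ge\ -\Delta V \qquad \text{on } \{p>0\},
\]

and this is the inequality on which all the estimates on p described next are built.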
You suppose that the final density is 1, so this density is 1, the other is smaller, so this is larger than 1, and you use the arithmetic-geometric inequality. In the end, you find that the Laplacian of these two terms is positive and you get again the same. So let's keep in mind that we have this. And now from this, you can obtain all the regularity that you want. You take this, you test this inequality against p, and you get, first of all, grad p squared smaller than this. You do a Young inequality here, and you get this: grad p squared is smaller than grad V squared. This, for instance, proves that you are L infinity in time, valued H1 in space, under the only assumption that V is H1, which is quite a big improvement compared to being L2 in time, valued BV in space, under the assumption that V is C 1,1. Of course, there is a trick. This only works when we have the Wasserstein distance squared. It means it only works for the quadratic Hamiltonian, while the proof that we had with Pierre and Alpár could be adapted to more general Hamiltonians. Then, if instead of testing against p you test against a power of p, you get the gradient squared of some power of p, you do the same trick of using Moser iteration, to use the fact that H1 norms are able to bound larger powers, larger Lp norms, and under some assumption on grad V — grad V belongs to a certain Lebesgue space with an exponent larger than the dimension — you can prove that p is L infinity. There is a last point: all these computations work well inside, in the sense that this optimality condition is different if you are at the last time step. You only have one Wasserstein distance, so there is an extra pressure, which will be created at the final time. In the end, you can prove that the pressure that you have is an extra price to pass through a saturated region plus an extra price to arrive into a saturated region. But all these quantities can be proven to be H1 and then L infinity. And once you have L infinity, you don't need anymore to use maximal functions and so on. So you can really justify that, if you choose the correct representative, then you have that almost every curve is optimal for a running cost which depends on the congestion created by all the players. So essentially what I wanted to present was this class of variational mean field games, where there is an energy that you minimize and the optimality conditions, once you prove the correct regularity to justify everything, impose that you are at an equilibrium for the game, where each player has a cost which depends on the congestion created by all the other players. So here these are just some applications that you can deduce from the fact that you have some L infinity bounds on the right-hand side of the Hamilton-Jacobi equation: you can deduce some extra information about the solution of the Hamilton-Jacobi equation, so about the value function. You get something better: you can prove that it is Hölder, you can prove that it has better Sobolev regularity. And you can really prove that the solution of the Hamilton-Jacobi equation is the value function for this precise choice of the representative. And I thank you for your attention.
|
In the talk, I will first present a typical Mean Field Game problem, as in the theory introduced by Lasry-Lions and Huang-Caines-Malhamé, concentrating on the case where the game has a variational structure (i.e., the equilibrium can be found by minimizing a global energy) and is purely deterministic (no diffusion, no stochastic control). From the game-theoretical point of view, we look for a Nash equilibrium for a non-atomic congestion game, involving a penalization on the density of the players at each point. I will explain why regularity questions are natural and useful for rigorously proving that minimizers are equilibria, making the connection with what has been done for the incompressible Euler equation in the Brenier’s variational formalism. I will also introduce a variant where the penalization on the density is replaced by a constraint, which lets a price (which is a pressure, in the incompressible fluid language) appears on saturated regions. Then, I will sketch some regularity results which apply to these settings. The content of the talk mainly comes from joint works with A. Mészáros, P. Cardaliaguet, and H. Lavenant.
|
10.5446/54172 (DOI)
|
Thank you very much, Peter, for the introduction, and Francesco, and the audience. I'm going to go over the overview of the talk, and in the first part of the talk I will review some older studies of pedestrians passing a bottleneck. We will look at the influence of spatial structures of the boundaries, bottleneck length, how they influence the process, how cooperation and competition influence the process. In the second part of the talk I will introduce two experiments performed in our group, recent experiments; some of you may have heard of them or seen the paper. The experiments are very fresh and the results too. We are talking about the movement of pedestrians passing a bottleneck: we have an incoming flow, we have an outgoing flow, and we have bottlenecks of different widths and lengths, and also the room or corridor in front of the bottleneck could have different widths. When we are checking the phenomena we can observe here, then we have, of course, if the outgoing flow is less than the incoming flow, we have a congestion. So in case of a congestion, we can observe that the density increases up to a certain threshold and then the congested area grows in the opposite direction of the bottleneck. And maybe sometimes we can observe another phenomenon, which is clogging in front of the bottleneck. This clogging has something to do with competition and cooperation. And I have here two examples: this is cooperation, this is competition. You can, you watch here, one of the exiting persons is even not able to exit because the others are pushing so strongly here; now, look here. So the competition is clear here, people want to find a seat or want to have a place at least in this train. Here we have a more cooperative situation and, yeah, of course, what has this competition and cooperation to do with bottlenecks? And that's what I want to show you. To introduce it: a very old experiment, performed, yeah, more than 50 years ago by Mintz. He had groups of students and the task was to pull out the cones dry. The water is flowing in here, you have these cones here in the bottle and only one cone can leave the bottle at a time, only one cone. And then he has different setups and instructions; some were with rewards, a little money, a very small amount of money, and they did not have the opportunity to discuss. And some of the students were advised to, yeah, to scream and to make noise, to create something like an emotional arousal. Yeah, at the end, what he found out is the following. As soon as you have a reward, you will observe a clog. If you don't have a reward, people are able to pull out the cones. It's very easy, it's clear: if you give a reward, then people will try to compete and then you have a clog. It's easy, but it's very important for what we will see next. Whether these clogs also appear with pedestrians at bottlenecks can be seen here; that's a video — these experiments were performed by Majid Sarvi in Australia, and I was present there and had the opportunity to film from above, and you can see that there is one moment now where people are blocking and interrupting the flow for a certain while. Again, here, you see the clog and then the flow is somehow reduced and then it continues again. Okay, so let me try to combine what I have said now in the introduction. So we are talking about bottleneck flow, we have incentives and rewards, motivation, cooperation, competition, and also the phenomenon of clogging.
So as soon, or as long, as people cooperate at the bottleneck, they keep distance, they give way or they even stop to let someone pass. That is the cooperative setting. In such a cooperative setting, clogs are very unlikely. Very unlikely — it could happen by misunderstanding; maybe you know these old movies from Dick und Doof — how is it called in English, Dick und Doof? Does anyone know? No. Stan and Ollie — and they try to pass together through a door and they always clog. Something like that, and of course that happens, but normally in a cooperative setting you don't see these clogs. If you add a special incentive or a reward, then maybe people start to compete passing a bottleneck. In crowds, these rewards could be seemingly small. For us, thinking about the situation in the train, it's not so important to get this place in the train, but for other people it is. So the rewards could be seemingly small, but they could also be very, very high. It could be your life in case of a dangerous threat. The reward you get passing the bottleneck could be your life or your survival. Okay, in a competitive setting, people move fast, they get closer, they fill gaps and even start pushing and using the elbows. Okay, how are competition, clogging and flow related? Of course, due to the competitive behavior, the probability of clogs increases. But even if the probability increases, it's not clear whether the flow decreases. That's a very important message here. Maybe because you have some influences like getting closer, being faster, which improve the flow. On the other hand, you have clogs reducing the flow. Now which effect is stronger is not clear. That's not clear. So the questions are: what is the influence of the spatial structure of the boundaries on the flow, and how does motivation, triggered by incentives and rewards, increase or decrease the flow? And how does it influence the probability of clogs and the density in front of the bottleneck? Those are the quantities I want to study in the next slides. So first we start with the bottleneck flow in a cooperative setting, spatial structure. And we look only at the influence of the spatial structures of the boundaries; these are older experiments — not very old, performed ten years ago — and we used at that time soldiers and we instructed them to move without any haste but purposefully. They have a goal, they want to pass through the bottleneck, and we varied the bottleneck width and we varied the bottleneck length. And what we found out, also comparing these results with studies of other researchers, is that with the width of the bottleneck the flow increases continuously. So in all experiments I have looked at and what I have seen in the literature, it is clear that the flow increases continuously with the width. If we look at the influence of the length of the bottleneck — that is a bottleneck with a length of four metres, this is a corridor with a very short length, only a few centimetres, ten centimetres here — and you compare this also with other results, you see that if the bottleneck is very short, you have higher flows than in long bottlenecks. Maybe in this short time they pass, they get closer and improve the flow. To sum up: first of all, under normal conditions, a cooperative setting, the flow increases continuously with the width of the bottleneck, and short bottlenecks lead to larger flows than long bottlenecks. Now let us bring in competition — that was the cooperative setting.
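For reference, the flow values compared throughout are obtained from the passage times of successive pedestrians through the bottleneck; a minimal sketch of the definition used later in the talk (the inverse of the mean time gap) is:

\[
J \;=\; \frac{1}{\langle \Delta t\rangle} \;=\; \frac{N-1}{t_N - t_1},
\]

where t_i is the time at which the i-th pedestrian passes the bottleneck and Delta t_i = t_{i+1} - t_i.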
And now I have found three papers studying this effect of cooperation and competition. One is from the former East Germany, in Magdeburg, also more than 30 years ago. And he used 150 and 100 test persons and he varied again the corridor width and the bottleneck — the corridor width in front of the bottleneck and the bottleneck width. And he gave two different instructions to the test persons. In some of the runs he said: move normally or smoothly and try to consider the others. And the other instruction was: there is a danger, run for your lives. Those are the experiments, and what he observed when he looked for the clogs here in front of the bottleneck: he found that in the competitive runs we have clogs at short intervals if the bottleneck is very small, less than 1.1 meter. And if you have a bottleneck of 1.2, he has not even seen any clogs. And for 1.6, no clogs observed, a fluid and homogeneous flow. What does it mean for the flow? I checked these tables in his PhD thesis and I have not found any experiments where the flow is less in the competitive setting. So the flow was always higher. Even at very narrow bottlenecks, 0.6, the flow was always higher. The next experiment was Muir et al. She studied safety in airplanes and she also varied the bottleneck width via the galley kitchen in the airplane, and what she observed is — okay, she has observed these arcs and clogs. And in the exit time, which is the inverse of the flow, she found a crossover at very narrow exits. You see, in the competitive setting, here the exit times are larger than in a non-competitive setting, but only for very small widths of bottlenecks. In all the others, the evacuation time was smaller. And now the last experiment, that is from Garcimartín, and he used two doors, also not so wide, it's 0.70 and 0.75, not large doors, small doors. And he has three levels of competition, by instructing the pedestrians in the following way: you have to exit the room by following the following rules. Low competition means avoid intentional contact. Medium: you could have contact, it's allowed. And high: moderate pushing is allowed. And then he has different numbers, and they have the small door, that was the 0.69 meter wide door, and the large door, 0.75, and then he has different levels of competition. And he repeated the runs. And let us get an impression: that is high competition. And you see it's very strong. People are pushing, students are pushing quite strongly, and here not so strongly. That's low competition, that's high competition. Okay, in the high motivation or high competition runs, you can again observe these clogs. To explain what you are seeing, here is the following. If you play the movie and you cut out this line of pixels and then you add them here, stripe by stripe, you see something like the pedestrians passing the bottleneck here. And if no one is passing, if there are, how to say, holes, that means that there was a clog in front and people are not able to pass. That is what you will see. Now let us look at what this means for the flow. And again, these are the time gaps in between two successive passings of pedestrians; the inverse of the mean value is the flow. And again, you see, even if the clogs appear more frequently at high competition in comparison to low competition — you see these are the outliers here from these clogs — the mean value does not change. So again, here we have, or let me summarize it in total: in general, a high motivation improves the flow, always.
Not always. In general, a high motivation improves the flow. People move faster, they fill gaps, they get closer. And of course, if you have a high motivation, clogs could appear. But in all experiments we have seen, the clogs do not lead to a decrease of the flow through the bottleneck. So a negative effect of motivation, or of a high motivation, on the flow is only evident for very small widths and for very competitive settings. That is the point. And I have here a sketch of what I believe is the picture. If you compare the data — I was not able to do it, I had no time — but here is a, how to say, qualitative sketch. So this is the flow, that's the width of the bottleneck. With a high motivation, in a competitive setting, you have in principle higher flow values than with low. At very small bottlenecks, maybe you can observe these effects, like the clogs reduce the flow, or it's also called faster-is-slower. That is the only region where you can observe, maybe observe, faster-is-slower. Okay, now I come to the new experiments. Now I will not look at the flow, I will look at what happens in front of the bottleneck, and how different instructions to the test persons and the structure of the corridor in front of the bottleneck influence the density in front of the bottleneck. Now I don't look anymore at the flow, just at the density. And that was an experiment proposed by a very experienced crowd manager. He's doing really great concerts, the large concerts with more than 10,000 people. And he said, I want this experiment, Armin, please do it for me. And we organized it and it was a surprise. For me it was really a surprise. What we did is we had one setup in the following way. Here are the entrance gates, here are the people arranged in a semi-circle in front of the entrance gates. And they were advised that they have to imagine that it's their favorite artist and they want to get a place very near to the stage. And they should try to be one of the first passing the entrance. That was setting one, that is setting two, the same instruction. The experiment was performed five minutes after the first one and it looks the following way. That is the setup semi-circle, that's the setup corridor. And now let us watch what happens. Very good test persons, they do what we want, what we said to them, they compete to get inside — very good. You see two things: you have something like a constriction effect at the beginning. Here you have the constriction, then the density is very high. And the movement, or the moving velocity of the people, is very low. Okay, that was experiment one. Now let us check experiment two, same advice to the test persons. Try to be one of the first passing the bottleneck. No constriction. And if you compare it to what we have seen before, a very small density. And that is what we will now measure. We measure the density — here you see the trajectories, the paths of the people moving through the bottleneck. And here we measure the density inside the rectangles. And you can see in the semi-circle setup you have densities up to eight persons per square meter, while in the corridor it's less, it's only around four to five persons per square meter. What we tried then to measure is something like the fairness of the procedure. And we defined the fairness of the procedure in the following way. We measured the distance to the entrance and the waiting time.
And if the waiting time and distance to the entrance is correlated, strongly correlated, then we could call it a fair procedure because someone who is near waits for a short time, someone who is far away has to wait longer. And that is an unfair procedure because people are very near and they have to wait a long time. Other people are far away and they can get in quickly. So that is an unfair procedure. And now let us check this for the experiments we have performed. That's the semi-circle. That's the corridor. And you can observe these constriction effects. So people are moving fast to the entrance and then they get stuck in front. And then we have a very correlated function. Meaning that the constriction is something like guaranteeing the fairness of the procedure. So the constriction in fact correlates the waiting times with the distances. Instead in the corridor where you have lower densities, you see that the correlation is not so strong. It's still correlated, but it's not so strong as in the other. And you can observe that people pass because there is more space. If you have lower densities, you can pass. And that's not possible if you have a very high density where you are not able to pass or to overtake. So what we did next because I was not able to understand it, we give the same advice. And there are different, how to say, totally different results in the density in front. And I asked a social psychologist whether she can explain what happens there. And she had the idea to make a questionnaire study. And we did this questionnaire study around one year later by showing the freeze frames of the experiments to students. And then we give the advice, imagine you are one person here inside this ellipse and you want to try to enter here. And it's your favorite artist concert, all the same things. And then we played the movie, we asked them to fill the questionnaire again, and then we showed the second setup. So we have now four questionnaires, one rating the situation before by looking on the standstill, and one rating the situation after they watched the movie. And what we asked them is how just is the entrance procedure, how likely it is that you will be one of the first, of the first 100. So being successful in being near to the stage. How comfortable you feel, whether you can contribute to access the concert faster, so could you do something to reach your goal. And which rules apply, whether it was open end questions. And that is what we found out about the people said. So we have here the semi circle is the blue one. And you see that the perceived justness, that's very important. So that's the justness of the semi circle is rated low. They say that is a competitive situation. It's not just. And the other one is very high or is higher than the other and also the level of comfort for these highly packed crowd is lower than for the corridor example. And if we ask of forms of inappropriate behavior, people stating something like it is inappropriate to push and to show other people. That's clear. But if you ask them, what is your strategy to be successful? The answer pushing and showing. So in other words, on the one side, they say that's inappropriate behavior, but on the other side, it's my strategy. Something like it's clear. And the rules which apply the strongest wins or the right of the strongest that is in the semi circle in the corridor. It's something like the norm of queuing. And that was the intention. This result the norm of queuing. 
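A minimal way to quantify the fairness notion used above — the specific choice of the Pearson correlation coefficient is an assumption on my part, the talk only speaks of a stronger or weaker correlation — is to correlate the waiting time T_i of person i with his or her initial distance D_i to the entrance:

\[
r \;=\; \frac{\sum_i (T_i-\bar T)(D_i-\bar D)}{\sqrt{\sum_i (T_i-\bar T)^2}\,\sqrt{\sum_i (D_i-\bar D)^2}},
\]

with r close to 1 indicating a fair, queue-like admission (those who are closer wait less) and smaller r indicating overtaking.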
Let us remind that — okay, let us compare situations like the safety and security check in the airport. Yeah, you have the queuing systems and people do not start to push. But in the train setting, maybe people push without a call. So it is something like a social norm. People from England know very well what queuing means. Other countries don't know that as well as people from England do. And in fact, we had the idea that these smaller corridors are something like a trigger indicating that this is now a queuing situation. The other is more a competitive situation, or one with no rules. That was our idea. And what we did then is we made a second experiment looking at whether this idea is true or not. And what we did is we studied the influence of the corridor width and the motivation. So we have different degrees of motivation and different corridor widths, to look whether there is a crossover in this pushing behavior or not. These experiments were performed in January 2017 at the University of Wuppertal, and what we did is: the corridor width in front of the bottleneck was varied, five widths, we have 1.2 to 5.6 meters. We tried to get the test persons in between two lectures and it was very difficult. Yeah. Unfortunately, there were only, how to say, few runs with numbers larger than 40 students. The others we will neglect in this study, but we will look at the runs with larger numbers of people, and we had different motivations, high and low. And again, we advised them: okay, that's the entrance to the concert of your favorite artist. And in the high motivation runs we said only the first 100 of the audience will have a good view of the artist. The others will fail. Yeah, in the low motivation runs, we said everyone will get a good place. Okay, I will stop soon. These are two runs with high motivation. But I can go back. So that was the setup, and now look — these are high motivation runs, and that is a width of 1.2 and that's a width of 5.6. And you can see that already here you have a wall. Yeah, the competition, or how to say, the motivation is very high, but you can see differences. Now, let us study it a little more in detail. These are Voronoi diagrams we use to measure individual densities, and you can see that here the individual densities reach values up to 10 persons per square meter, while in the other not. Same motivation and comparable numbers of pedestrians or test persons. So that was the idea. Now, let us look at the time series of the density here, directly in front of the bottleneck or the entrance. And you see here different fluctuations. Okay, and what we did to compare these different runs, because they have different lengths: we studied the window of five to 10 seconds, where we measured the mean value of the density in front of the bottleneck, for high motivation and for low motivation. And what we see is, in fact, that there are two effects influencing the density in front of the bottleneck. So you see two branches: runs with high motivation start here at a density of five to six, increasing up to eight, while densities at low motivation also increase with the width, starting from two and reaching four to five. And also here one run with six. And here, to get an impression: the movies above are from high motivation, these are low motivation.
Of course, you can observe that the density is in fact much higher for the high motivation than for low motivation, and that was confirmed here in this. But we have another effect, which is independent from the motivation and just depends on the width of the corridor. If we look at this correlation between waiting time and distance to the entrance, we have something interesting. We have to say, here these are the low motivation runs; that means that people first stand and then, after a while, they start moving. It's like what we know in front of a traffic light: the first are moving and the others standing behind move a little bit later; that is what you see here. Also here people are waiting. But in the competitive situations, in the high motivation, people start moving immediately and then they get stuck, as we have seen — but not all. That is apparent here: even if we advise people to be highly motivated, some of the people wait and others start running and moving forward. So I will try to sum up now. I'm finished. I'm finished already. Okay, let me sum up now. Of course, we have observed both queuing and pushing behavior, and pushing is indicated by a high density, and high density is facilitated by a wide corridor width but also by increasing the motivation. So, clogging — again, clogging only occurs at small doors, with minor relevance for large crowds. If someone has a thousand pedestrians and wants to send them through a door which is smaller than one meter, that is crazy. It's clear that this is not reasonable, but there are other risks in highly dense crowds. And that's, for example — this is also a movie from the group of, let's say, Garcimartín and Zuriguel. And these are also the trajectories of the pedestrians in this competitive situation of the paper I have shown you the results of. And if you look a little bit in front — one moment, sorry — you can observe something like that: in a highly competitive situation people have some movements in that direction. That is very important and maybe in connection with a high risk. And another point will be here. If you look a little bit closer, you always see that in the moment people are going out, when the clog is released, they really, say, stumble. And maybe this is a much more important risk at bottlenecks than this effect of clogs in front. And that has to do with the three-dimensional character of our body. We have the largest width up here at the shoulders, we have a small width down here, and we are pushing here, and then if the pressure is released, maybe there's another leg in the way. Yeah, you can see it. Yeah, okay, at the end I want to thank Juliane Adrian, who has done most of the analysis of experiment two, and Anna Sieben, with whom I cooperate in this social psychology field. And of course the students of the university and the other partners in the group. Thank you. Thanks. Questions. Regarding the first part of your talk: so one of the main effects that one would expect in a competitive situation is an increase of the desired velocity. Are there any experiments where you have an increased desired velocity, for instance, asking the participants to run but to cooperate at the same time, and compare the flow with the competitive situation? To figure out what actually the influence of the increased desired velocity is. Yeah, you are, you are pointing to this term faster-is-slower. No, no, not necessarily, not necessarily. Okay, yeah. To understand the influence of the desired velocity at all.
Yeah, that is an interesting point, but I don't know whether there are any experiments. So I have not heard about something like that. But maybe will be an interesting considering these results would be interesting to see how the desired velocity probably it's difficult to design such an experiment. I mean to have people running and corporate at the same time is probably not so easy. The challenge. Yes. But maybe regarding this, it's worth to try something. That's right. Thanks. I had a question about the size of the crowds. How many people do you need in order to see the clogging to what to see gloating to. No, in fact, that is that is that's serious what I want to say it's to like we can try it afterwards and show you how it happens. I'm asking because you seem to limit it to more than 40 participants when you are looking at experimental. Yeah, that is something different here. I that is the point is what you can observe is that to how to say to enable the crowd to develop a certain density it needs some time. And if you have only how to say a low number of participants, they already exit before the density increases. That was the reason why we excluded the runs with slow numbers of participants in the study. Yeah. Thank you. So my question. So you are. So my question. What happens when is there a sort of modeling for for the case when there is a sort of opposite flow. So some people want to enter the room and we want to exit. And so from the we use the same bottleneck one one portion of people to exit the building and there was a portion to enter. So what happens in this case for the flow of other density for the flow you want to ask. The flow. Yeah, I know that there was one experiment at what I don't remember. I have to check where in fact they found out that in a bidirectional stream the flow was higher. Even higher. Yeah, then in the uni direction. But again, I would say that it depends again on the level of competitiveness and other factors on the with also yeah it. I'm not sure. For instance, this this made was a traffic jam on the very top of the Everest. The highest mountain world and people. You know that people want some people wanted to to get the top and the other need to get back. And it was okay because I know I can I know what the example of bottleneck because the people can walk only one after one there. Okay. In fact, if you have something like how to say if this is a bottleneck, and you have only one person in front and one person here, it's clear that the flow should be reduced in the bidirectional. But I'm not sure what happens if you have something like that. Then I'm not sure. For here I'm sure it will decrease the flow. For here I'm not sure. So it depends again on the width and what's happened there. Thank you. So thank you. I'm just a bit sorry for me. I know we know each other for quite some time. Thanks again for the for the talk. I have two questions. One is more during the talk, it was bouncing to my mind that you were basically trying to make an analysis of the effectiveness of the system compared to the level of selfishness of the agency in a way. So it came to my mind that probably you're doing a sort of evaluation of the price of anarchy. And while when you are reducing the choices, you are basically reducing the opportunities and efficiency seems to be stable. But there's another measure of efficiency that could be related to risk that is getting better. So lower density, lower risk. Yeah, that's right. Yeah. No, of course, of course. 
That is also what the people stated if you ask them where they feel more comfort — and it's not only more comfort, it's also lower risk if you have not so densely packed crowds. That's totally right. Yeah, the questions are: what are the right measures to rate the movement of crowds, and that's not only effectiveness in the flow. There are also other measures. That's clear. Yeah, thank you. Thank you. What are the consequences of your experiments on modeling? Yeah, don't trust models, because in most models you will observe clogs very often, in many situations, even if it's not realistic. So that is my short message, but okay — no, I have to say something else. I would say most of the models we are using till now, or what I have seen at the moment, are somehow modeling a competitive behavior, by forces or by volume exclusion or by something like that. Now, in the last years, there were more interesting, how to say, models appearing, and Rafael will talk about that today — or yesterday — today, where you try to model a cooperative behavior. And that is very difficult. I would say it's easy to make a force model for competition, but a realistic model for cooperation is hard. Because the strategies to cooperate with others are totally different. There is a multitude of solutions, and so I would say: let us try to make good models for cooperation. What I would ask: because these are humans, there is the whole question, of course, of human psychology. So I am wondering, at just the physical level, are there any experiments with physical objects with increased pressure? Obviously, one thinks of incentives as being pressure on the system. And I'm just wondering to what extent, if you take physical objects of some dimensions, not infinitesimal particles, how do they behave under increased pressure? Yeah, I know that there are of course studies in the field of granular media. So if you have sand, grains or pills or something like that — and I know that, no, I don't know. I have to check. I have no results, but I can give you an article where these things are summarized. I find myself asking to what extent we're seeing here physical phenomena, and to what extent we're seeing psychological phenomena. I would say that most of the phenomena I have seen are physical phenomena. Because the main thing which triggers the effects we are discussing here is the volume exclusion. And volume exclusion is independent of competition, of how you feel and things like that. But it is influencing the quantities we are interested in the most. That's the strongest influence. I would say it's really, it's still physical — of course you are right, it's physics and psychology. It's both. But, how to say, without volume exclusion, you can imagine that you will see nothing. If you have no volume exclusion, if you have overlapping particles, you don't see anything. Thanks very much. You stressed at the beginning, in the first part of your talk, you stressed a lot the fact that clogging does not imply the reduction of the flow. And what about the second — in the second part, can you maybe discuss something about the fact that the density — you're discussing the density and not the flow. No, again the question, Francesco, I have not — in the first part, you discussed that aspect that is important for you. In the second part of the experiment, you somehow forgot about flow. What can you say about the flow?
Okay, in every experiment we performed, we don't see the effect that with high motivations of flow decreases. Okay, does it increase? No. Okay. No, it doesn't. And there the width was very small, but it was an entrance gate for concerts. So people are also able to lean out there. So it's not comparable to the others where you have this higher. But we have not seen any effect decreasing or increasing the flow. No significant effect. Okay. No.
|
The talk summarizes the empirical state of knowledge on bottleneck flow and introduces an approach to describe crowd disasters. The approach combines quantities well known in the natural sciences with concepts of social psychology. It allows one to describe crowds at bottlenecks under exceptional (life-threatening) or normal circumstances. On the basis of empirical data, the influence of the spatial structure of the boundaries (width and length of the bottleneck) and the motivation of pedestrians on flow and density will be presented. The phenomenon of clogging and its effects on the flow will be discussed in connection with congestion, rewards, motivation and pushing. Positive effects of pillars in front of bottlenecks are critically questioned by recent experiments. Results of two experiments including questionnaire studies connect flow and density with factors of social psychology like rewards, social norms, expectations or fairness.
|
10.5446/54173 (DOI)
|
So indeed, my title today is Mean Field Games on Unbounded Networks and the Graphon MFG Equations. This is work with Shuang Gao, my current postdoctoral research fellow and former student, and Minyi Huang, professor at Carleton University in Ottawa and former student of Roland Malhamé, who is in the audience, and myself. The program is: some propaganda, as if you need it, for mean field games, which have been beautifully presented in the PDE context in several talks at this meeting — just a slightly different perspective, and to introduce a key feature. Then we introduce the motivation for considering what is effectively a new mathematical theory of complex networks, which I hope, in all modesty, might be relevant to the study of crowds, and for which we are developing a control theory. Then we present our initial results — initial because we are just starting on all of this, on graphon control systems and graphon mean field games — and we finish with a linear-quadratic-Gaussian example. So there are essentially four parts to this talk, and I'll try to step through them efficiently. So, where Minyi, Roland and myself started with these stochastic models: the many-player game setup is given here. I hope these slides are not too faint for you to see; the images don't look as vivid here as they do on my laptop screen. The notion is that one has a large number of agents, as you've heard in the talks up until now in the meeting, with states denoted z_i, and the dynamics of each agent are given by an averaged deterministic drift part and a stochastic term, the diffusion term, where one has an average of influences over the other agents. What I want to draw your attention to right at the start is the fact that one has a flat average, 1 over N. So one is averaging over the influences of all the other agents: notice that the i-th agent's state appears in each of these functions as they are averaged over the states of the other agents, and here one has the state of a major agent, which stays an order-one entity as the number of what we call minor agents goes to infinity. Notice that the i-th state also appears in the intensity term on the Brownian increments, and we are again averaging over the states of the other agents. Similarly, the last functions are the performance functions for each agent. This is what makes it a game, of course: the individuals have performance functions attached to themselves, so they have their individual interests. Here we have, for the order-one major agent, its performance function, which is the expectation of the integral of the average of the cost-rate functions of the influences of the other agents, and here we have the minor agents — the agents who are asymptotically negligible — whose cost functions depend upon the major agent and are of course averaged over the other agents. Underlying all this we have a complete filtered probability space generated by the mutually independent initial conditions and the independent Brownian motions of the agents.
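As a quick illustration of the flat 1-over-N coupling just described, here is a minimal simulation sketch; the pairwise drift f, the noise level and all parameters are invented for illustration and are not from the talk's slides.

```python
import numpy as np

# Minimal sketch of the flat 1/N coupling: Euler-Maruyama discretization of
#   dz_i = (1/N) sum_j f(z_i, z_j) dt + sigma dW_i
# with a hypothetical pairwise drift f (attraction toward the other agent).
rng = np.random.default_rng(0)
N, T, dt, sigma = 200, 5.0, 0.01, 0.3

def f(zi, zj):
    return zj - zi                 # made-up choice of pairwise influence

z = rng.normal(size=N)             # independent initial conditions
for _ in range(int(T / dt)):
    drift = f(z[:, None], z[None, :]).mean(axis=1)   # the flat 1/N average
    z = z + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

print("empirical mean and spread after time T:", z.mean(), z.std())
```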
Right, now the way that we go to the limit here is to consider a McKean–Vlasov equation where the trajectory of a generic particle is governed by — what's happening is that in the limit the average over the other generic agents, along the generic paths of the system, turns into an integral against the measure governing the generic path of an agent. So what we have is an extended Markov process, in the sense that a solution is not a single trajectory but a pair: the trajectory of a generic, typical agent, and the measure governing that agent. Now, I'll make a remark at this point: it's because we come in with a formulation like this that we appear to join up with the formulation that Guilherme Mazanti presented earlier in the conference. Then, similarly, the infinite-population limit costs are written in this form of an integral against the measure of a generic agent. One of the key features of mean field games is that the strategies of the individual agents depend only on their local information. This is highly motivated from the point of view of large-scale systems, but it means not only that each agent is restricted, in its information pattern, to its own state and deterministic quantities which it can compute, but that it's of no value for it to do anything else. This is a key feature. So this simply spells out the fact that the local information is the trajectory of the individual agent, and that there is also a global population information set. Now, the notion of equilibrium which we're using — which is basic to all of the previous mean field game presentations, and I don't want to overlap with those, because they were complete and elegant in the partial differential equation framework which was being used — is the Nash equilibrium: as I'm sure you're familiar, a unilateral move by a single agent yields no gain when everybody is using those strategies. Formally, a set of controls adapted to the local information sets generates a Nash equilibrium with respect to the individual performance functions when the loss incurred by the i-th agent cannot be reduced if it makes a unilateral move away from the prescribed set of controls. Notice that this is actually in a strong form, because we're not just allowing the delinquent, unilateral agent to deviate with respect to its own information set — we actually allow it to make any move with respect to all of the possible information which is available. Right, so this is the famous picture of a zero-sum game equilibrium between two players, which adds nothing technically to this talk except to give you the famous image of a Nash equilibrium. What we actually use, because we are very interested in approximations, is an epsilon-Nash equilibrium, which is essentially as it says: given an epsilon, the Nash property holds up to epsilon — namely, if you make a unilateral move you can gain up to epsilon's worth of improvement in your performance.
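In symbols — my notation, not the speaker's slides, with a scalar state and the control suppressed where convenient — the generic-agent limit dynamics and the epsilon-Nash property just described read schematically as:

```latex
% Schematic generic-agent McKean-Vlasov dynamics and the epsilon-Nash inequality
\begin{aligned}
  dz_t &= \Big(\int_{\mathbb{R}} f\big(z_t,u_t,x\big)\,\mu_t(dx)\Big)\,dt
          + \sigma\big(z_t,\mu_t\big)\,dW_t,
  \qquad \mu_t = \mathcal{L}(z_t),\\[4pt]
  J_i\big(u_i^{\circ},u_{-i}^{\circ}\big) &\le J_i\big(u_i,u_{-i}^{\circ}\big) + \varepsilon
  \quad\text{for every admissible } u_i \text{ adapted to agent } i\text{'s information set.}
\end{aligned}
```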
Okay, so the fundamental partial differential equations, which as I've said you've seen at this meeting, are the following. Going to the limit — assuming all the limits exist — the generic agent's best-response strategy is generated by the solution to a McKean–Vlasov Hamilton–Jacobi–Bellman equation, displayed here, where the measure for the generic agent is generated by a McKean–Vlasov Fokker–Planck (Fokker–Planck–Kolmogorov) equation, displayed here. As was clearly emphasized by Martino Bardi, the Hamilton–Jacobi–Bellman equation is propagated backwards in time and the Fokker–Planck equation is propagated forwards in time, and this is equivalent to the stochastic differential equation describing the behavior of an individual agent subject to the controls. Now, the key point — which has been mentioned before, but these ideas are definitely worth saying more than twice — is that a large-scale game problem, which is intractable (game theory problems are intractable with three or four agents, so that's hardly what large-scale means; the game problems we're thinking of have on the order of 200,000, 300,000 or millions of players), is turned into a stochastic control problem. Because at the equilibrium — which of course we haven't yet proved exists — the individual agent is playing against the mass: what it sees is this deterministic measure propagating, and it plays against that. So the way the magic circle is entered, the way the equilibrium works, is that it turns a game-theoretic problem into a stochastic control problem, and of course the question is whether the equilibrium exists. The answer is yes, subject to technical conditions. Now, I attempted to spell them out in a little detail here, because it's not really satisfying when somebody flaps their arms and says "subject to technical conditions" which are never made explicit — so I'll make them semi-explicit: roughly, for all the functions concerned one assumes uniform continuity and boundedness of the functions and their derivatives, and Lipschitz continuity with respect to the controls. We can supply the details, but I don't want to go into them here. Then there does indeed exist a unique Nash equilibrium, generated by the state-dependent control of a generic agent, and it depends on the solution equilibrium measure. Furthermore, we have the epsilon-Nash property: for every epsilon there exists a population size N(epsilon) such that for all populations larger than that, if one plays the infinite-population strategy in the finite-population setting, one is within epsilon of an equilibrium. Okay, now the reason we consider this to be very important is that the application of MFG theory in a quote-unquote practical situation, on a problem with a large number of agents, is hugely simplified by this fact. Say one has 300,000 players: instead of trying to solve a 300,000-player game problem, one goes to infinity; if, say, it's a mechanical situation, one might have a couple of pairs of coupled PDEs, so one solves a dozen PDEs as against 300,000 PDEs — assuming an equilibrium even existed. So this short-circuits the complexity.
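A schematic form of the PDE pair just mentioned, in my notation (scalar state, drift f, running cost L, terminal condition omitted); the talk's slides carry the precise McKean–Vlasov form:

```latex
% HJB runs backward in time, Fokker-Planck-Kolmogorov runs forward
\begin{aligned}
 -\partial_t V(t,x) &= \inf_{u}\Big\{ f[x,u,\mu_t]\,\partial_x V(t,x) + L[x,u,\mu_t]\Big\}
                       + \tfrac{\sigma^2}{2}\,\partial_{xx} V(t,x),\\[2pt]
 \partial_t \mu_t(x) &= -\,\partial_x\big( f[x,u^{\circ}(t,x),\mu_t]\,\mu_t(x)\big)
                       + \tfrac{\sigma^2}{2}\,\partial_{xx}\mu_t(x),
\end{aligned}
\qquad u^{\circ}(t,x) \in \arg\inf_u\{\,\cdot\,\}.
```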
It runs around the complexity: of course this is part of the limit theory of mean field games, but it's also the crucial feature of its application. The outline of the proof — I'm short of time here — is that one restricts the Lipschitz constants so that a Banach contraction argument functions. So the key ideas are: we have a Nash equilibrium given by the solution to a stochastic control problem; we have dynamic regeneration, namely the equilibrium is dynamically pushed forwards; and there's a drastic simplification in terms of complexity. Okay — and I'll rely on the chair to give me the famous ten-, five- and three-minute flashes. Now, for many reasons one wants to go further, and this is the most exciting part — assuming the simulations work; we're through the hardest part of the talk. So here we have one of these familiar schooling pictures, where we've got a school of interacting fish — actually they're sardines — there's some sort of equilibrium, and here comes a major player coming through that you don't mess with. Now, the point is that I don't think MFG theory straight off applies in this situation, because firstly the sardines are relating to their neighbors — they're not taking a 1-over-N average; you can't imagine that they're seeing a flat law-of-large-numbers average taking place — and secondly I don't think there's a single measure against which a single kernel can be integrated to get the behavior. Now, this rather cheesy picture brings us to networks, which are ubiquitous. The point is: how are we going to deal with local effects in large-scale systems? How can we apply mean field games when we have local effects which nevertheless ripple out from each agent — locally you see your neighboring agents, but in fact you're interacting with everybody? This challenge comes up obviously in networks, which are everywhere today: power grids, epidemic models, brain networks, social networks, and conceivably crowds. So we're going to consider networks with large numbers of nodes — in principle millions or billions — and complex connections which are asymptotically dense. This is a technical point: in the theory we're using at this point you have an arbitrarily large number of neighbors locally, although those connections need not stretch over the whole network; the locally finite problem will be tackled depending on advances in the theory by those who are working on it, and on our capacity to understand that. Right, so what's a graphon? A graphon is a startlingly simple idea for dealing with complex graphs, and when I say startlingly simple, this is the image: you have a graph, you have its adjacency matrix, and you turn that into its pixel picture in the obvious way — 1/n partitions along the axes — so you get the pixel picture for a given indexing of the nodes. Now take this to the limit: as we fill in more and more elements, as we run along a sequence of graphs which could be growing — they could be nested or not — then in the limit you might conceive that you have some kind of dense image, some kind of function at the end. So, technically, a graphon is simply a bounded symmetric Lebesgue-measurable function on the unit square, displayed here.
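To make the "pixel picture" idea concrete, here is a small sketch that samples finite graphs from a fixed graphon and watches a graph statistic settle down as n grows; the uniform-attachment graphon g(x,y) = 1 − max(x,y) is the one used later in the talk, but the sampling scheme and parameters here are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x, y):
    # uniform attachment graphon
    return 1.0 - np.maximum(x, y)

def sample_graph(n):
    u = np.sort(rng.uniform(size=n))              # latent node positions in [0,1]
    p = g(u[:, None], u[None, :])                 # connection probabilities
    a = (rng.uniform(size=(n, n)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T                                # symmetric adjacency, no loops

for n in (50, 200, 800):
    a = sample_graph(n)
    # edge density should approach the integral of g over the square, i.e. 1/3
    print(n, "edge density ~", round(a.mean(), 3))
```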
In the simplest case that we deal with, the values are between zero and one, corresponding to intensities. You could imagine it taking only the values zero and one, but if the strength of the connection varies, as in many problems of interest, then you take values between zero and one, or even between minus one and one. Now, the theory of graphons was developed basically by a combinatorics group centered on Budapest together with the Microsoft research group in Seattle, and there is a marvellous exposition by one of the architects of the subject, László Lovász, "Large Networks and Graph Limits", published by the American Mathematical Society, reporting the state of the subject in 2012. The point is that the space is given a norm, the so-called cut norm, given by this definition: you integrate over the unit square, with the modulus on the outside, and take the supremum over measurable subsets M and T of the unit interval. The cut metric is then obtained by a further infimization: you have a supremum inside, and then you infimize over relabellings of the nodes. This infimum makes things difficult to handle, but in many situations there is a settled indexing of the nodes, which means we don't have to worry about it — a great relief. There is a relationship — and I should mention that the cut metric is a direct generalization of the notion of a cut metric in ordinary graph theory — and it has some amazing properties. For instance, if you want the density of triangles, which might be important in crowds if groups of three were important, you simply integrate the graphon function in this way, a triple integral; and here I should have finished with an x, an error which I'll correct next time I'm near the board. Now, in this metric — which is bounded above by the L2 metric and, subject to mild conditions, bounded below in terms of the L2 metric as well — the space is compact. This means that any infinite sequence of graphs, mapped into the unit square in this way and measured with the cut metric, has convergent subsequences, and in well-defined situations the limits will actually be unique. So suddenly the world of complex networks is mathematically under control; it has turned into a part of functional analysis. So off we go: we pass from the norm to a metric with the properties I've just described, and now we have an operator — an operator from L2 to L2; we can convolve, we have an infinitesimal generator, and we can define differential equations whose states are L2 functions on the unit interval. So now — how much time do I have left, exactly? Okay — five minutes for control theory and five for mean field games. To step through the control theory: we can actually take linear control problems. Here's the picture: here's the finite-dimensional system, here's a halfway house to infinity — step functions on the pixel elements — and then you go to the well-defined L2 limits, and one can actually solve linear control problems there, because it has now turned into classical infinite-dimensional linear control theory. So we've taken problems on sequences of arbitrarily complex networks and turned them into linear control theory.
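Here is a minimal sketch of the graphon as an operator from L2 to L2, namely (T_g f)(x) = ∫₀¹ g(x,y) f(y) dy, discretized on a uniform grid; the grid size and the test function are my own choices.

```python
import numpy as np

m = 1000
grid = (np.arange(m) + 0.5) / m

def g(x, y):
    # same uniform-attachment graphon as before
    return 1.0 - np.maximum(x, y)

G = g(grid[:, None], grid[None, :])

def T_g(f_vals):
    # quadrature approximation of the integral operator on L2[0,1]
    return G @ f_vals / m

f = np.sin(np.pi * grid)           # a test function in L2[0,1]
Tf = T_g(f)
print("(T_g f)(0.5) ~", Tf[m // 2])
```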
I'm going to step through this quickly now. We have the notion of controllability — whether you can steer from any state to any state — and we can define a Gramian in analogy with the finite-dimensional situation; here we require uniform positive definiteness for state-to-state controllability, again classical infinite-dimensional linear control theory. Then we get our control methodology, which is clearly mean-field-games driven: you start with something very complicated, you step to its limiting smooth infinite object, you solve your problem there, and then you apply the simple controller obtained at the infinite limit to your complex finite situation. And it works, in the sense that we can prove convergence: if you have a controller designed for the infinite system, and you sample it and apply it to the finite system, then you have approximations — this is the terminal-state approximation — which get better as n goes to infinity, namely as the network converges to the limit graphon. Now I'm going to step through this quickly, and I apologize for doing so. These are simulations where a smooth graphon is sampled to generate a network, and then, using a control which is computed analytically on the graphon limit and applied to the sampled complex network, one can in fact steer the states of the nodes in a very precise way — and of course this gets more precise as the number of nodes increases. And this works for LQG theory as well. So here's the smooth graphon, and here's a network of 160 nodes which is converging to the graphon in the cut-metric sense described before. The control laws are computed for the smooth limiting object, not for the complicated sampled network — when you've got 500,000 nodes, this is obviously the simpler thing to use. Now let's go to mean field games. The point, as I said, is that the flat 1-over-N averaging which appears throughout mean field games, and the single generic-agent measure, do not describe a situation where, as for instance in this picture of airport densities, you've got different characteristics at the different clusters of nodes, which then interact with each other over a very complex network. So the idea — and here is a diagrammatic representation — is that you have a collection of clusters, each containing a very large number of agents, and those large numbers of agents communicate with each other over a complex network. So we've now got two infinities running: the infinity locally, and the infinity of the network in the limit. We can then go to a direct generalization of the mean field game partial differential equation pair that you saw earlier and are familiar with from this meeting. The dynamics and the costs at the generic node alpha, in its cluster, will now be functions averaged over the graphon. The picture is that you're sitting at the agent alpha in its cluster, which has now turned into a point.
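To illustrate the design-on-the-limit idea in the simplest possible (uncontrolled, linear) setting, here is a sketch comparing finite-network averaging dynamics with the corresponding graphon-limit dynamics; the graphon, initial profile, grid sizes and step size are all my own choices, not the talk's examples.

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x, y):
    return 1.0 - np.maximum(x, y)

def simulate(W, x0, T=2.0, dt=0.01):
    # forward Euler for dx/dt = (1/size) * W x
    x = x0.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (W @ x) / len(x)
    return x

n, m = 100, 2000                                   # finite network vs fine grid
u_n = (np.arange(n) + 0.5) / n
u_m = (np.arange(m) + 0.5) / m
A = (rng.uniform(size=(n, n)) < g(u_n[:, None], u_n[None, :])).astype(float)
A = np.triu(A, 1); A = A + A.T                     # sampled graph on n nodes
G = g(u_m[:, None], u_m[None, :])                  # finely gridded graphon limit

x_fin = simulate(A, np.sin(np.pi * u_n))
x_lim = simulate(G, np.sin(np.pi * u_m))
err = np.max(np.abs(x_fin - np.interp(u_n, u_m, x_lim)))
print("sup deviation of the finite network from the graphon limit:", err)
```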
So you've now got an infinite number of agents at a point, and those communicate with the infinite number of agents at every other node. Here, if you strip away the integral over [0,1] against the graphon weighting, you've simply got our old McKean–Vlasov integral: we're integrating — scalar states for simplicity — x_alpha, the control u_alpha, and x_beta, where x_beta is integrated with respect to the measure at the node. Set beta equal to alpha, so there's just one node: if I have mu_alpha(dx_alpha) and I integrate over this free variable, I get the McKean–Vlasov integral that gave us the limit population before. But now I have another integral: I do that McKean–Vlasov integration at each node beta over the network, and the weighting between the alpha node and the beta node is given by the graphon function g(alpha, beta). So — I wonder if I dare try to use the chalk — the picture it comes down to is this: here is our unit square, here is our graphon function, a point being the intensity of the connections between the alpha and beta nodes. If I'm sitting at the alpha node, then at each point beta there sits a measure describing the behavior of the generic agents at the beta node in the infinite network, and I integrate, at alpha, over those measures — which in general will be distinct at each node. This integration gives me the dynamics, and similarly for the cost. Now, for the LQG example — okay, first: we have an existence theory, which is a generalization of the methods that give existence and uniqueness in the classical MFG case, and we've almost got a full epsilon-Nash equilibrium result. This is work with Minyi Huang; the existence and uniqueness was presented at the CDC last year, the Nash equilibrium result we hope will be accepted for the CDC this year, and we're working on the overall paper. The theorems read the same — it's like declaring Lyapunov stability or declaring optimality; there's no exciting new information here except this, for me, exciting new setting — it's a direct generalization, and it's worth mentioning that you retrieve the classical case by taking a flat graphon: set the graphon equal to a flat level and you retrieve the classical case. Right — how many minutes do I have left, including questions? Okay — in order to leave time for a question, I'll say: here's the LQG problem, here's the finite averaging, here are the individuals' costs. Incidentally, this business of following the crowd was the original problem that Minyi, Roland and I considered in our first LQG MFG paper. Now let's couple the clusters by the uniform attachment graph, whose graphon is given by 1 − max(alpha, beta). Then we have the individual agent's cost — the last two slides of the talk are here: here are the individual agent's dynamics and here is the cost function, and they look the same as before.
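In symbols — again my notation, schematic only — the graphon MFG dynamics of a generic agent at node alpha, with the McKean–Vlasov average taken at every node beta and then weighted by the graphon, as just described at the board:

```latex
dx^{\alpha}_t \;=\;
  \Big(\int_0^1 g(\alpha,\beta)
     \int_{\mathbb{R}} f\big(x^{\alpha}_t, u^{\alpha}_t, x\big)\,
        \mu^{\beta}_t(dx)\, d\beta\Big)\,dt
  \;+\; \sigma\, dw^{\alpha}_t ,
\qquad \alpha \in [0,1].
```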
But each agent is dealing with z_alpha: it is tracking z_alpha, which is in fact a weighted mean of the other agents over the network — the z_alpha here is the graphon-weighted mean of all the other agents over the network. Then the control equations look the same as they did in the classical case, except that the s_alpha equation is driven by v_alpha, which is the graphon mean over the whole network — not the mean with respect to a single measure, but a mean with respect to all of the measures of the agents to which it is connected over the network, with intensity given by the graphon — and that is the mean state process. Okay, so that's it. What we've done here is: mean field games, the introduction of the notion of a graphon, graphon mean field control, and then what I like to think is a natural generalization of classical mean field game theory to mean field game theory on graphons, with the LQG example to prove it works. Thank you.
|
Very large networks linking dynamical agents are now ubiquitous and there is significant interest in their analysis, design and control. The emergence of the graphon theory of large networks and their infinite limits has recently enabled the formulation of a theory of the centralized control of dynamical systems distributed on asymptotically infinite networks [Gao and Caines, IEEE CDC 2017, 2018]. Furthermore, the study of the decentralized control of such systems has been initiated in [Caines and Huang, IEEE CDC 2018], where Graphon Mean Field Games (GMFG) and the GMFG equations are formulated for the analysis of non-cooperative dynamical games on unbounded networks. In this talk the GMFG framework will first be presented, followed by the basic existence and uniqueness results for the GMFG equations, together with an epsilon-Nash theorem relating the infinite population equilibria on infinite networks to finite population equilibria on finite networks.
|
10.5446/54800 (DOI)
|
All right. Thanks again for coming. If you were at my previous talk, you might have asked why we care about all this. There was a lot of combinatorics, there were links and link invariants, and then there was a lot of homological algebra. But if you're actually interested in algebraic geometry — why do you care? One of my goals today is to explain why you might care, and why these abstract problems about complexes of bimodules actually make sense and compute something you might be interested in. So I will start really elementary. Today's topic is braid varieties, and I will start with the matrix which has ones on the diagonal everywhere except for the two-by-two block at position i, i+1, where I put 0, 1, 1, z. So this is an n-by-n matrix depending on one parameter z, and I will call it a braid matrix. Now, an exercise — there is some kind of echo; Ray, maybe you need to mute — anyway, we want to associate these matrices to crossings in a braid, so we need to check a braid relation. The relation B_i B_{i+1} B_i = B_{i+1} B_i B_{i+1} is not satisfied on the nose; it is satisfied up to a change of variables. Specifically, B_i(z_1) B_{i+1}(z_2) B_i(z_3) — with three different variables z_1, z_2, z_3 — is equal to the other product, B_{i+1}(z_3) B_i(z_2 - z_1 z_3) B_{i+1}(z_1). This is just a computation with three-by-three matrices which you can do as an exercise. So, as a result, we can compare the product corresponding to i, i+1, i with the product corresponding to i+1, i, i+1 by this explicit change of variables. Now, if I have a positive braid beta, I write it as a product of crossings sigma_{i_1} up to sigma_{i_r}, and I replace each crossing by one of these matrices, all with different parameters. So for a braid with r crossings I get r different variables z_1 through z_r, and I get one giant matrix depending on all these variables. That's it. And the braid relation tells us that this giant matrix, up to a change of variables, up to reparametrization, does not depend on the way we write the braid: as a function of the z's, up to a change of variables, it is the same. So it makes sense to define braid varieties. In the simplest form, we look at the locus of (z_1, ..., z_r) where this giant matrix B_beta(z_1, ..., z_r) is upper triangular. As explicit as it could be: it is a locus in affine space with coordinates z_1 through z_r, cut out by n-choose-2 equations requiring that the giant matrix is upper triangular — a very explicit affine variety in C^r. And, as I said, by the braid relation the varieties for equivalent braids — braids related by braid moves — are isomorphic. Maybe let me repeat that all of this is of course very specific to positive braids. One can discuss what to do for negative braids, but it would be much more complicated; so far everything is very positive, and the braid relations are fine. So let me give a concrete example. The braid is one crossing to the fourth power. I don't want to draw this.
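Before the two-strand example below, here is a quick symbolic check of the braid relation as just stated, for n = 3; this is only a verification sketch of the exercise, with my own variable names.

```python
import sympy as sp

# Check B1(z1) B2(z2) B1(z3) = B2(z3) B1(z2 - z1*z3) B2(z1) for 3x3 braid matrices.
z1, z2, z3 = sp.symbols('z1 z2 z3')

def B(i, z, n=3):
    m = sp.eye(n)
    m[i - 1, i - 1], m[i - 1, i] = 0, 1
    m[i, i - 1], m[i, i] = 1, z
    return m

lhs = B(1, z1) * B(2, z2) * B(1, z3)
rhs = B(2, z3) * B(1, z2 - z1 * z3) * B(2, z1)
print((lhs - rhs).expand())        # the zero matrix
```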
So I will have four crossings. It is a two-strand braid, so these will be two-by-two matrices, and I have a different parameter for each crossing: z_1, z_2, z_3, z_4. So I get (0 1 / 1 z_1), (0 1 / 1 z_2) and so on. I multiply all these matrices; I get some stuff here, some stuff here and some stuff here, and in the lower left corner I get z_1 + z_3 + z_1 z_2 z_3 — that's an explicit computation. So my variety is cut out in C^4, with coordinates z_1, z_2, z_3, z_4, by the single equation z_1 + z_3 + z_1 z_2 z_3 = 0. It's kind of curious — and we'll see a bit of this in general — that this does not depend on z_4: z_4 appears in other entries of the product, but not in this lower left corner. That's fine, it just doesn't depend on z_4. We can actually solve this equation by factoring out z_3: it becomes z_1 + z_3(1 + z_1 z_2) = 0. Then we have two cases. Either 1 + z_1 z_2 = 0, in which case z_1 = 0 by the equation — but that is a contradiction, because then 1 + z_1 z_2 would equal 1, so this case is impossible. So we must have 1 + z_1 z_2 ≠ 0, and then we just solve for z_3. So our variety is actually just the complement of a hyperbola: it is the locus 1 + z_1 z_2 ≠ 0, an open subset of the plane C^2; z_3 is completely determined by z_1 and z_2, and z_4 is a free variable which doesn't appear in the equation. So we have this X(beta) — the complement of the hyperbola — times an affine line with coordinate z_4. It might not be completely obvious from the initial equation, but it is completely obvious from the description down below that this variety is smooth, because it is an open subset of affine space. It is three-dimensional, and obviously not compact — so it is not projective, it is quasi-projective, and it is complicated — but we can study it very, very explicitly. So maybe let me pause and ask for questions: is it clear what the definition of X(beta) is in general? Is it clear how to multiply the matrices? Any questions about this? Okay.
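Before continuing with the structure of this example, here is a quick symbolic check of the computation above: multiplying the four braid matrices for sigma^4 and reading off the lower left entry of the product (variable names mine).

```python
import sympy as sp

z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')
B = lambda z: sp.Matrix([[0, 1], [1, z]])

prod = B(z1) * B(z2) * B(z3) * B(z4)
entry = sp.expand(prod[1, 0])
print(entry)                          # z1 + z3 + z1*z2*z3   (no z4, as claimed)
print(sp.factor(entry - z1))          # z3*(z1*z2 + 1), the factorization used above
```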
So this variety X(beta) — and this particular one — is the main object of today's lecture: how to associate something interesting to a positive braid. A note for people who have seen this — and I'm sure some in the audience have: the locus 1 + z_1 z_2 ≠ 0 is not a completely random open subset of C^2. In fact, this is a cluster variety of type A_1, with one cluster variable and one frozen variable: the frozen variable is 1 + z_1 z_2, and z_1 and z_2 correspond to the two clusters of this type A_1 structure. So if you've seen cluster varieties, you might recognize this in a different incarnation, and I'll talk about it a bit later. But there is already some interesting structure here, and one geometric consequence of being a cluster variety is that we have tori inside: we have this complement of the hyperbola, and I can also remove the vertical line and say that I have a chart U_1, where z_1 ≠ 0 and 1 + z_1 z_2 ≠ 0, which is just a torus. And I have another chart U_2, where z_2 ≠ 0, and again it is a torus in both cases, because in U_2 I can express z_1 as a function of z_2 and 1 + z_1 z_2, and in U_1 I can express z_2 as a function of z_1 and 1 + z_1 z_2. Roughly speaking, being a cluster variety means that the transition function between these charts is very specific. Another interesting thing, not so relevant here but very relevant in a second, is that there is a torus action which scales z_1 by t and z_2 by t inverse, so it doesn't change this equation — in fact it scales z_3 by t, and maybe it scales z_4 as well, but we don't care about that. So there is a nontrivial torus action on this locus, with no fixed points; again it is non-compact and there are lots of subtleties, but the hypersurface we remove from the plane is torus-equivariant. Okay. So what do we know in general? In general this was studied under different names for a while. One concrete result, which we proved recently with Roger Casals, my brother and José Simental — I guess José is somewhere in the audience — is that for a reasonable class of braids this variety is very nice. We assume that our braid contains a half twist. The half twist Delta is the positive lift of the permutation sending every element i to n + 1 − i, and we require that our braid contains this Delta as a suffix: beta is some positive braid gamma followed by Delta at the end. Things are slightly easier if we contain a half twist. On two strands the half twist is just one copy of sigma, so our example contains a half twist. Okay. The first statement is that this variety could actually be empty: if beta is just equal to Delta, the variety is empty. This is really easy to see, because for example on two strands the matrix (0 1 / 1 z) is never upper triangular — you have a 1 in the corner. So the variety is just empty. In fact we have a criterion for when it is nonempty: the variety is nonempty whenever gamma contains another copy of Delta as a subword — not necessarily a suffix or prefix, but some letters, some generators inside gamma, form another copy of Delta. In that case the variety is actually smooth, of the expected dimension, namely the length of beta minus n choose 2, or equivalently the length of gamma. This is of course the expected dimension, because we have one z variable for each crossing, and we always have a fixed number, n choose 2, of equations, which are just the lower-triangular entries of the matrix. So in this case the variety is nice and smooth, as we saw in the example above. Another thing, which is much more complicated but which I want to state very clearly, is that this variety itself is an invariant of a link. Which link? It is an invariant of the closure of gamma Delta inverse. Gamma is a positive braid; gamma Delta inverse is not necessarily positive, but we can still talk about the closure of this not-necessarily-positive braid. And the claim is that X(beta) is an invariant of the closure of that braid under conjugation and positive stabilization.
You can think of it as gamma Delta inverse, or a slightly easier way to say it is that this is beta Delta^{-2}: you take beta and remove a full twist. If I have two different braids such that beta Delta^{-2} and beta' Delta^{-2} close to the same link — with the two closures related by conjugation and positive stabilization; we are not allowing negative stabilization — then the varieties are the same. In fact they are not exactly isomorphic, but isomorphic up to a factor of C to some power and C* to some power: if I know that beta Delta^{-2} is equivalent to beta' Delta^{-2} — possibly on a different number of strands — then X(beta) times C to some power times C* to some power is isomorphic to X(beta') times C to some power times C* to some power. I actually know all these powers, I just don't want to write them. So this is very explicit, and somehow it is much more interesting than just talking about homology: the actual algebraic variety is an invariant of the link, with some restrictions on the allowed operations. Maybe if I have time I'll explain an idea of why this might be true — but maybe I won't; I will give some examples for sure. Another thing which might be more interesting for you is that X(beta) has a smooth compactification. The variety is smooth but very non-compact — you remove all these divisors — yet depending on the braid word for beta, whenever the variety is nonempty you can always write down a very natural and in some sense canonical compactification, which depends on the braid word for beta. The complement of X(beta) in this compactification is a nice normal crossings divisor, and the components — the strata of this compactification — correspond to subwords of gamma containing the full twist. Very concretely, in this example: we have the locus 1 + z_1 z_2 ≠ 0 in C^2; how does it compactify? It compactifies to P^1 × P^1, and the complement of our variety in P^1 × P^1 is the hyperbola, which we have to put back in, plus this line at infinity and that line at infinity, each of which intersects the hyperbola in one point. So if we count strata of this compactification: we add the hyperbola and two lines, and we add three points — the intersections of the hyperbola with the lines and the intersection of the two lines. In this case my beta was sigma^4, so my gamma was sigma^3, and there are three two-letter subwords of gamma containing sigma — essentially nonempty — which correspond to the three strata of dimension one; and then there are three one-letter subwords of gamma, which also contain Delta, corresponding to the three points. So there is a combinatorial way, which might not be so important for many people here, to describe the strata of this compactification, and it will be relevant for us very soon. For people interested in combinatorics: this notion of subwords containing Delta is related to the so-called subword complexes defined by Knutson and Miller, and their collaborators studied a lot of interesting properties of the complex of subwords containing Delta.
So, from the perspective of this talk, this is just a CW complex which governs the strata of the compactification of X(beta). Okay, that was the first theorem, and the next theorem asks how this is related to the knot homologies we discussed in the previous lecture. This in some sense predates the braid varieties, and I think it was proved by many people in slightly different terms — it starts, I guess, from the work of Webster and Williamson, and was rephrased more recently by Anton Mellit, Trinh and other people. If you have this variety X(beta), you always have an action of the torus (C*)^{n-1}, where n is the number of strands. We can then look at the equivariant cohomology of X(beta) under the action of this torus. It is very large, because the variety is non-compact, and it carries a very nontrivial weight filtration. So we take the associated graded of the equivariant cohomology with respect to the weight filtration, and this is a bigraded vector space: we have the homological grading and the weight grading. The claim that all these people make is that, up to some regrading, this is the same as the top Hochschild homology that we defined yesterday: for a braid we construct a complex of bimodules, compute Hochschild homology of that complex — here taking just HH^n of each term of the complex of bimodules — and this recovers the equivariant cohomology of the variety. There is a separate result, which we proved with Hogancamp, Mellit and Nakagane, that this top-degree piece, HH^n of beta, is the same as the bottom degree — the bottom Hochschild degree of the Khovanov–Rozansky homology of beta Delta^{-2}. So if you remove the full twist, you exchange the top Hochschild homology for the bottom one. This beta Delta^{-2} is the gamma Delta inverse we already discussed — and note that this bottom homology, as we discussed last time, is invariant under conjugation and positive stabilization. So, if we believe this theorem, we expect at least the cohomology of X(beta) to be invariant under conjugation of the braid and positive stabilization. Part (b) of the theorem says that a much stronger statement is true: not only the cohomology or the equivariant cohomology, but the variety itself — up to the factor C to some power, which doesn't change the cohomology, and C* to some power, which changes the ordinary cohomology but not the equivariant cohomology — is invariant. And because the variety is the same, the homology is the same. As a conclusion, this theorem can be read in two different ways. One way: you have an explicit geometric interpretation of link homology for positive braids, or at least of some part of it — it is just the cohomology of a very explicit braid variety. You can also read it in the other direction: we don't care about Soergel bimodules or complexes or anything like that, we are really interested in these varieties where the product of matrices is upper triangular for some reason.
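Writing T_β for the complex of bimodules attached to β in the previous lecture, the two statements above combine schematically as follows; all internal gradings and shifts are suppressed here, and the precise regrading is in the papers cited, so this is only a mnemonic:

```latex
\operatorname{gr}^{W} H^{*}_{(\mathbb{C}^{*})^{n-1}}\!\big(X(\beta)\big)
 \;\cong\; \mathrm{HH}^{n}\!\big(T_{\beta}\big)
 \;\cong\; \mathrm{HH}^{0}\!\big(T_{\beta\Delta^{-2}}\big).
```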
Then we can ask: what is the homology of this variety? It turns out that the answer is given by Khovanov–Rozansky homology, and all the combinatorial techniques that I tried to outline last time actually help compute the homology of X(beta). For example, for torus knots, as I explained last time and will explain a bit later, we can by now compute Khovanov–Rozansky link homology. So this gives the homology of very nontrivial algebraic varieties, which have different names and different incarnations — in some sense you take any positive braid which closes to the torus knot, the variety is essentially the same by the previous theorem, its homology is the same, and this homology is very nontrivial and given by the combinatorics I discussed last time. So that is roughly the idea. I will explain the proof of this theorem in a second, but let me ask for questions here — yeah, this is a good point for questions before we go further. Okay. A couple of remarks. One remark is that this variety can actually be stratified by strata of the form C to some power times C* to some power, as before. So you can ask about the Hodge filtration: you have a mixed Hodge structure on the cohomology — how does it behave? The Hodge filtration is actually easy: you always have just (p,p) pieces. But the weight filtration is really nontrivial, and that's what is interesting. If some people in the audience have seen something like the P=W conjecture: this is kind of the W side of that story in this particular setting. This is some weird version of a character variety, and we take the cohomology of this weird character variety, depending on the braid, with its weight filtration; the Hodge filtration is easy, but the weight filtration is interesting. Another remark, which some people might ask about: very recently — a couple of weeks ago, actually — Minh-Tâm Trinh posted a paper on the arXiv where he found a way to compute all the Hochschild degrees of the homology using very similar techniques. You have a close analogue of this X(beta), and then, roughly speaking, you take a fiber product with the Springer resolution and do some form of Springer theory to recover all the HH^i. So you get the full Khovanov–Rozansky homology from the construction of X(beta), or from really similar constructions. There are lots of interesting things there — he has another construction related to unipotent matrices and things like that — but I guess I won't talk about it. Maybe, before going to the proof, let me actually compute the homology of this variety. I have the complement of the hyperbola in C^2; how do I compute its homology? The easiest way is to use Alexander duality — let me write it as a side remark before going there. We have X(beta), and by Alexander duality the cohomology of {1 + z_1 z_2 ≠ 0} is the same thing as the homology, in degree 4 minus 1 minus the given degree — maybe with compact support — of the locus {1 + z_1 z_2 = 0}. And this is just C*: the hyperbola is isomorphic to C*, and we know its homology is two-dimensional, in two neighboring degrees.
Then, by Alexander duality, we get two homology classes from there, plus one extra piece of homology coming from the ambient space. The same thing works equivariantly — I just don't want to write the answer — but if you do this properly, then non-equivariantly we get H^0 = H^1 = H^2 = C and all other cohomology of this X(beta) vanishes; equivariantly you get something slightly more interesting, which I guess we'll discuss a bit later. In this case it is something very concrete; in slightly more general cases computing homology via Alexander duality is much harder, because the hypersurface is a very complicated algebraic variety — and we'll see some examples of that. Okay, so let me spend some time talking about the proof of Theorem 2. We need to compare this variety with something we saw last time — so what did we actually see last time? Last time we saw the bimodules B_i, which are R tensor R over R^{s_i}, bimodules over R, defined very formally. How do you think about them geometrically? Geometrically they came from — and in fact Soergel invented these bimodules by thinking about — Bott–Samelson varieties; they are called Bott–Samelson bimodules. So what is a Bott–Samelson variety? It is the variety of pairs of flags (F, F'), where F and F' are complete flags which coincide except in one place: F_j = F'_j for j ≠ i. We can compute its cohomology: we have line bundles on one side, L_j = F_j / F_{j-1}, line bundles on the other side, L'_j = F'_j / F'_{j-1}, and their Chern classes x_j and x'_j. That's already something we can see: the x's are the Chern classes of the line bundles on the left. All these line bundles are actually the same except for j = i and i+1, which means x_j = x'_j for j ≠ i, i+1. Now consider F_{i+1} / F_{i-1}: the two flags keep F_i, which need not equal F'_i, but F_{i+1} = F'_{i+1} and F_{i-1} = F'_{i-1}, so this quotient is the same for both flags, and it is filtered both by L_i, L_{i+1} and by L'_i, L'_{i+1}. In particular, the Chern classes of this rank-two bundle are symmetric functions in x_i, x_{i+1} and also symmetric functions in x'_i, x'_{i+1}; so the symmetric functions in x_i, x_{i+1} and in x'_i, x'_{i+1} agree — they are just the Chern classes of F_{i+1} / F_{i-1}. So, in fact, this bimodule B_i is very closely related to the cohomology of this BS_i.
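In symbols, and as a sketch only — with gradings suppressed and the extra relations coming from the flag variety itself left out, as the talk notes the identification is "not exact but very close":

```latex
% R = C[x_1,...,x_n],  R^{s_i} = invariants under swapping x_i and x_{i+1}
B_i \;=\; R \otimes_{R^{s_i}} R
 \;\cong\; \mathbb{C}[x_1,\dots,x_n,\,x'_1,\dots,x'_n]\Big/
 \Big( x_j - x'_j \ (j\neq i,i{+}1),\;
       e_k(x_i,x_{i+1}) - e_k(x'_i,x'_{i+1}),\ k=1,2 \Big),
```

and the cohomology of BS_i is this ring modulo the usual relations already present for complete flag varieties.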
Now, we had tensor products — we tensored two-term complexes — but let me first tensor the B_i themselves. On the algebraic side we have B_{i_1} tensor B_{i_2}, where we tensor over R, and geometrically this corresponds to a more complicated Bott–Samelson variety: a sequence of flags F_1, F_2, ..., F_{r+1} — one more flag than bimodules — where each pair of neighbors satisfies the Bott–Samelson condition, that the two flags coincide except in one place: the first two flags coincide except in place i_1, the second pair except in place i_2, and so on. Abstractly you can think of it as an iterated fiber product of the simple Bott–Samelson varieties, but you can just say it is a sequence of flags with this condition. We can understand these: one can compute the cohomology of this variety — they were studied by many people, Bott and Samelson and many others — compare the cohomology explicitly, and see that it is the tensor product of the bimodules. The x_i and x'_i essentially generate the cohomology, and these are essentially the only relations, plus the relations on complete flags that are always there. Now, we were interested in the two-term complexes T_i, the cones of B_i → R, and again this was very formal — what does that map actually mean? R is the cohomology of the flag variety — again not exactly, but very close — and you have a map from the flag variety to BS_i: the diagonal map, where the space of complete flags embeds into the Bott–Samelson space BS_i. Then you have a map in cohomology which goes backwards, and this corresponds, up to some subtleties I don't want to discuss, to the map from B_i to R. So we take the cone of this map. What does it mean geometrically to take the cone of a map coming from a closed embedding? We can just take the complement. Now we say that F and F' are in position s_i if F_j = F'_j for j ≠ i — the usual Bott–Samelson condition from before — and we remove the diagonal, requiring F_i ≠ F'_i. We can call this an open Bott–Samelson variety; there are many different names for it. Again, if you take the tensor product of these T's, what happens is that you have a sequence of flags F_1, F_2, ..., F_{r+1} satisfying the following condition: the first two are in position s_{i_1}, meaning all subspaces are the same except at one place, where they must be different because we removed the diagonal; the second and third again agree except at place i_2, where they must differ; and so on. This is the open Bott–Samelson variety we are talking about. This variety actually appeared long before all this discussion: it appears in many works in geometric representation theory, most notably in the work of Broué and Michel, and there is a beautiful paper of Deligne on braid group actions on categories, and many other people. It plays a very important role in Deligne–Lusztig theory, because one considers similar sequences of flags there — so if you are interested in geometric representation theory or Deligne–Lusztig theory, you might have seen this space in some form. One can define this space for any braid, and it is a very useful exercise — done, for example, in that paper of Deligne — to check that the braid relations are satisfied; so let me put that as a remark.
So this variety OBS(beta) is invariant under braid moves: for the same braid, but for different presentations of it as a product of generators, you get isomorphic varieties — in fact canonically isomorphic in some sophisticated sense — although these Bott–Samelson varieties themselves are definitely not a link invariant. So you can think of this as very similar to what we had before, in a very precise sense: the Bott–Samelson variety is a projective variety, because it is cut out by closed conditions in a product of flag varieties, but the open OBS is something smaller — in fact an open subset of BS(beta). So you can think of the Bott–Samelson variety as a compactification of this variety OBS, or whatever Deligne and Broué–Michel called it. You have a variety which is a braid invariant, but its compactification is definitely not — very similar to what we had before for braid varieties. So, just to summarize, let me briefly discuss the complex of T's. Expressing the product of T's as a complex of products of B's is a kind of inclusion-exclusion formula for the open Bott–Samelson variety inside the closed one: for example, in the first place we either remove the diagonal or we don't; in the second place we either remove the diagonal or we don't; and so on. So you can imagine a giant complex of closed Bott–Samelson varieties which, in some sense, resolves the open Bott–Samelson variety — meaning, for instance, that the complement of the open Bott–Samelson variety in the closed one is again paved by closed Bott–Samelson varieties, as are all their intersections. Inductively we therefore understand the structure of the compactification. That is a very rough sketch of the idea. Then you can ask: okay, we know all this — how is it related to the variety X(beta)? A very useful lemma says that X(beta) is the subset of the open Bott–Samelson variety where the first and the last flags are required to be standard. There are many things one can do here, and it can be confusing: there are different versions, which all correspond to different notions of closing the braid. You can require, for example, that the first flag equal the last; or that the first equal the last up to some twist; and the Deligne–Lusztig variety would correspond to requiring the first flag to equal the last one acted on by the Frobenius morphism. Here we require not only that the first equals the last, but that both are standard. Then parametrizing Bruhat cells gives you these matrices, and it gives the relation between X(beta) and OBS. The embedding of OBS(beta) into BS(beta) then corresponds to the compactification of X(beta) above: X(beta) is the piece of the open Bott–Samelson variety where the first and the last flags are standard, and P^1 × P^1 is just the closed Bott–Samelson variety with the same requirement. And then, how do we compute homology with this inclusion-exclusion formula? We are just saying that to compute the homology of the complement of the hyperbola,
instead of using tools like Alexander duality, we use this stratification. So we take the cohomology of P^1 × P^1, and then we have the cohomology of the three strata — the hyperbola and the two lines at infinity — and of the three points where they pairwise intersect. Maybe I should write this down: I have three points; let us take cohomology. So you have the cohomology of the hyperbola, the cohomology of one P^1 at infinity and of the other P^1 at infinity, and then you have the cohomology of P^1 × P^1, and you have the restriction maps given by these inclusions, and restriction maps down to the points. And then the claim is that the cohomology of my open space is actually sitting here, in degree zero: it is given by the homology of this complex, and the complex is acyclic everywhere else. And you can take homology or cohomology — they will be dual to each other — and you can do everything equivariantly, because the stratification is equivariant, and everything makes sense. So this is the rough idea. And then the claim is that this kind of giant complex corresponding to the stratification precisely matches the complex computing the link homology. This is an idea which goes back to Webster and Williamson for sure, and to Soergel in some sense, and many other people: you can compute the homology in this way, it just looks complicated. It is complicated, but you can certainly do it. So I think that is all I want to say about the idea of the proof. You can think of X(β) — and many people like to think about X(β) in terms of matrices, with the product required to be triangular — as saying that in fact we have a bunch of flags with the first and the last standard. So, again, let me pause and ask for any questions. Okay. And so then, with all this, let me actually talk about some examples, because — what are the examples? I gave you just one; how does this variety actually look? And I think the most beautiful example was shown very recently, in slightly different terms but essentially equivalent to what I will say, by Galashin and Lam. They observed that torus links correspond to open positroid varieties in the Grassmannian — open positroid strata in the Grassmannian G(k,n). So let me explain what this is. The Grassmannian G(k,n) parameterizes k-dimensional planes in n-dimensional space. If you choose a basis of such a plane, you get a k-by-n matrix of rank k, up to row operations, as usual. Given such a matrix, you can repeat the columns periodically: here I have the columns v_1 through v_n, and I require that v_{i+n} is equal to v_i, so I have an infinite sequence, an infinite matrix if you want. And then this positroid cell, the open positroid stratum, corresponds to the condition that the determinant of the k-by-k matrix (v_i, v_{i+1}, ..., v_{i+k-1}) — a k-by-k consecutive minor — is nonzero; all these minors should be nonzero. And again, we quotient by row operations, and the condition is well posed: all these minors are unchanged, or multiplied by nonzero scalars, under row operations. So this is a well-defined subset, and it was studied by many people, starting with Knutson, Lam and Speyer, and many other people in the cluster algebra community. And the first claim is that, up to C to some power and C* to some power — which I again do not want to write explicitly, because that would take a while — the braid variety for the torus link T(k,n) is exactly this open positroid variety, which I will call P_{k,n}.
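So, in symbols — with the minors written exactly as just described:
\[
P_{k,n} \;=\; \bigl\{\, V \in \mathrm{Gr}(k,n) \;:\; \Delta_{i,\,i+1,\,\dots,\,i+k-1}(V) \neq 0 \ \text{ for all } i, \text{ indices modulo } n \,\bigr\},
\]
where \(\Delta_I\) denotes the maximal minor on the columns I of any k-by-n matrix representing V; which representative we take does not matter, since row operations only rescale the minors.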
So P_{k,n} is this open subset in the Grassmannian, and the identification with the braid variety holds up to C to some power and C* to some power. And this actually depends on how you draw the torus link: you can draw it as a braid on k strands, on n strands, or on k+n strands. By the theorem that I mentioned, all these braid varieties are isomorphic, up to — again — C to some power and C* to some power. But you can also draw it as a braid on k+n strands in a particular way, and that is, I guess, the closest to P_{k,n}. So one way to draw it, which is most relevant here, is to say that we have k strands like this and n strands like that, and this closes up to the (k,n) torus link; the braid variety for this braid is isomorphic to P_{k,n} without any C*'s, but with some C's. So, concretely: we have our friend, { 1 + z_1 z_2 ≠ 0 }. How is it related to any positroid? Well, we have T(2,4). In the Grassmannian G(2,4) we have this open subset P_{2,4}, and the claim is that this open subset is actually { 1 + z_1 z_2 ≠ 0 } times (C*)^2. And if you want the braid variety, you need an extra C here, because we had an extra C in the braid variety. So, in this sense, the torus link T(2,4) and its braid variety correspond to the positroid variety P_{2,4}. This is very easy to see, but maybe in the interest of time I do not want to do it; it is one of the exercises, so if you are interested, please do this exercise and come to the discussion session — there is also a sketch written out a bit below. Okay. So, again, maybe the most important point here — one of the most important points of this paper of Galashin and Lam — is that people had been trying to compute the cohomology of these positroid varieties for a long while, using various methods, using cluster algebras, using recursions provided by cluster algebras, and it is just hard; it is a genuinely hard problem. What they observed is that instead you can compare them to these braid varieties, or their cousins, and compute the cohomology, and the equivariant cohomology, using the machinery of link homology. And in link homology there are recursions — of Elias, Hogancamp and Mellit, which I mentioned last time — that actually compute this thing for you, and compare it to Catalan numbers and other things. So even if you are not interested in the formal stuff that I discussed last time, you might be interested in, for example, computing the cohomology of these very specific algebraic varieties, and it turns out that to compute it you might need all this abstract combinatorial and homological algebra machinery. I think it is a beautiful and very interesting question to understand this directly, without going through these recursions of Hogancamp and Mellit, here in the setting of positroid varieties, or maybe braid varieties in general. And so, for example, there is one concrete application from last time, which maybe I will write here, comparing with Galashin-Lam; it uses the results from last time. Does anyone have any questions? So torus knots correspond to positroid varieties; I don't know — that is a good question, what iterated torus knots are in this setting. I don't know, and there are lots of questions; this is all just being explored — all these connections and explicit equations, and this work of Galashin and Lam, it is all from last year, essentially from this year. So there are open things here.
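Coming back to the exercise about P_{2,4} from a moment ago — here is a sketch of one way to do it, in one affine chart and with one choice of signs in the change of variables, so the details may differ from the slides. Since \(\Delta_{12}\neq 0\), every point of P_{2,4} has a unique representative
\[
\begin{pmatrix} 1 & 0 & a & b \\ 0 & 1 & c & d \end{pmatrix},
\qquad
\Delta_{12}=1,\quad \Delta_{23}=-a,\quad \Delta_{34}=ad-bc,\quad \Delta_{41}=-d,
\]
so the conditions are \(a\neq 0\), \(d\neq 0\), \(ad-bc\neq 0\). Writing \(b=a z_1\) and \(c=-d z_2\), which is possible because a and d are invertible, the last condition becomes \(ad\,(1+z_1 z_2)\neq 0\), and therefore
\[
P_{2,4} \;\cong\; \{\, 1 + z_1 z_2 \neq 0 \,\} \times (\mathbb{C}^*)^2,
\]
with the \((\mathbb{C}^*)^2\) given by the coordinates a and d — which is the claim from above.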
So what I was saying is that, using these results of Hogancamp and Mellit — let me advertise them one more time — if gcd(k,n) is equal to one, the answer is as concrete as possible: you even have an explicit combinatorial formula for it. But explaining this without Hogancamp-Mellit is, I think, an open problem, because there is no cell decomposition for this variety, nothing like that — it is just a complicated open, non-compact algebraic variety — and it turns out that its cohomology is even, under some assumptions. So maybe it retracts onto something with an even paving. Anyway, the second example is also related to some well-known varieties in geometric representation theory. If you have a pair of permutations u, w in S_n, with w greater than u in Bruhat order, you can form the following braid: you take the positive braid lift of w, and the positive braid lift of u^{-1} w_0 — where w_0 is the longest element of S_n, which is essentially Δ — and then you add an extra Δ. Anyway, there is some explicit braid that you can cook up from w and u, and it turns out that this braid variety is the open Richardson variety for w and u, an important subset of the flag variety which was studied by many people. And another thing, which I guess is related to this Richardson story, is that if w and u are as before — w greater than u in Bruhat order again — and in addition w satisfies some technical condition of being Grassmannian, then you get all positroid varieties. So Knutson, Lam and Speyer defined the whole stratification of the Grassmannian, where this P_{k,n} is the top open stratum, but there are lower strata, and all these lower strata also correspond to some smaller braid varieties of this type. And it is known that they are isomorphic to open Richardson varieties. So this is also fine, but it is interesting. And what we recently proved, again with Roger Casals and coauthors, is that you can realize the same positroid variety in many different ways: up to C to some power and C* to some power, we have four different braids — some of them on n strands, some of them on k strands — and all of them give isomorphic varieties up to C to some power and C* to some power. This is one of them, but we have many more, which look different. So again, I think it is an open question, for example, which Richardson varieties are isomorphic to each other in general, possibly on different numbers of strands. This is one indication of why this might be interesting. Anyway, I have tried to convince you that there are lots of interesting varieties — most notably positroid varieties and the lower positroid strata — that are related to braid varieties, so it might make sense to study all this. And the last thing I want to mention is a theorem, also from last year, by Galashin and Lam, that X(β), under the assumption that β Δ^{-2} is a positive braid, has the structure of a cluster variety — or maybe I have to say an upper cluster variety, I am not 100% sure — again up to C to some power and C* to some power. And so, in fact, all the varieties we have seen here — in particular the positroid varieties — are cluster varieties, by another work of Galashin and Lam, and this variety { 1 + z_1 z_2 ≠ 0 }, as I said, is a cluster variety. So there is a lot of interesting structure here, which I am not an expert in and certainly do not fully understand: you have clusters, you have interesting charts on this variety, you have lots of other interesting structures.
And I think a very important question, which I certainly want to answer, is: what does this tell us about cohomology? What kind of structure do we get in cohomology, and what does it tell us about the link homology, which is what I, for example, am interested in? One example I will explain next time; let me just say one sentence: any cluster variety has a canonical two-form, and from this form we get an interesting class, an interesting operator, on the cohomology. I will do the construction next time. And really thinking about the construction of this form, and about its different constructions, helps a lot in understanding this operator and various other operations in link homology. As I said, there is a lot to explore; it is a very active and interesting subject, and I think I will stop here for today. Does anyone have any questions? Yeah. Okay, great. Yes — the cohomology of Bott-Samelson varieties is related to the complexes of bimodules; to do it honestly you need to work equivariantly, and I am just being lazy and do not want to explain that, but if you use the proper equivariance — proper actions of the Borels and of G and of the torus on these flag varieties and Bott-Samelsons — you can recover the bimodules. And I have to say — I did not say it, but some people might ask — you have this variety for positive braids; can you do anything for negative braids? It was kind of a big insight of Rouquier that you do not have a variety for negative braids, but you can still construct a complex of constructible sheaves on the flag variety. This thing is some kind of correspondence between the flag variety and itself: a projection to the first flag, a projection to the last flag, and the correspondence gives you a constructible sheaf on Flags × Flags. Rouquier explained that the formal stuff we did yesterday can be thought of as computations with constructible sheaves on Flags × Flags, so in this sense you can do something for any braid, except that it will not be given by an explicit correspondence, and I do not know how to think about it geometrically. I would love to know, but the best you can hope for is that at least there is always a complex of constructible sheaves on Flags × Flags. And that is, I think, an important remark. What if we consider a quiver, and K-theory? I don't know — that's an awesome question; I never thought about it, but that's an awesome question. Yeah. Does quantization of the cluster variety play any role? I don't know. This two-form that I mentioned at the end is probably responsible for the quantization, but again I am not an expert, and maybe my brother, who is in the audience, can say more. I don't know. Yeah. Yes — I think they should be, and maybe it is somewhere in the work of Lusztig. For general Coxeter groups I don't know, but for general Weyl groups I think so. And the reason is that you can think of this matrix as follows: you have your Bruhat cell, which is something like B s_i B — you can just write it, B on the left and then this braid matrix B_i(z), for some z. So the claim is that anything in B s_i B can be written in this form.
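Just to make this concrete in the smallest case — and here I am fixing one convention for the braid matrix, which may well differ from the one on the slides — take G = GL_2 and \(B_1(z) = \begin{pmatrix} 0 & 1 \\ 1 & z \end{pmatrix}\). Then any g with lower-left entry \(c \neq 0\), i.e. any element of B s_1 B, factors uniquely as an upper-triangular matrix times a braid matrix:
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} -\det(g)/c & a \\ 0 & c \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 1 & d/c \end{pmatrix},
\]
and the triangular factor is invertible exactly because \(\det(g) \neq 0\).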
And so this works in general: you have a simple reflection, you have the associated Bruhat cell, and then you can move the B to the left and see what is left, so you always have some matrix. I have seen some work of Lusztig — I do not remember the reference — where this is discussed, and I think on general grounds you know there is some analogue of this relation. I do not think anyone has ever written the explicit change of variables aside from type A, but I think it is possible, and again I think there is some paper of Lusztig where this is discussed: it always exists by the general machinery of Bruhat cells. Writing this equation explicitly in other types is an excellent question, but I do not think it has been done, at least not in the explicit way I presented here. The flag variety version is fine, though: if you do not want to think about that, this open Bott-Samelson variety certainly makes sense for all types, for all Weyl groups, and it was studied by all these people in all types. Rewriting it in terms of matrices requires some work, but again that can probably be done — I do not know whether it has been done already by Lusztig or others. Is there anything corresponding to tangles? There is something corresponding to braids; for a tangle, or a partial closure, I don't know. I don't know. The definition of Khovanov-Rozansky homology somehow very much depends on the braid: you have to present your link as the closure of a braid and then check the Markov moves, and if you just start drawing arbitrary tangles, somehow this whole machinery breaks down; I do not know if you can do anything for tangles. And then Roman has a second question, which I can answer: why does the variety only depend on the knot? So this is part (b) of the theorem — let me say maybe one more word about it, if I can scroll without crashing, sorry. So: why is X(β) an invariant? The idea is the following. You need to consider braids with negative crossings as well, and this goes through Legendrian links. To a Legendrian link one can associate a DG algebra, and Chekanov proved that this DG algebra is, roughly speaking, a Legendrian link invariant — in particular, the cohomology of the DGA is a Legendrian invariant. What we did is find an explicit algebro-geometric model for this algebra, in terms of these braid varieties, building on the work of Kálmán. The Spec of H^0 of this DG algebra is your X(β), roughly speaking, at least if β is a positive braid — I am lying a little bit, but let me put "approximately" everywhere. In fact, for the DG algebra you can think of it like this: the generators correspond to the crossings of the braid, and the generators of the next degree correspond to the equations in the matrix; roughly speaking, it is a Koszul complex for the equations of X(β), even when the braid is not positive but is only equivalent to a positive one. And we also have an algebraic model for Spec of H^0 just by looking at the braid: again, some generators correspond to crossings, and the differential counts certain things. And so we use these algebro-geometric models to check that this is invariant under all the braid relations and the Markov moves. And so it is invariant under the braid relations and then the Markov moves. Maybe I do not want to say more, because that is a lot of stuff.
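Still, let me at least record the rough shape of the model, very schematically — with the same "approximately" as above, and with B_i(z) denoting the braid matrices as before. For a positive braid word beta = sigma_{i_1} ... sigma_{i_r},
\[
H^0\bigl(A(\beta)\bigr) \;\approx\; \mathbb{C}[z_1,\dots,z_r] \,\big/\, \bigl(\text{entries below the diagonal of } B_{i_1}(z_1)\cdots B_{i_r}(z_r)\bigr) \;\approx\; \mathbb{C}[X(\beta)],
\]
so \(\operatorname{Spec} H^0(A(\beta)) \approx X(\beta)\), with one degree-zero generator per crossing and one higher generator per equation — which is what "Koszul complex for the equations of X(β)" means here.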
But in some sense, yes: we reprove the work of Chekanov — Chekanov worked over Z/2, and we work with complex numbers and had to make some other improvements — and we have a very explicit algebra, a very geometric model replacing that work, computing H^0 of this DGA and proving that it is an invariant. So that is the rough idea. Maybe one last thing I want to say here is that the model is the following: you have some algebraic variety, plus a collection of vector fields, which commute and integrate to a free action of C to some power; and these vector fields are parameterized by the negative crossings. If you just have an honest braid variety, it is nice and smooth and we understand everything; but if we start introducing negative crossings — say making a Reidemeister II move, where we introduce σ σ^{-1} — then we have to introduce this extra piece of data: for each negative crossing, a vector field. And then we have to check that these vector fields commute and integrate to a free, nice action of C to some power, and then the quotient by this action is a link invariant. That is what we check; but again, we do not have a general insight into why it works, except that the computation tells us so — we check every move separately, we check the Reidemeister moves, we check the Markov moves, and in the end this is a genuine link invariant. We use pretty subtle properties of this DG algebra, which I would love to understand better. And maybe another remark: I would love to understand this — I have never seen this construction in geometric representation theory, this setup where you have a variety with vector fields. It would be very nice to understand how this construction, where you add vector fields for the negative crossings, corresponds to the constructible-sheaf perspective we discussed before, but that is also not done, I would say. But yes, that is the idea of how one proves that the variety is a link invariant. Thank you everyone — so let's thank the speaker again.
|
Khovanov and Rozansky defined a link homology theory which categorifies the HOMFLY-PT polynomial. This homology is relatively easy to define, but notoriously hard to compute. I will discuss recent breakthroughs in understanding and computing Khovanov-Rozansky homology, focusing on connections to the algebraic geometry of Hilbert schemes of points, affine Springer fibers and braid varieties.
|
10.5446/54801 (DOI)
|
So last time we discussed some relations between braid varieties, and maybe positroid varieties, and link homology. I tried to outline the general picture, but I did not really give concrete examples, except for a very easy one for two-strand braids. So here is one concrete, non-trivial example which I really like — and especially this top picture has kept me up at night for, I don't know, at least ten years. There is a lot of information here. This picture is from the paper of Dunfield, Gukov and Rasmussen, who started this Khovanov-Rozansky triply graded link homology, and they computed a bunch of examples — or rather, they predicted, based on various tools and methods, what this homology should look like. In this particular case the prediction has certainly been confirmed, and there is a lot going on in this picture, so let me try to explain some pieces. The picture represents the triply graded homology of the (3,4) torus knot. It is triply graded: you have the a-degree, represented by the height, the vertical direction — this is a-degree 0, this is a-degree 1, and this is a-degree 2. The horizontal direction is supposed to represent the q-degree, whatever it is, and the numbers written there are the t-degree, or homological degree, maybe in slightly weird conventions that I will not discuss. In any case, you have three gradings, and the whole homology is 11-dimensional: there are 11 dots in the picture, five dots in a-degree 0, five dots in a-degree 1, and one dot in a-degree 2. And, as I have said many times, we are interested mostly in a-degree 0 — these five dots here — and five happens to be a Catalan number, but that may not be so important right now. What is more important is what the corresponding algebraic variety is, the braid variety that I talked about last time. That was also studied by a separate group of people, most notably Thomas Lam and his collaborators, in the world of cluster algebras and cluster varieties. They considered the variety corresponding to the cluster algebra of type E6 — I will explain why E6, probably next time, but it does not really matter: you take the E6 Dynkin diagram, you build a very specific cluster algebra for it, and then there is a standard procedure for building the cluster variety, if you know what that is. If you do not know what it is, it is, up to some torus, a positroid variety: this open stratum in the Grassmannian G(3,7), which I denoted by P_{3,7} last time — again, you represent every point of the Grassmannian G(3,7) by a 3-by-7 matrix and require that the cyclically consecutive 3-by-3 minors are all nonzero. And these people also computed the cohomology of this variety, even before understanding the relation to link homology. In particular, they found that the cohomology is five-dimensional and concentrated in even degrees, H^0, H^2, H^4 and H^6: H^0 is one-dimensional, H^2 is one-dimensional, H^6 is one-dimensional, but H^4 is interesting — it is two-dimensional. This perfectly matches the bottom row here, because I have 0, 2, 4, 4, 6 — exactly the same numbers. And again — maybe I did not say it clearly last time — it is not true that this variety has an affine paving or anything like that. It is a pretty complicated, non-compact variety, so it is not completely clear, as we speak, why this cohomology is all even.
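So, just to record the numbers from the two pictures in one place — a recap in the normalizations just described, nothing new:
\[
\dim \mathcal{H}\bigl(T(3,4)\bigr) = 11 = 5 + 5 + 1 \quad (\text{in } a\text{-degrees } 0,\,1,\,2),
\]
\[
\sum_j \dim H^j \, t^j \;=\; 1 + t^2 + 2t^4 + t^6 \quad \text{on the cluster side},
\]
so the total dimension there is 5, the Catalan number, matching the a-degree-zero row 0, 2, 4, 4, 6 of the top picture.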
And another feature of this homology, which was phrased differently in the two worlds: in the original paper of Dunfield, Gukov and Rasmussen, which goes back to 2005 or 2006, they observed that there is a perfect symmetry in this picture. There is a vertical axis of symmetry, and if you flip the picture you get the same picture up to some regrading, which they wrote down explicitly; it is clear that all these dots are symmetric. In the other picture it is less visibly symmetric, but what these people observed is the so-called curious hard Lefschetz property, which has also been observed in many different, closely related settings. There is a unique class in H^2 — a unique algebraic two-form here — and you can look at powers of this two-form, which give this class in H^4 and this class in H^6. What they observed is that multiplication by this class in H^2 actually gives an action of SL2, but centered slightly strangely: the two rows written here correspond to the weight filtration on cohomology, and the action preserves the difference between the weight and the homological degree. So there is an SL2 representation here which is four-dimensional, and a separate SL2 representation there which is one-dimensional and trivial; but in general you always have this action of SL2, and if you center things properly you see that the picture is symmetric. So this table here is really bigraded by the homological degree and the weight filtration, and if you do the proper change of gradings you get this picture, which is clearly symmetric — and there is an SL2 action here, so it is natural to expect an SL2 action there as well. And, again, I personally think these two pictures are very nice and illustrative: this is the first non-trivial example that people could compute and study in all possible detail, in both languages, in many settings. Their computation of the cohomology is completely independent of link homology — it just uses some recursions in the cluster world and some spectral sequences and things like that — while here you have a completely different computation, and it matches. So I think it is really cool and nice. And another thing: where does this SL2 action come from? For them, one feature of a cluster variety is that there is a pretty canonical two-form, which is very important for building the cluster structure, whatever it is; so this two-form is not random. I guess I will try to explain how to build this two-form in link homology, how it is related to the various structures here, and how to actually prove this symmetry — at least, what the idea is. Okay. So this was an illustration for the previous lecture, but it is also, I think, an announcement and an advertisement for what will follow, because that will be a bit more abstract, though I will try to keep it easy. And again, please ask questions. So, any questions about these two pictures? I just borrowed them from the papers of Dunfield-Gukov-Rasmussen and of Lam and his collaborators. Okay. If there are no questions, let us discuss the different structures. I want to talk not only about this SL2, which will appear, but more generally: what kind of homological operations can we construct, or should we expect, in link homology? So, very roughly, here is what happened in the previous lectures. We start from this polynomial ring R.
These are polynomials in n variables, which we think of as associated to the strands of our braid. And then, if you have a braid on n strands, we construct a complex of (R,R)-bimodules, denoted T(β). Equivalently, these are modules over C[x_1, ..., x_n, x_1', ..., x_n'], where the variables without and with primes correspond to the left and right actions of R. So the natural question is: what can we say about this complex of modules, or bimodules, from the viewpoint of pure commutative algebra or homological algebra? The first observation, which is very easy, is that for any symmetric function f in n variables, the left action and the right action of f on this complex are actually the same. This is true for a single crossing, and it is true for any braid; again, if you have seen Soergel bimodules, this is true for any complex of Soergel bimodules, just by definition. So this is just true. More abstractly, you can say: consider the algebra B, which is the quotient of the polynomials in the n variables and the n primed variables by the ideal generated by f(x_1,...,x_n) − f(x_1',...,x_n') for all symmetric functions f; of course it suffices to take, say, the elementary symmetric functions. This is clearly an algebra, and the action of my polynomial algebra in the x's and x-primes on T(β) — on every term of T(β), if you want — factors through this algebra B, because the left and right actions of symmetric functions agree. For people who have seen Soergel bimodules, this B is usually called B_{w_0}: it is the indecomposable Soergel bimodule for the longest element of S_n. But if you do not care about that, it does not matter. What matters is that if you have a random complex where symmetric functions on the left are not equal to symmetric functions on the right, it never comes from any braid, or from anything resembling a braid. So this is one serious restriction. The second serious restriction is that the action of the x_i's on the left is actually homotopic to the action of the x-primes on the right — with the caveat that you need to twist the action on the right. Any braid corresponds to a permutation in S_n, and the left action of x_i is homotopic to the right action of x'_{w(i)}, where w is the permutation corresponding to β. Here is one concrete example — some random braid, I just drew it, I do not know what it is. How do you get the permutation? You start here, label the strand by 1, and trace it until you get here. So this permutation sends 1 to 2, 2 to 1, and 3 goes back to itself. So this property says that x_1 is homotopic to x_2', x_2 is homotopic to x_1', and x_3 is homotopic to x_3'. And this is true, again, for any braid; in general it is not true for a random Soergel bimodule, so it is quite special to braids. The way to remember this, as people in the knot homology community like to say, is that the action of the variables slides through the crossings: you can imagine that you put a marked point here, and the action of x corresponds to this marked point — let me draw this marked point. You can slide this marked point over here and you get another action of x here; they are homotopic, not the same. And then you slide it again, and you get here.
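In formulas, the sliding property just described says, for each i, that there is an operator \(\psi_i\) acting on T(β) with
\[
d(\psi_i) \;=\; x_i - x'_{w(i)},
\]
so the left x_i and the right x_{w(i)} agree up to homotopy; these ψ's will be used in a moment.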
And so one way to phrase it is that the actions for different marked points are not the same, but they are at least homotopic — so on homology it is all the same. And then, after you close the braid, you identify x_1 with x_1', x_2 with x_2', and x_3 with x_3'. So before, x_1 was homotopic to x_2', but now, after closure, x_2' is the same as x_2, and x_1' is the same as x_1, and x_3 is homotopic to x_3', which is the same as x_3. So if we close the braid, we get an action of a smaller polynomial algebra: we have a polynomial algebra except that some of the variables are identified. And so — I think some people asked this in the first lecture — do we still have an action of R? In some sense yes, but you need to be careful. A better way to phrase it is that you really have these marked points, and the action of a polynomial variable corresponds to the choice of a marked point; if you slide the marked point around, the actions are homotopic, and if you close the braid, the action of the marked point on the top gets identified with that of the marked point on the bottom. In particular, in this example, if we ignore all the homotopies: the action of x_1 is the same as the action of x_2, the action of x_2 is the same as the action of x_1, and the action of x_3 is separate — we do not know anything about it. So effectively we have an action of a polynomial ring in two variables, x_1 and x_3, on the homology. Phrased like this: if the closure of β is a link L with r components, then the triply graded homology of L is naturally a module over a polynomial ring with one x variable per component of the link. Of course, the components of the link correspond to the cycles of the permutation w corresponding to β; in this case the permutation is just the transposition (1 2), so there is one cycle connecting 1 and 2, and 3 sits separately — two interesting cycles, two interesting variables. Said differently, you have an equivalence relation on the x's, given by the cycles, given by the strands of the braid. So this is quite well known, and you get an interesting polynomial algebra action. And — I did not write it here, so let me write it — for knots, r equals 1 and all of x_1, x_2, ..., x_n act in the same way, so effectively you have a one-variable polynomial ring acting; the homology is actually free over this one-variable polynomial ring, so you can quotient it out, and that gives the finite-dimensional vector space which we discussed before. This is free. But if you have a link, even with two components, it is usually never free, and you can study the module structure over this polynomial ring and get a lot of interesting stuff — we will discuss it very soon. Okay. Again, this is a very general property, and we will see these actions in lots of different settings, tomorrow and then Friday — especially on Friday. Now, you can do a little more homological algebra and study the homotopies between the x's and the x-primes more closely.
So we do not just say that they exist: we need to say what these homotopies are, and what kind of homological operations we can build from them. Let me spell out what it means to have a homotopy between x_1 and x_2' in the example drawn here. There is some operator ψ_1, and the differential of ψ_1 is x_1 − x_2' — that is the definition of a chain homotopy between x_1 and x_2'. Similarly, there is an operator ψ_2 whose differential is x_2 − x_1', and an operator ψ_3 whose differential is x_3 − x_3'. And now we close the braid. What happens? We identify the primed variables with the unprimed variables with the same index. So the differential of ψ_1 becomes x_1 − x_2, the differential of ψ_2 becomes x_2 − x_1, and the differential of ψ_3 becomes x_3 − x_3, which is zero. And the differential of ψ_1 + ψ_2 is x_1 − x_2 + x_2 − x_1, also zero. So ψ_1 + ψ_2 and ψ_3 give closed operators on the triply graded homology — non-trivial homological operations of degree minus one, which are interesting. In a sense they are monodromies of the marked points: you have a marked point here, you use ψ_1 to move it here, then identify this point with that one via the closure, which I do not want to draw, and then you slide it there and come back. So ψ_1 + ψ_2 is the monodromy which we use to come back, and this monodromy turns out to be closed. This gives interesting homological operations in link homology, and for each cycle of the permutation corresponding to the braid you get such a monodromy — one monodromy per component of the link. And one abstract thing you can do — in a highbrow way it is some kind of Koszul duality, but you can just do it — is to use these ψ's, these monodromies, to deform the differential on the chain complex. You start from the chain complex T(β) associated to the braid, tensor it with a polynomial ring in new variables y_1 through y_n, and then deform the differential by taking the old differential and adding the sum of ψ_i y_i, where the ψ's are the monodromies from above and the y's are the new formal variables. Again, you can check that, at least when you close the braid, this new differential squares to zero. And — maybe I did not say this yet — this is a link invariant: the theorem, which we proved with Matt Hogancamp, is that this deformed homology is a link invariant. It satisfies all the Markov moves, it satisfies all the braid relations, so you can define this deformed homology. Again, you have one monodromy per component, and effectively this means that you have one y variable per component after all these identifications, so the number of deformation parameters is again the number of components of your link. And here is one concrete example which might interest some of you: the Hopf link, with two components, so you expect two deformation parameters and two x variables which survive.
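So, before the example, let me just record the deformed complex in symbols — this is exactly what was described above, nothing more:
\[
\Bigl( T(\beta) \otimes \mathbb{C}[y_1,\dots,y_n], \;\; d_y = d + \sum_{i=1}^{n} y_i \, \psi_i \Bigr), \qquad d_y^2 = 0 \ \text{ after closing the braid},
\]
and after closure the y's, like the x's, effectively become one variable per component of the link.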
So the old complex is the complex from R to R to R, with differentials given by zero and by x_1 − x_2 — we discussed in lecture one how to get this complex, by first considering the complex of Soergel bimodules and then taking Hom from R. And there is only one ψ — or rather two ψ's that agree up to sign — which goes from the middle R to the left R, and it is this map. It is a chain map: if I apply this map and then the differential I get zero, and if I apply the differential first there is no ψ left, so I get zero as well. So it is an honest chain map, a closed endomorphism of this complex. And then we deform by the rule above: we add (y_1 − y_2) times this ψ, this red arrow. Again, in the deformed complex the square of the differential is zero — that is easy to see — and you can actually compute the homology. The answer: over R[y], which is just the polynomials in x_1, x_2, y_1, y_2, you have two generators Z and W, living in this degree and in that degree, and one relation, namely Z (y_1 − y_2) = W (x_1 − x_2), because the differential of the middle generator is precisely W (x_1 − x_2) − Z (y_1 − y_2) — maybe I am off by some signs, but it does not matter. So this is something concrete which you can compute, and again you see that the homology is in even degrees. You can ask why this is interesting, and that I will explain in a second — let me explain it and then ask for questions. One application which we found is the following. Take the (n, kn) torus link; this is a link with n components — for example, for k equal to 1 it is T(n,n) — all components are unknots, and the linking number between any two different components is equal to k. We computed both the deformed and the undeformed triply graded homology for this link. How do you describe the answer? If you like combinatorics, you can do it recursively, as I sketched in the first lecture: there is a really complicated recursive description, and there are some more explicit combinatorial formulas, but that does not give you, for example, the module structure over the x's or the other interesting structures. Instead, we deform the homology and say that the result is J^k, where J is the following ideal. You take the polynomial ring in 2n even variables x_1, ..., x_n and y_1, ..., y_n and n odd variables θ_1, ..., θ_n, and you look at the ideal generated by x_i − x_j, y_i − y_j and θ_i − θ_j, and take the intersection of such ideals over all i not equal to j. If you do not like the odd variables, you can just ignore them: restricted to its degree-zero part, this is the ideal generated by x_i − x_j, y_i − y_j, intersected over all i not equal to j — just an ideal in the polynomial ring in x_1, ..., x_n, y_1, ..., y_n. And this thing has a geometric meaning: consider n points on C^2 — which is of course related to the Hilbert scheme of n points on C^2, but we will see that a bit later, I guess on Friday — so for now, just n points on C^2 with coordinates (x_1, y_1) and so on up to (x_n, y_n), and you look at the pairwise diagonals, the locus where some two of the points coincide: where does the i-th point coincide with the j-th point?
Well, this is a codimension-two subspace, given by the equations x_i = x_j and y_i = y_j, and the ideal above is the ideal of that subspace: the i-th point equals the j-th point precisely when x_i − x_j = 0 and y_i − y_j = 0. And if I take the union of all these codimension-two subspaces, I take the corresponding intersection of ideals — and that is our J, with the θ's giving a super-analogue of it, which is not so important for now. Then we take powers of this ideal, which also makes a lot of sense from the Hilbert scheme point of view — especially from the work of Mark Haiman — but we will discuss that properly on Friday. For now, this is just some ideal in this polynomial ring, and the claim is that this ideal, as a module over the polynomials in the x's and y's, is actually the deformed homology of the torus link T(n, kn). To me, I would say, this gives a lot more structure: maybe the combinatorics describing the actual dimensions of the graded pieces of this ideal is hard, but the ideal is clearly interesting and important, and the fact that it is related to link homology is, I think, quite remarkable. If you want the undeformed homology, without the y variables, you just kill them: you quotient by the maximal ideal in the y's. And then a separate result of Mark Haiman — plus a little bit of work, given all this data — says that J^k is free over the y variables, so you do not lose any information. It is a kind of paradox: you want to describe some module over the x variables — as I tried to explain in the beginning, for any link with n components you expect an action of a polynomial ring in n variables, and you want to describe this module over that ring — but instead you introduce additional variables y_1 through y_n, describe the deformed object using the x's and the y's, and then just kill the y's. And this turns out to be the right thing to do, because in this case the module is flat over the y's. It is interesting that it is much easier to describe the deformed homology than the undeformed one: I do not know any good commutative-algebra description of the undeformed answer other than J^k modulo y·J^k. In examples you can see it, but, yeah, that is all I want to say here. So, any questions about this theorem, about the deformation, about anything? Maybe one remark: a similar thing — not exactly what happens in this setting, but a good motivation — is that if you have a variety with a torus action and you want to describe its cohomology, sometimes it is easier to describe the equivariant cohomology, by localization or whatever, and then just kill the equivariant parameter. Roughly speaking, that is what happens here, but it is not exactly the same thing. Okay, so let me again pause for questions. Any questions here? Comments? No questions. Okay — anyway, if you do not care about this example, you might see it again on Friday; for now it is just an example. The next thing is really what I want to talk about today: the symmetry and the SL2 action, and how to get it from these x's and ψ's and homotopies. For this I want to do some homological algebra and rephrase what we said in parts one, two and three using slightly more advanced homological-algebra language. So you have this algebra B.
These are the polynomials in x_1 through x_n and x_1' through x_n', with the relations that the symmetric functions in the x's are equal to the symmetric functions in the x-primes. And you can resolve R — R is naturally a module over this B — by free B-modules. Let me denote this resolution by curly A. Concretely, what is this A? You have B, you have the variables ξ_1 through ξ_n which we saw before, with the differential of ξ_i being x_i − x_i' — these are the homotopies I talked about — and you have additional variables u_1 through u_n, whose differential is more complicated: the differential of u_k is the sum over i of ξ_i times the complete symmetric function h_{k−1}(x_i, x_i'). Maybe a good way to write it is that h_{k−1}(x_i, x_i') is just (x_i^k − (x_i')^k)/(x_i − x_i'), which is easier to digest; you multiply this by ξ_i and sum over all i. A very good exercise, to understand what is going on, is to show that d squared is equal to zero: for example, if I take d of u_k, I get this expression, and applying the differential again gives zero — not completely obvious, but not hard to see; I will write the one-line check below, after I finish with the coproduct. A slightly harder exercise is that this is indeed a resolution, that there are no more relations. So you can think of it this way: in R you identify the left and the right actions of the x's, which is what the ξ's do; then you want to describe the syzygies among the ξ's, and these are given by the u's; and after that there are no more interesting syzygies. Anyway, this is some complex, and you can think of it as a DG algebra if you want. And the theorem that we proved in a recent paper with Matt Hogancamp and Anton Mellit is that this DG algebra acts on any Rouquier complex T(β), for any braid β: you have an action of all these ξ's, which we saw before, and an action of these u's, which is more interesting, on the complex of bimodules for any braid. Let me sketch the argument. First, you need to construct the action of the ξ's and the u's for a single crossing. The ξ's you construct explicitly — that is not so hard; maybe I do not want to write it, but it is really easy, and this is exactly why the left and the right actions are homotopic. And the action of u is just zero, because it has homological degree two and there is no room for it. You still need to check that the relations are satisfied — for example, if u is equal to zero, then this right-hand side must act by zero — but this is easy to check; I do not want to explain it, but it is true. Then you want to extend this to products of crossings, to an arbitrary braid, and here is an interesting idea: if you want to extend an algebra action to tensor products, what do you usually do? You build a coproduct on the algebra. So we construct a coproduct Δ on this algebra A; more precisely, the coproduct goes from A to A ⊗_R A, where the first copy of A has the x's and the x-primes and the second copy has the x-primes and the x-double-primes. The coproduct is given by explicit formulas — maybe not so important. The coproduct of x_i is just x_i ⊗ 1, so the x's stay on the left; the coproduct of x_i' is 1 ⊗ x_i', which is x_i''.
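Let me also write out that small exercise from a moment ago — why d squared vanishes on the u's — since it really is one line (everything happening inside B):
\[
d(d u_k) \;=\; \sum_i (x_i - x_i')\, h_{k-1}(x_i, x_i') \;=\; \sum_i \bigl( x_i^{\,k} - (x_i')^{\,k} \bigr) \;=\; p_k(x) - p_k(x') \;=\; 0,
\]
because the power sums are symmetric, and in B the symmetric functions in the x's and in the x-primes are identified. Okay, back to the coproduct.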
So the x-primes end up in the rightmost factor. Then the coproduct of ξ_i is ξ_i ⊗ 1 + 1 ⊗ ξ_i — that is not surprising. What is more surprising is that the coproduct of u_k, of these new guys, is u_k ⊗ 1 + 1 ⊗ u_k plus an extra correction term, and this correction term involves the complete symmetric functions in x_i, x_i' and x_i'' times the ξ's. So there is some formula — maybe it is not so important; I just want to say that if you do not put in this correction term, it does not work, and the correction term is actually super important for what follows. One of the exercises, I think, is to check that this is indeed a chain map from here to here — you can also explain more abstractly why such a map should exist, but maybe it does not matter. And so, if M and N are two modules over this algebra, you have an action of A on M, an action of A on N, hence an action of A ⊗ A on the tensor product, and then, using the coproduct, you turn that into an action of A on the tensor product. Because we constructed the action on single crossings, we can extend it, using the coproduct, to arbitrary products of crossings, to arbitrary braids. And let me say that, because of this correction term, this is really non-trivial: although the u's act by zero on single crossings, when we start multiplying things the correction terms accumulate and become non-trivial. So even if you have two crossings on which u acts by zero, once you multiply them the correction terms give something non-trivial, and the action of u becomes non-trivial pretty soon, and interesting. There is a question in the chat — sorry — yes: is the coproduct a bialgebra, a Hopf algebra structure? It is a bialgebra: the coproduct is an algebra homomorphism, if you want; we define it on the generators and then extend to products. The only subtlety, which I am really sweeping under the rug, is that this is not coassociative on the nose — maybe I should say this properly, although I guess nobody here would care: Δ is coassociative only up to homotopy. (And sorry — I do not know why it crashes every time I give a talk here; usually it does not crash.) So I was saying that this is coassociative up to homotopy, and if you want to be really careful you need to keep track of these homotopies and higher homotopies, which give you some kind of A-infinity structure; but you can actually avoid that, in this case, by some tricks. We could write down this higher stuff, but it is really not pleasant. Instead, we prove abstractly that these higher homotopies — which give the homotopy coassociativity and some kind of higher associativity relations — exist, on abstract grounds, and then it does not matter how we choose them: the result will be the same. But in principle there is something interesting there. And again, it is something really basic: it is all about this algebra B and the resolution of R over B — there is nothing else. And maybe it is a good moment to say that, because A is a resolution of R, and you know that R ⊗_R R is R,
A ⊗_R A is, in some sense, a resolution of R ⊗_R R, which is again R. So you know that there should be a chain map relating these resolutions; we just write it explicitly and extend. So this is an algebra homomorphism, and the coassociativity up to homotopy essentially follows from what I just said, but there is a trick for proving it very cleanly, and that is what we use. That is definitely too technical; but why is this coassociativity important? Because if you have a triple tensor product M ⊗ N ⊗ K, a priori there are two ways to define the action on it, and they are honestly different — but homotopic. Okay. So let me try to sum up. There is an action of this algebra on any braid: an action of the interesting operators ξ and u on the Rouquier complex T(β) for any braid β — but they are not closed. The solution to this issue is to deform the homology, as I did before. In the deformed homology you deform the differential, and then you can also deform the u's: instead of u_k you consider u_k plus, again, some correction terms involving partial derivatives in the y's. This is, again, a piece of formal homological algebra, but it gives you operators on the deformed homology. And what we proved is the following: these deformed u's — these F_k's — are actually closed under d. This is easy to see in the following sense: the differential of u is not zero, and these two terms do not commute, because the y's do not commute with the partial derivatives, but the whole thing is designed so that the two failures cancel each other, and the commutator with d is zero. It is more or less clear that the F's commute among themselves; F_k and x_i also commute, that is clear; and F_k and y_i do not commute, but you know their commutator: before closure it is something like h_{k−1}(x_i, x_i'), and after closure it is just k x_i^{k−1}. These F's are what we call tautological classes, and they act on the deformed homology of every link. And the main result here is that F_2, the one corresponding to u_2, is an interesting operator which satisfies this curious hard Lefschetz condition, and it actually lifts to an action of SL2 on the deformed homology of any link L. As a corollary, the deformed homology has an SL2 action and hence is symmetric — this explains the symmetry we saw in the picture at the beginning, and it resolves the conjecture of Dunfield, Gukov and Rasmussen that for knots you always get this symmetry, because for knots the deformed homology is essentially the same as the undeformed one. So the bottom line, if you do not care about the symmetry, is that we have a lot of interesting commuting operators acting on link homology, and they are very new: you can ask tons of questions about them. How do they act? What other relations do they satisfy? How do they interact with this SL2? What is the structure of the homology, in known cases, as a module over these operators? These are all very interesting questions, and this is all very open. And so, geometrically, you can ask: okay, suppose we do not care about all this homological algebra, we like algebraic varieties — what happens there? What happens for these braid varieties, or positroid varieties, or whatever?
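Before going to the geometry, let me record the relations just stated, after closing the braid — schematically, with signs and normalizations as above:
\[
d(F_k) = 0, \qquad [F_k, F_l] = 0, \qquad [F_k, x_i] = 0, \qquad [F_k, y_i] = k\, x_i^{\,k-1},
\]
and before closure the last commutator is \(h_{k-1}(x_i, x_i')\) instead.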
And so we need to find some interesting operators acting on the cohomology of these varieties. Of course, the natural guess is that there are some kind of tautological classes in the cohomology, and we just multiply by these tautological classes. And of course this is the case — it was one of the motivations for us to build this DG algebra story — and it goes back to the work of many people, Atiyah, Bott and Shulman in particular, and more recently, I guess, Lisa Jeffrey and others: how to build tautological classes on character varieties and, more recently, on braid varieties. So this F_k corresponds to some algebraic differential form on the braid variety. How is it built? You start from the group GL_n, and Bott and others give you a family of differential forms on the group and on some related spaces. Given any symmetric function Q of degree r in n variables, one can construct a characteristic class, which I will call φ_0(Q), in H^{2r} of the classifying space BG — this is probably familiar to many people. What is also familiar is that there is another interesting class — really, it is a form in this case, so maybe I should write a form: an algebraic differential form on the group itself. It is a (2r−1)-form, it is closed, so it represents a class in H^{2r−1} of the group — that is why I wrote it. In fact, if you think about the spectral sequence for the universal fibration EG over BG, this class kills that class by a differential; you may have seen this. In any case, for any symmetric function you can build an interesting cohomology class of degree 2r−1 on the group itself. That is also quite well known; what is less well known is that you can continue the procedure: there is a (2r−2)-form on G × G, the product of G with itself. This (2r−2)-form is not closed, but it satisfies some kind of cocycle condition: its differential is a (2r−1)-form expressed in terms of φ_1. Here is a concrete example of this. And now what you can do is the following. If you have two algebraic varieties X and Y with maps to the group — or, alternatively, you just have matrices depending on parameters which live in these algebraic varieties X and Y — then you can build an interesting (2r−2)-form on the product. Suppose that the pullback of the form φ_1(Q) from the group to X, and its pullback to Y, turn out to be exact, the differentials of some forms on X and on Y. The important point is that you can then glue these forms together: you take the form on X, the form on Y, and the correction term, which is the φ_2(Q) that appeared above, pulled back via the two maps to G. Maybe that was too fast, but it does not matter: there is a procedure for building interesting forms on products of varieties — if I have an interesting form here and an interesting form there, I take their sum and add this correction form, which lives on G × G.
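Schematically, and up to signs I am not writing carefully: if \(f_X : X \to G\) and \(f_Y : Y \to G\), with \(f_X^*\varphi_1(Q) = d\omega_X\) and \(f_Y^*\varphi_1(Q) = d\omega_Y\), then the glued form on X × Y is
\[
\omega_{X \times Y} \;=\; \omega_X + \omega_Y + (f_X \times f_Y)^{*}\, \varphi_2(Q).
\]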
And I claim that this is very similar to the coproduct on my DG algebra: you have this kind of element, or the action of u on the left, the action of u on the right, and then you have this correction term which glues them together. And so classically people have used this construction and this specific form phi_2 of Q to build a two-form, the symplectic two-form, on character varieties. But Anton Mellit, for example, proved that one can use it to construct an algebraic two-form on this braid variety. And this two-form on the braid variety is constructed exactly like this. What is the braid variety? A bunch of matrices; for each of them we put the two-form to be zero, and then you glue them by this procedure inductively. And this gives a non-trivial two-form, and this turns out to be almost symplectic, up to some torus action, up to some things like that. And this two-form is closed and it gives you a cohomology class, and it does satisfy the curious hard Lefschetz theorem for this braid variety X of beta. And so you can use this to build an action of SL_2 on the cohomology of the braid variety, and you can use this to prove this curious hard Lefschetz by this machinery. So actually these people also observed this curious hard Lefschetz in this case, but they used very, very different machinery. But this is exactly the same two-form which appears here and which represents the class in H^2. And so maybe just to conclude, I want to say that in principle this works for any symmetric function. And so the fact that I had this F_2, so this gives me an analog of F_2, but I can build an F_3, an F_4, an F_5 and so on. It has lots of interesting structure, and this gives you, in this case, lots of interesting forms and cohomology classes on X of beta. By the same procedure, you have something on matrices and then you keep gluing them. So it might be less explicit, but you might not need it to be more explicit. And so for example, the next tautological class would correspond to the sum of x_i cubed, I guess, and this would live here. So this would give you four-forms. So if r is equal to 3, where is r? So if r is equal to 3, it would give you a 6-form on BG. It would give you a 5-form on the group, which is there if the group is at least GL_3. And it gives you a class in H^5 of the group. Maybe I'll write it down actually. So if r is equal to 3, you have H^6 of BG and you have H^5 of G. And you know that there is a generator of degree 5 in the group, and this gives you that class. It's not closed, but there is a 4-form on G cross G. And then by some reasoning, you can check that this 4-form has an interesting weight, and this actually gives you a generator here. So this is kind of the form which you can build from the cubic symmetric function. And it's there. And in this case, you see that the cohomology, as a ring, is actually generated by this class in H^2 and this class in H^4. And it's a very natural question, which as far as I know is wide open: is it true that the cohomology is generated by the tautological classes for torus knots? So maybe I will ask this question. Some problem slash conjecture. Is it true that the cohomology of X of beta is generated by the tautological classes? Or maybe equivalently, HHH of T(m,n) is generated. And you see that all these tautological classes, they are in even degree. And we just proved that the cohomology is in even degrees, so maybe it's not so surprising.
And one reason to believe it is that in a slightly different setting, there is a space with very, very similar homology, and its homology is generated by the tautological classes. And another question is, whatever, so suppose that it is generated by tautological classes. Can you describe this cohomology? So again, here the cohomology would be a ring, because it's the cohomology of a space. And can you describe this ring by generators and relations? And here the homology is not really a ring, there is no ring structure, but it's a module over F_2 and F_3. And hopefully this module has one generator, or one kind of cogenerator. And again, how to describe this module explicitly, what are the generators and relations? These are all wide open, and I think that's an excellent problem for someone to work on. So at least I tried to convince you that, besides this combinatorics and the fact that we know the dimension of homology, this opens up lots and lots of questions, like what are these tautological classes, how do they act, and what can we say about all this. And I think I'll stop here. So again, I'm very sorry for all these interruptions and disconnections. Thank you very much, Eugene. Any questions? Can you say a little bit about which symmetric function gives rise to this element that does this curious hard Lefschetz? Yeah, the sum of x_i squared. So if you have the sum of x_i squared, this would give you, so again, I should write it down. So if r is equal to 2, then you get a 2-form on G cross G. And this is precisely that 2-form. So you will get a 3-form on G, but you will get a 2-form on G cross G. And this is this 2-form. And so the recipe is that you just take, for this particular Q, you take this correction term, this 2-form on G cross G, which is very explicit. And whenever you see a bunch of matrices, you do it inductively. So for the first matrix the form is 0, for the second matrix the form is 0, for the pair of matrices you have this correction term. And then you have the third matrix, and then you add the correction term for the product of the first two matrices and the third one, and so on. So you build this 2-form inductively. And whenever you close the braid, this gives you a class in H^2. And this class in H^2 will give you hard Lefschetz. Okay, thanks. And maybe I want to add that Anton actually used this theorem over here to prove curious hard Lefschetz for more general character varieties, because they can be somehow stratified by these. So this is a really, really powerful thing. And most of what I explained in the beginning was motivated by, and kind of trying to translate, this to knot theory, in more algebraic language, more kind of homological algebra language. Yeah, now I can see, sorry, I crashed again. Maybe my network or my Zoom or something. I am very sorry. Were there any more questions? It seems there are no more questions. One more chance for a last minute question. Well, if not, let's thank Eugene again.
|
Khovanov and Rozansky defined a link homology theory which categorifies the HOMFLY-PT polynomial. This homology is relatively easy to define, but notoriously hard to compute. I will discuss recent breakthroughs in understanding and computing Khovanov-Rozansky homology, focusing on connections to the algebraic geometry of Hilbert schemes of points, affine Springer fibers and braid varieties.
|
10.5446/54802 (DOI)
|
All right, thank you so much for inviting me. So I'm really sorry for not coming in person to Paris and to IHES. It would be great to see you all in person, but hopefully one day I will be there, and yeah, I'll try my best online. So again, if you have any questions, I guess Andrei will relay the question to me, and I'm not sure I will monitor the Q&A, and I'm not sure who could raise hands. So please ask questions out loud. Okay, so the plan: there will be five lectures, and the plan is to start slowly. So today will be mostly the definition of Khovanov-Rozansky homology. What is this thing? I'll give some examples, and mostly it will look like something quite formal, a commutative algebra exercise. So first of all, I have to say, and I'll say it again in a couple of minutes, that I will talk about some link invariants. I guess none of you are really topologists, and neither am I. So I will try to explain that this is not really a topological problem that we're talking about. You can phrase everything purely in terms of commutative algebra, and I'll give a lot of examples: how to define this homology, how to compute this homology in slightly more complicated examples, and what we know and what we don't know about it. Then in the next lecture, I'll talk about braid varieties, and for a large class of braids and for a large class of links, this is actually a very explicit geometric model for computing, or at least defining, this homology. So roughly speaking, you would have an algebraic variety, and the homology of this algebraic variety will be Khovanov-Rozansky homology, with some subtleties which I will explain next time. Then in lecture three, I will talk about slightly more subtle structures and homological operations in this homology. So this homology, unlike the homology of a topological space, has no multiplication, but there are lots of interesting homological operations, and there are the tautological classes, or analogs of those, which are familiar to many people in the audience, and there is this nice operation of deformation or y-ification of this homology, and I will explain how to work with this and how to compute a lot more examples with these techniques and these operations, and prove some interesting results. In the fourth lecture, I'll talk about algebraic links; that's an even smaller class of links, but there is a very interesting connection to plane curve singularities and affine Springer theory, and some structures and some varieties that will appear are actually very closely related to the course by Joel Kamnitzer, and to affine Springer theory, because that's very, very related. And finally, in the last lecture, I'll talk about the Hilbert scheme of points on the plane, which I guess is familiar to most people here in the audience, and how that is connected to link homology. So there will be lots of different models, how to use commutative algebra or algebraic geometry to understand what's going on with this homology. So this is the idea. And before all this, I just want to spend a couple of minutes and talk about what we mean by a link invariant, how to build a link invariant, and what a link is, if you haven't seen it. So first of all, I'd like to start with the braid group. So the braid group is a group with generators sigma_1 through sigma_{n-1} and the relations written here. So sigma_i sigma_{i+1} sigma_i is sigma_{i+1} sigma_i sigma_{i+1}, and sigma_i sigma_j is sigma_j sigma_i if i and j are far apart.
So I think many people have seen this group and one should think about sigma i as a simple crossing between so I have n strength and I cross i's and i plus first strength and I choose this to be the positive crossing. So this would be sigma i and if I have a sigma i inverse, this would be the opposite crossing. And so by vertical stacking of such pictures, we can get arbitrary braid. So that should be familiar to most people. The last familiar thing for algebraic generators is the two theorems of Alexander and Markov. So Alexander's theorem says that any link you can actually obtain is a closure of some braid. So here alpha is an arbitrary three-strand braid and I can close it up like this. So I just joined the ends of the braid on top and on the bottom and they connect like this. This is a closed diagram with no ends and this is a link. There were several components. Depending on the braid alpha, you could have three components. You could have two components. You could have one component in this picture, depending on the permutation corresponding to alpha. For example, if alpha is the identity braid, so nothing happens here and then I just have three circles. So this is a link with three components. And then sliding the result theorem of Markov says that we can say when two braids give the same link. So two braids close to the same link, if and only if, they're related by the following moves. So I can relate alpha-beta and beta-alpha. So this is equivalent to conjugation of a braid. So I can just take this part of the braid, slide it down. So when I slide it to the right, it will get alpha upside down. When I slide it again, I will have alpha upside up. And so these two links are clearly the same. And then if I have alpha is a braid and n-thrands, I can add one more strand over here. Mark this. So this is a new strand. And then I add a crossing marked in red circle. And that crossing could be positive or negative. And again, here I changed the number of strands. So let me mark this like this. So I go from two strands to three strands. But it's clear that I can undo this thing, and the resultant link will be the same. And so these two operations of conjugation and what is called positive and negative stabilization, they don't change the link. But they change the braid, obviously. And so in particular, the second operation would change the number of strands. You would have two different braids with different number of strands, which close up to the same link. And so even all this, so let me give a rough plan for everything what will happen. So we're interested in anthropological link invariance. So we have a link, which is some curve in three dimensions. But we don't regard it as a curve in three dimensions. We want to get more algebraic structures there. And so first of all, we present this link as a closure of a braid. And to define a successful link invariant, what do we need to say? You need to say that something is assigned to crossing. So for each crossing, positive or negative, we assign something. And this something will depend on the context, of course. Then this something should satisfy braid relations, which I wrote in the beginning. So sigma, sigma plus one, and sigma plus one, sigma plus one, and so on. And so if this happens to satisfy braid relations, then automatically we get a braid invariant. So for any braid, however, we write it as a product of generators, we get something invariant. And this is not enough to build a link invariant. 
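Just to have the relations in front of us, here is the braid group presentation and the two Markov moves from the discussion above, written in LaTeX with the conventions as stated in the lecture.
\[
\mathrm{Br}_n = \left\langle \sigma_1, \dots, \sigma_{n-1} \;\middle|\; \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}, \quad \sigma_i \sigma_j = \sigma_j \sigma_i \ \text{for } |i - j| > 1 \right\rangle,
\]
and two braids close to the same link if and only if they are related by conjugation and stabilization:
\[
\widehat{\alpha \beta} = \widehat{\beta \alpha}, \qquad \widehat{\alpha \,\sigma_n^{\pm 1}} = \widehat{\alpha} \quad \text{for } \alpha \in \mathrm{Br}_n \subset \mathrm{Br}_{n+1}.
\]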
So to build a link invariant, we need to define, describe some operation, how to close a braid. And this you should think of it as some kind of trace in more or less abstract sense. But anyway, so you have a braid and you have to explain what does it mean to close a braid. And then for this operation of closure, you need to check additional invariance under conjugation and stabilization, under Markov moves. So that whatever product of generators you assign to alpha, beta, and to beta, alpha, they could be different. But once you close the braid, the results are the same. And the same thing here. So these are different braids, alpha and alpha with this extra yellow strand. So there will be different invariance of braids and they would leave in different categories, if you want. But then once you close the braids, this operation should give the same result. And that is just a formal consequence of the general theorem of Alexander and Markov. And so what I need to explain for you is how to assign something to crossing, verify braid relations, and then describe the separation of closing a braid, which is, I would say, a separate part of the construction. Not only checking braid relations, but what does it mean to close a braid? And that's pretty much it for the general outline. So sometimes it's also useful to restrict yourself to some subclasses of braids. For example, just positive braids when you don't have negative powers. And maybe you want to also restrict your invariance. So you can say maybe we're interested in braid invariance, which are only invariant under conjugation, but are not preserved by Markov moves, for example, by stabilization. There are lots of these invariance. We can also ask about invariance and we'll see them today and next time, which are invariant under conjugation, and only positive stabilization, but not invariant under negative stabilization. And so this won't give you a typological invariant, but for many purposes, it's still very good and very interesting to study. And maybe let me pause here and ask if there are any questions about this kind of very general plan, how to build the invariance. Any questions? Okay. So if there are no questions, let's actually go to actual algebra. So we start with the ring R. And this is a polynomial ring in N variables X1 through XN. And there is an action of symmetric group which can use the variables in obvious way. And we will consider bimodules over this ring. So to every braid, we will associate a bimodule or a complex of bimodules over this ring. So the most elementary bimodule that we will consider is called bi. So this is R tensor R over RSI. So SI is a transposition of i and i plus one. And so this RSI, let me write it down maybe. So these are really SI invariant functions. Oops, sorry. Okay. Fine. So these are SI invariant functions on R. And so in particular, if I have Xs, Xi, X1 through XN will be elements and generators of this R. X primes will be generators of the other R. And so what are the SI invariant functions? So we require that Xi plus Xi plus one is equal to Xi prime plus Xi plus one prime. So this is an invariant function under transposition of i and i plus one. So the action of this element on the left is equal to the action of this metric function on the right. The action of Xi Xi plus one on the left is equal to the action of Xi prime Xi plus one prime on the right. And then the action of Xj is equal to the action of Xj prime on the left and on the right provided that j is not equal to i and i plus one. 
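In symbols, the bimodule just introduced can be written as follows; this is only a LaTeX transcription of the relations listed above, with the internal grading suppressed.
\[
R = \mathbb{C}[x_1, \dots, x_n], \qquad B_i = R \otimes_{R^{s_i}} R \;\cong\; \mathbb{C}[x_1, \dots, x_n, x_1', \dots, x_n'] \big/ I_i,
\]
where the ideal I_i is generated by
\[
x_i + x_{i+1} - x_i' - x_{i+1}', \qquad x_i x_{i+1} - x_i' x_{i+1}', \qquad x_j - x_j' \ \ (j \neq i, i+1),
\]
with the left R-action through the x-variables and the right R-action through the x'-variables.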
So all these j's are clearly invariant under this transposition. And sometimes it's useful to draw. So I won't draw a lot of stuff because it's really more on the break course. But sometimes people like to draw this as the following pictures. So I have Xs on the bottom, X1 through Xn, X primes on the top, X1 prime up to Xn prime. And then what happens is that Xi and Xi plus one, they merge together and then they split into Xi prime and Xi plus one prime. But when they merge and they split, you don't know if they stay the same or they're commuted and transposed. And so all the things that you know are symmetric functions are preserved. So symmetric functions in Xi are the same as symmetric functions in Xi and Xi plus one are the same as symmetric functions in Xi prime and Xi plus one prime. And so this is a bimodule. And again, you can think of the left action of R as sitting on the bottom of this picture and the right action of R on the top of this picture. And this is a bimodule. So any questions about this bimodule? OK. And so Eugene, I just wanted to say, I think it's a good point to say that the picture you drew with the X's index prime is just like a picture for intuition and that the actual definition of BI is the tensor product on the left. Yeah, exactly. Yeah, yeah. So this is, again, you can think of like the way, the most explicit way to think about this is either this tensor product or you have polynomials. So maybe I should write it down. OK, I wouldn't be able to write it down, but it's fine. So I have polynomials and X's and X primes, quotiented by this relation over here. And an exercise, if you haven't seen this thing before, is you can tensor bimodules over R. So BI is a bimodule. So I have a left action and then the right action. And this BI is also a bimodule. So I can tensor them over the middle R and get, again, a bimodule over R on the left and R on the right. And as tensor product of bimodules, this actually splits as BI plus BI. And this is one of the exercises, and you can check it in the exercise session. And another exercise we'll get familiar with these bimodules is that there are very interesting maps of bimodules from BI to R. So R is a bimodule over R itself. And there are maps of bimodules from BI to R and from R to BI. And these maps are explicitly constructed in the exercises. So you can check, and this is kind of the most explicit thing that you can compute about this. So given these maps, you can take the cones and form two-term complexes of R-R bimodules. And I will call them TI and TI inverse. So TI is the column of BI to R, and TI inverse is the column from R to BI. And again, like so far, everything is pretty formal. These are just complexes of bimodules. And oh, yeah, now it's right. So now the main theorem here, since we're discussing braids and braid closures, is a theorem of Rookia, who proved about 20 years ago, I guess, that TI and TI inverse satisfy braid relations up to homotopy. So you have this complex of R-R bimodules. You assign this to the generator of the braid group and to the inverse of this generator, and then you can tensor them over R. So all these tensor products are over R. And maybe I'll stop writing over R for a while, but all these things are tensor products of complexes of bimodules. So for example, here, this is a two-term complex. And so I tensor it with another two-term complex. So as written, it will be four-term complex. And again, it's a very interesting exercise to check that this TI and TI inverse is R. 
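Schematically, and with all grading shifts suppressed as in the lecture, the complexes just defined look like this.
\[
T_i = \big[\, B_i \longrightarrow R \,\big], \qquad T_i^{-1} = \big[\, R \longrightarrow B_i \,\big], \qquad B_i \otimes_R B_i \cong B_i \oplus B_i,
\]
and Rouquier's theorem says that, up to homotopy equivalence of complexes of R-bimodules,
\[
T_i \otimes_R T_i^{-1} \simeq R, \qquad T_i \otimes_R T_{i+1} \otimes_R T_i \simeq T_{i+1} \otimes_R T_i \otimes_R T_{i+1}, \qquad T_i \otimes_R T_j \simeq T_j \otimes_R T_i \ \ (|i-j| > 1).
\]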
And so maybe I'll do it over here and just give you an indication of what's going on. So if B goes to R, I tensor with the two-term complex where R goes to B. I get a four-term complex where I get B tensor R, which is B, here in the middle I have R plus B squared, and then here I have another B. So this is just the tensor product of these two things. And then I use this exercise over here, that B times B is B plus B. And then you can use it to simplify. So basically, this B squared cancels out with this B and this B. And again, if you want to get a flavor of what's going on with these bimodules and these complexes, please do this exercise, because this is really helpful. And checking braid relations is slightly more complicated, but not too much. And so as a consequence, for any braid on n strands, we get a complex of R-bimodules, which is well-defined up to homotopy. Question? OK. So we just tensor these things for generators — there is some echo for some reason. OK. So we tensor these two-term complexes for generators and we get a giant complex for the whole braid. So the number of terms in this complex will be two to the number of crossings of your braid. And then, because of these relations, it's well-defined up to homotopy equivalence. So this is called the Rouquier complex of the braid. And there are a couple of remarks which are not so important, but I want to say them anyway. May I ask a question again? Yes, please. OK. You can't hear me, right? So how do you define this T beta again? I don't quite understand. So I have a beta. I write it as a product of generators. Maybe I'll write it down. OK. And then I just replace it with T_{i_1} to the power plus minus, tensor and so on tensor T_{i_r} to the power plus minus. And so this is a giant complex. So this is a two-to-the-r term complex. And this tensor product is again a tensor product over R. And because the braid relations are now satisfied up to homotopy equivalence, this giant tensor product of complexes is still well-defined. And it doesn't matter how we write beta as a product of generators. So the result is a well-defined complex of R-bimodules up to homotopy equivalence. Does this braid relation also imply that you have some kind of a braid action on particular R-bimodules? Yes, you can say that you have a braid group action. But this is a monoidal category. So you can just tensor things. So you can tensor on the right, for example, with all this. Whenever you have a complex of bimodules, you can tensor it on the right with arbitrary R-bimodules. And with some care, you get a braid group action on R-modules on the left or on the right. That's right. But yeah, this is the key thing, that you can define these T_i and they satisfy braid relations. So you do have a braid group action in this sense. OK. Any other questions? All right. So a couple of remarks, just to say some words. And again, we don't need these details to keep going. And I'll give an example of this construction in a minute. So just stay put. And then I'll explain how to actually do some things in this example. So the B_i and their tensor products, they generate a very specific category, which is known as the category of Soergel bimodules. So formally speaking, how do you define Soergel bimodules? You can take all possible B_i's, all possible tensor products of B_i's, and all possible direct summands in tensor products, with the Karoubi completion. And it turns out that this category is much smaller and much more interesting than just all R-R-bimodules.
And it categorifies the Hecke algebra, and it has lots of other beautiful properties. And lots of people study this. And so you can definitely say that T beta is a complex of R-bimodules, but it's also a complex of Soergel bimodules. So if you like geometric representation theory, the best way to think about it is to say that this is really a complex of Soergel bimodules. But we won't really need it, I think, for most of this course. So I won't discuss this, because I don't want to introduce too much notation. And another thing that lots of people like to ask is that this can be defined for any type, for any Coxeter group, in fact. So the same definition just works for any Coxeter group acting on some representation by reflections, because you just define this B_i and you proceed. So you have s_i, the generator of a Coxeter group, and you proceed this way. You have a beautiful theory of Soergel bimodules for arbitrary Coxeter groups developed by Soergel, Elias, Williamson, and many other people. And you get the action of the corresponding braid group by these Rouquier complexes. So this generalizes, and they satisfy braid relations of the corresponding type, corresponding to your Coxeter group. And this is all very nicely well behaved in all types. OK. And so the next piece of information is how to close the braid. So we defined an interesting braid invariant. This is a complicated thing. It's not a number. It's not a vector space. It's a complex of bimodules. And we need to explain how to close the braid. And to close the braid, we use the notion of Hochschild homology. And I mean, many people here are much smarter than me, so of course you know what Hochschild homology is. So roughly speaking, you resolve your bimodule by free R-R-bimodules and identify x_i with x_i prime in the resolution. And then you take homology. But I won't spend too much time on this definition. And in fact, I'll use a special case of it, which I'll say maybe right now, actually: one special case of Hochschild homology, which is really easy to explain for everyone, is just this. So you have HH upper 0. This is Hochschild cohomology; it's dual to Hochschild homology in this case. So this is just Hom of bimodules from R, from the diagonal bimodule, to X. So if some of you don't know what Hochschild homology or cohomology is, you can always think about just this HH^0. So HH^0 of X is Hom from R to X. And this is well-defined for any bimodule. So this is Hom in the category of bimodules. And we will see examples of this Hom very soon. And so starting from a braid, what do we actually do? So we start from a braid. We assign this two-term complex to every crossing. We assign the product of these two-term complexes to any braid. And this will be this T beta. And this is a complex of R-bimodules. And then you either apply Hochschild homology, if you like it, or if you don't know what Hochschild homology is, just apply this Hom from R to every term of this complex. And you have to do it to every term. So there are lots of terms in this complex, do it term-wise. And then you will get a complex of R-modules, because if X was an R-R-bimodule, Hom from R to X is an R-module. And the resulting complex of R-modules is essentially your invariant. Or more precisely, you take homology of the resulting complex of R-modules. And this is what is known as HHH of beta. So you first take HH, the Hochschild homology, of T beta, and then you take homology of that thing.
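So the whole recipe, with gradings suppressed, can be condensed into two lines; here Hom is taken in the category of R-R-bimodules and applied term by term to the complex.
\[
\mathrm{HH}^0(M) = \mathrm{Hom}_{R\text{-}R}(R, M), \qquad T_\beta = T_{i_1}^{\pm 1} \otimes_R \cdots \otimes_R T_{i_r}^{\pm 1},
\]
\[
\mathrm{HHH}(\beta) = H_*\big( \mathrm{HH}(T_\beta) \big), \qquad \text{and in Hochschild degree zero:} \quad \mathrm{HHH}^{a=0}(\beta) = H_*\big( \mathrm{Hom}_{R\text{-}R}(R, T_\beta) \big).
\]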
So this is a pretty involved construction, and it has two very different steps. First is constructing this complex, then applying coaxial homology, and then if you want to take homology of that. And then mysteriously, question? Is the complex itself invariant or invariant or not? In what sense? I mean, as our Modules know, because for example, the number of variables is the number of strands in your braid. And so it can't be invariant. But if you just regard it as a complex of vector spaces, yeah, sure, because it's like up to homo that is the same as its homology, so it doesn't matter. But as a complex of our Modules, unfortunately, it's not invariant. You can have some remains of this structure. So you have slightly more than just the complex of vector spaces. And we'll talk about this probably on Wednesday. But for now, we just forget all the structure, take homology, and regard this homology as the vector space. Is it clear how this complex changes some of these conjugations as stabilization operations? Yes, yes, yes, yes. So all this is known. So in particular, Ethereum of Kavanoff and Rosansky says that if you just take this homology of HH, this is a Lincoln variant. So this is really invariant under conjugation and stabilization. And so it doesn't, so up to maybe some grading shift, which I will suppress, but this is really, really Lincoln variant. If you want to understand this as a complex of our Modules, it's still invariant under conjugation. That's true because basically, Horses with homology is some kind of categorical trace. And so it doesn't matter if you take homology of X tensor, Y, Y tensor, X. And you can see what happens with stabilization concretely. And maybe I won't see it right now, but the right explicit form also which say what happens if you have some complex of by Modules, then you add an extra strand at a crossing and what happens to HHH and that. So that is understood. That's right. And And For sure, homology is like some factor from the direct category of by Modules to the direct category of our Modules or From homotopic categories. So I never go to the direct category of our by Modules. I always work with homotopic category by Modules. And there are some subtle technical reasons why I want to do this. But I guess one reason is that the category of circuit by Modules is not really an Ibelian, just additive and you have to be very careful was talking about direct category of that. But like again, practically, what happens is that you just have a complex of by Modules, it leaves in the homotopic category and then you apply this HHH, which is the native fun to do every term. And that's it. Okay, and so I mean, so far, this is just a vector space. This vector space is triply graded and what are the gradings? So the first grading is an internal grading that all by Modules. So every by Modules BI is naturally graded if you scroll up. Okay. Issues here. So all these equations anyway were homogeneous. Oh my goodness. I'm sorry. Okay, sorry about that. So all this equation that we had before were homogeneous. Yes, Xi plus Xi plus one is XI prime plus Xi plus prime and so on. And so all these by Modules are naturally graded. And we assume that the degree of Xi is equal to two. So this is the capital Q grading on this thing. Then because we are talking about complexes, we have a homological grading and this will be denoted by QBILT. And then because we take HOSCHL homology for every term, this is what is called as A grading. 
And so just to repeat, so if we want to, if we don't like higher HOSCHL homologies because they're kind of harder to think about, you can just think about this home from R to X and this corresponds to picking one specific A degree in this triply graded homology. So this is A degree zero. And we will often just restrict to this part because it's so much easier to explain what's going on. You don't need to do this HOSCHL homology business. And from all the phenomenon that I will explain actually this thing is enough. So you can talk about other HHI and I'll mention them from time to time. But for most interesting phenomenon, for most interesting computations, this is already very interesting playground to think. This is again just specific A degree. And just as a caution, so this part is actually invariant under conjugation and invariant under polystabilization, but it's not invariant under negative stabilization because negative stabilization would kind of shift everything A degree up and you will lose this degree zero piece, but again, for many purposes, this is actually enough. And so this is it. And so this is this recipe again, that we start from a braid, build this tensor product of complexes, compute HOSCHL homology or apply this home from R to blah and then compute homology. And it's been more than half an hour and I haven't given you any examples really. So let me give you an example and work it out in detail. So the braid that I'm talking about is this two strand braid. So n is equal to two. And we have two crossings and we assume that both of these are positive. So actually, if you think about the link, it would close up to the Hopf link when I have two circles, which are linked like this. So to a single crossing, we assign this complex T, which is two term complex from B to R. So this is just a single crossing. So if you want to define something for this braid, you need to tensor this complex twice. So you have the complex from B to R, you have another complex from B to R, you tensor them over R and you get a four term complex where I get B square on the left, two copies of B in the middle and then R on the right. And I'm ignoring all the grading shifts because that would be too much. Then I'm using the rule that B square is B plus B. So this is happening here. And we have two B plus B, B plus B here and B plus B here with slightly different gradings. And again, I'm kind of cheating here. And if you do some consolations, you will see that actually you will be with one copy of B here and one copy of B in the middle and R in the right. And this is really the minimal complex for T square. So this is again a complex, very explicit complex of RR by modules that you assign to this braid. And then I want to compute the closed-debrates. So as we discussed, to close the braid, we just apply home from R to every term. So home from R to R is R in bimodules. And in fact, home from R to B is also R. So this is generated by that map from R to B that I discussed a little while back. And so we'll get a complex which is R goes to R goes to R. And this is again, home from R to B is R, home from R to B is R, and home from R to R is R. And then we need to compute differentials. And I was kind of sloppy with the differentials here. But in fact, you can compute them. And this differential will be 0. And this differential will be x1 minus x2. So R is, we call it R, it's just polynomials in x1, x2. And so finally, we get some answers. 
So it was this long abstract discussions, but this is a very concrete complex. And I'm sure that anyone here can compute its homology. So the result will be what? It will be R in degree 2. And then you will have R mod x1 minus x2 in degree 0. And this homology is interesting in particular, it's infinite dimensional because you have this clop of R. And it has an interesting module structure over R, which is actually in this case, link invariant. And so this is an example of again, HHH, H is equal to 0 of this link network. And if you have a two-strand braid, you can actually do more or less the same computation. So if you have a positive two-strand braid, you have a power of t, positive power of t. So you just keep doing this thing repeatedly, tensor them, simplify, use this rule that v squared is v plus b. And it's actually not that bad at all. So in examples, in exercises, there are some very explicit examples how to compute things and how to do computations on those strands. Because there is only one copy of b, and it's very easy to use this rule and compute everything there. So I have some explicit modules over C of x1 x2. And that's that. And for negative, powers of t is the same thing. But the problem is. Unipol is an example. I just have a question. You view it as a vector space homology, right? So when you take homology, when you take the virtual homology, it's just a vector space, right? I mean, you can say it's our module. So hh0 of b will be r, for example. So this thing is hh r0 of b. And I can regard this as r module. So this is a complex of r modules. And in this case, it actually makes perfect sense to think about this as a complex of r modules. And so we can just take homology of that as still r module, which is written here. I think it should be said that when you take r, you think of it as a vector space. It's important to think of it as a graded vector space. That's right. So this is, in this case, because we are 8 degree, we have still 2 graded. So this, again, the degree q of x, y is equal to 2. And then these two guys would live in different t degrees because they correspond to this homological degree. And I would say that capital C. I can't leave it. But now. Say it again. And this corresponds to t is equal to 2, or minus 2. I mean, it really depends on shifts and conventions that I want to talk. I don't want to talk about, but you will have this r module x1, minus x2 here. And you have this r here and the real in different t degrees. And each of them is a graded module because r is graded itself. OK. Well, all of that happens in the b module category, right? So r is also b module. No, these are b modules. So when we close the braid, when we apply this thing, we don't have b module structuring more. So these are just r modules. So maybe I'll write it down. So these are r, by module. And these guys are r modules. And again, in this example, you can actually find lots of interesting spaces where this is like a querying homologous subspace or homologous in shift on subspace. And we'll talk about this. So this example is kind of key to understanding what's going on. I would say. But this is really a complex of r modules. We kill the b module structure when we take the trace, when we take the whole homology. And so, but we still have well-defined r module structure. We have these two different segments. OK. Any other questions? OK. Anyway, so this is roughly how this thing works. 
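For the record, here is the Hopf link computation just performed, written out; degree shifts are suppressed, as in the lecture.
\[
T^2 \;\rightsquigarrow\; \big[\, B \to B \to R \,\big] \ (\text{minimal complex}), \qquad \mathrm{Hom}_{R\text{-}R}(R,-) \ \text{termwise} \;\rightsquigarrow\; \big[\, R \xrightarrow{\ 0\ } R \xrightarrow{\ x_1 - x_2\ } R \,\big],
\]
\[
\mathrm{HHH}^{a=0}\big(\widehat{\sigma_1^2}\big) \;\cong\; R \,\oplus\, R/(x_1 - x_2), \qquad R = \mathbb{C}[x_1, x_2],
\]
with the free summand R sitting in homological degree 2 and R/(x_1 - x_2) in degree 0.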
And the problem, for a really long time — so all this was known back to the work of Khovanov and Rozansky in the 2000s. And then for about 10 years, there was a huge roadblock, because nobody knew how to compute it beyond two strands, more or less. And the key problem, of course, is that this complex grows exponentially in the number of crossings. And even if you try to compute it for three strands, you'll have an exponentially large complex of bimodules for the polynomial ring in three variables. Then you need to take Hochschild homology. And then you need to take homology of that. It's a really, really messy and really complicated structure. And it's like complicated commutative algebra, after all, with all these x's. And so people really didn't know how to deal with that. But in principle a computer could do it in a kind of finite amount of time? Yeah, that's right. And you can program it, but the programs would break pretty fast actually, because, I don't know if you have... Exponential. Exponential. I mean, it's exponential in the number of crossings. So if you have, I don't know, T(3,3), you would have something like six crossings. You already have two to the six terms in the complex. And then you need to run all this machinery with HH. And this is already a lot, actually. This is already 64 bimodules over the polynomial ring in three variables. So we can do this, but the computer stops pretty fast actually. And so people needed some new computational techniques. And in the remaining time, I want to outline some very vague idea of how this might work. So the key theorem, obtained in several papers by Ben Elias, Matt Hogancamp, and Anton Mellit, is that this triply graded homology for all positive torus links is actually computable. So this is, first of all, supported in even homological degrees. And this was so in this example, right? So we have some terms in degree zero and some terms in degree two. And it turns out that for any positive torus link, the homology is always supported in even homological degrees. And moreover, they give an explicit recursion computing the Poincaré polynomials of this homology. And so kind of the state of the art is the recent paper from a couple of years ago of Hogancamp and Mellit, where they give very, very explicit recursions for all those things. And probably I have to say what a torus link is, because maybe not so many people have seen it before. So torus links. We'll see them a lot. So T(k,n). So we have, for example, an n strand braid, which looks like this. And then you raise it to the power k. And then it closes. And torus links are kind of the easiest examples of things. And what these people tell us is that you can actually compute this thing. And one interesting example, which I will explain more, I guess, on Wednesday again, is that you have the (n, n+1) torus knot. So you have this braid on n strands and raise it to the power n+1. This gives the so-called q,t-Catalan numbers, which are related to lots of interesting things in the algebraic geometry of the Hilbert scheme of points on the plane and the algebraic combinatorics of Macdonald polynomials. And this confirmed lots of conjectures of myself and Neguț, Alexei Oblomkov, Jake Rasmussen, and Vivek Shende, and many others. So there were lots of conjectures, and again, it was very hard to compute this, but these people made a real breakthrough with computing it and finding some recursive ways to do this. And this is actually also an example of a torus link, because this is just the (2,2) torus link.
And this thing on two strands and raise it into power two. And so how did they actually do it? So I just want to give you a very kind of rough idea. And the idea is you have to enlarge your class of things, and you really need to consider some complexes of Zorgilbimonials, or complexes of R-bimonials. Actually, that's enough. So a theorem of Hogan-Camp is that there exists a complex of Zorgilbimonials, KN, with the following properties. So first of all, if you add a crossing to this thing on the top or on the bottom, you will get the same thing, KN. So it eats crossings on the top. It eats crossings on the bottom. If you have this guy and you make this kind of partial closure, so you just take one strand, you close it up, you will have a previous KN with an extra factor of t to the n plus a. Here I'm using small t and small a. I'll explain what it is in a second. And you should think of the second rule really as abbreviation for the following thing. So you have KN plus one, you add an arbitrary braid or arbitrary complex of things on the bottom, let's call it X, and then you close up. So then you close up this extra strand without touching X, and you close up all the other strands over here. And this gives you just KN and just X with this extra factor t to the n plus a. So this is again some kind of mark of move for this. In some sense, this is mark of move for KN. What happens if you add a strand, you don't even add a crossing, but you just close it up. This is mark of move for KN. And the most interesting property is that if you have KN and you add an extra strand and wrap it around KN, then in fact you can resolve it as a two-term complex where you have KN plus one and KN over here with some shifts. And there is some interesting differential which potentially can be written down. And it's important that there is some chain map here such that it's cone, so the cone of this map is coming to be queried into the complex on the left. So that's as much as what we want to say. And if you have K1 with just one strand, then this is just nothing. So you can erase K1. And here we're using AQT, which are the grading sheets which are related to A, Q, and T. So these are the standard gradings that I defined before. And this is just some change variables. So it's not so important. What is important is that all homological degree shifts are even. So we're saying that we can resolve this thing by this thing and this thing with even homological grading shifts. And now the game is that you can try to apply these rules and say, well, you have a picture like this, you resolve it by something with this thing and something with this thing. Maybe you apply it again and resolve this thing by some smaller things. And at all times, all the smaller things will appear with even grading shifts. And if this recursion stops, this means that we resolved our complex with a bunch of easy stuff where we know the homology. All these things appear with grading shifts and there are some crazy differentials between them which we don't really understand at all. But because the homology is even, the differentials must be zero. And so the associated spectral sequence actually collapses immediately. And there are no differentials. And so homology of this guy is really equal to homology of this guy plus homology of this guy with explicit even degree grading shifts. 
Maybe said differently, if you know that the homology of this guy is even and if you know that the homology of this guy is even, then the homology of this thing is given by a long exact sequence between the homologies of this and this. And we're saying that the differentials between them, the connecting differential, is zero, because again everything is even. So kind of the most naive version of this is that whenever you have a topological space paved by even dimensional cells, you know that the homology is just the number of cells. And this is kind of the same thing. So this is a combinatorial technique. It works. So maybe for experts, I'd like to say that this K_n is a so-called compact categorical Jones-Wenzl projector. So if people have seen Hecke algebras, there is an element of the Hecke algebra which looks like this, which eats crossings, which behaves like this, and which is a kind of an eigenvector for this thing, for this operation of multiplication by this Jucys-Murphy element. But if you haven't seen this, well, there are some combinatorial rules. And I think I actually have — so I want to explain what this thing is, and then I want to compute, maybe I'll actually want to compute first, because I do have time for this, and I hope it won't crash. So suppose that I want to compute, again, this guy in a different fashion using this projector, using this game. So I can say that this is actually the same thing as K_1 over here, because we had the rule that one strand is the same as K_1. And then we see that this thing with K_1 is actually this picture. So I have an extra strand wrapping around this strand with K_1. And so I can replace it by this complex. I think this will be 2 minus 1. Here we'll get K_2. And here I'll have something like q times K_1. And again, I don't know what these things are. So this recursive method just says that there are some complexes in some weird category which behave like this. And then when we close up, we know that the closure of this K_2 is the same as t plus a times the closure of K_1. And this is the same as t plus a times the invariant of just the unknot, which we can compute. The invariant of the second guy is just the invariant of this thing, which we can also compute. So this will be essentially just polynomials in x, or if we have odd variables, we have some x and theta. And we have this thing. And what I'm saying is that the homology of this thing is supported in even degrees, because we just computed this. And the homology of this thing is also in even degrees. And in this case, by the argument that I tried to explain, it doesn't matter what the differential between these things is. And the homology of the total thing, because all the grading shifts are even, is just the homology of this plus the homology of this. And in the example sheet, in the exercise sheet, there are some explicit problems and explicit answers for how to compute this. So this is the idea. And again, in general, what they showed is that this method works. So you can always find some recursive things, and you can always find these pieces of K_n, and keep growing from K_1 to K_2 to K_3 and so on, to compute this homology for all positive torus links: develop a recursion and compute it, at least as a triply graded vector space. So this method won't give you anything as an R-module, but as a triply graded vector space, it works perfectly. And just another example, in a slightly different direction, which might be helpful, is this picture of K_2, which is this complex R to B to B to R. So K_2 can be written explicitly. So there are some explicit maps of bimodules.
Again, this is a complex of R-R bimodules. And you can write this complex either as a complex where here you have a negative crossing and here you have a positive crossing, and there is some chain map between them and you take the cone of that map; or, you have R and you have this B to B to R, and as we discussed, B to B to R corresponds to this braid with two crossings over here. So you can think of it as a cone of a map from the identity to this B to B to R. And this can be used to show that actually this eats crossings. And this complex has all the nice properties that we want. And it's a pretty explicit complex. And I haven't actually seen this in other geometric settings, so it would be very nice to see it very explicitly. But this is it. And as I said, in the exercises and Q&A sessions, we can explain how to use this theorem to compute some examples. And maybe one last thing which I want to say is some general properties. So before we go to all these general things and structures and homological operations, some general facts. So first of all, if beta closes to a knot, then this HHH of beta has finite rank. So the HHH of beta is the following: you can think of it as a finite rank free module over the polynomial ring in one variable, or you can think of it as just a finite dimensional vector space when you quotient by this action of C[x]. So more or less, we can take this reduced homology. This is just a finite dimensional vector space. And so we can say that this HHH of beta is just HHH reduced tensor with polynomials in x. Now, if beta closes to a link with several components, HHH of beta is not free over R, but it's still free over this smaller polynomial ring. And I will explain it properly next time. But the point is that we had this example where HHH of T squared is R plus R mod x1 minus x2. So this is definitely not free over R, but it is free over polynomials in x1 plus x2. So sometimes it's useful to consider just a smaller subring and restrict to that. It's always a free module. So you get some piece of structure from there. But in general, if you want a finite dimensional vector space, you can do this, but it doesn't need to be a finite dimensional vector space. And we will see a lot of interesting examples next time. OK, so sorry for all the technical issues, and thanks a lot. So that's it for today. The last line from item one, it should be C of x1 plus... Yes. Yes. Does anyone have questions? We seem to have one online. Can you construct a link invariant from K_n? And that is related to so-called colored homology. But the theorem says that there exists K_n. So this is one example. But in principle, there is some construction of K_n from this theorem. One way to construct it is to say that this K_{n+1}, you can express — so you have an exact triangle of this guy, K_{n+1}, and this guy with K_n. And then you can reverse the arrow and say that this is actually a cone of the connecting map between this guy and this guy. And then you have to prove all these properties for it. And that's how this goes. But a priori, K_n is just some complex of R-bimodules. And yeah, so in principle, yeah, you can use it to build colored homology if you want, if you know what colored homology is. But maybe it's not so important. So if I understand correctly, can you scroll down just a little bit to the next relation? Yes. So this relation, with the strand that goes around the K_n, is what allows you to compute the HHH of a product of these kinds of loops, right? That's right. Basically, whenever you have a torus knot, you have a lot of loops.
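Summarizing the general facts just stated, together with the earlier Hopf link example, in formulas; grading conventions are suppressed, and the "smaller polynomial ring" is written as the subring generated by the sum of the variables, as in the two-strand example above.
\[
\beta \ \text{closes to a knot:} \qquad \mathrm{HHH}(\beta) \;\cong\; \overline{\mathrm{HHH}}(\beta) \otimes \mathbb{C}[x], \qquad \dim \overline{\mathrm{HHH}}(\beta) < \infty,
\]
\[
\beta \ \text{closes to a link:} \qquad \mathrm{HHH}(\beta) \ \text{free over } \mathbb{C}[x_1 + \cdots + x_n] \ \text{but not over } R, \quad \text{e.g. } \mathrm{HHH}^{a=0}\big(\widehat{\sigma_1^2}\big) = R \oplus R/(x_1 - x_2).
\]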
And somehow you remove one loop at a time. And what you get is not a smaller torus knot, but it's kind of a piece of inductive structure, and you get kind of fewer and fewer loops and less and less wrapping around. And if you have a cleverly organized induction, you can make sure that this recursion converges to something reasonable. That's right. But you remove this loop, and this allows you to simplify things. You have definitely fewer crossings on the right than on the left. So originally, this method should work not only for torus links, but for positive pure braids, right? Or you don't know. No. I mean, I wouldn't be able to say it works for positive pure braids. And I don't think it works. But it's a very good question when this method actually works, so either a larger class of braids where we can compute things. Whenever we can apply this method, we get something where all homologies are supported in even homological degrees. And there are examples of positive pure braids where the homology is not even. So that's the first kind of test. But it's a very interesting question, which many people are interested in: what is the nicest class of braids where the homology is supported in even degrees? And torus links are one example, but is it true for algebraic braids? Is it true for something else? That's an open question. I have a question. Is there an axiomatic approach to this Khovanov-Rozansky homology? I wouldn't say so. I mean, it's again — this recursion might work or might not work. I mean, it works in general if you know all these differentials over here, and kind of differentials where this is a part of a bigger picture. So in this sense, it always works. The question is a bit different. About the Khovanov-Rozansky homology, is there an axiomatic approach to it somehow? Like, you define it as the Hochschild homology of a certain complex, but can you define it axiomatically? Oh, no, I think the answer is just no. Like, again, you have a lot of properties, but I don't think there is a full set of properties which characterize it completely. And does one expect this to distinguish all the knots, or what should — I mean, if two knots have the same Khovanov-Rozansky homology, what's the expectation for the kernel of this map? I don't know. I mean, I think they detect more than just the polynomial. So I didn't say this, but first of all, you can extract the HOMFLY polynomial from this homology by taking Euler characteristics. And there are examples of knots where the HOMFLY polynomial is the same, but this homology is not the same. So in some sense, it's a better invariant. But whether this is an interesting thing for distinguishing knots, I don't know. And again, the main reason is that it's really hard to compute. So there are some examples where this is computed, but not so many. But it's a part of a topological quantum field theory. For example, if you have a surface connecting two knots, a cobordism, you expect a map on this homology, and that's very interesting. You can get a lot of topological information from there. So is there an example of two non-isomorphic knots which have the same homology? I don't know. I'm not sure. I mean, for Khovanov homology, which is related but slightly different, there are examples; for this one, I don't know if anyone computed it, but I'm sure there are. But I don't know if it's known. Excellent. Any more questions? If not, let's thank you again.
|
Khovanov and Rozansky defined a link homology theory which categorifies the HOMFLY-PT polynomial. This homology is relatively easy to define, but notoriously hard to compute. I will discuss recent breakthroughs in understanding and computing Khovanov-Rozansky homology, focusing on connections to the algebraic geometry of Hilbert schemes of points, affine Springer fibers and braid varieties.
|
10.5446/54804 (DOI)
|
Today we're going to be studying the topology of Y, by which — I mean, the simplest thing you could imagine that I mean by that is just studying the cohomology of Y. And that's almost what we're going to be interested in, but something slightly different: a similar vector space of the same dimension, but a different vector space. So I'm going to explain what I mean by that in a second. But the first thing that I want to do is bring back one piece of structure I didn't talk that much about last time, which was this Hamiltonian torus action. So recall that I wanted an action of a torus preserving the symplectic form, acting on both X and Y compatibly, and I demanded that the fixed point set for this torus action is finite. Maybe I didn't emphasize this last time quite enough, but if we look downstairs on X, the fixed point set of the torus action on X is just a single point. It's the same fixed point as for the C star action, this conical C star. So remember we already had another existing C star, the conical C star, which scales everything down. And this Hamiltonian torus commutes with that one, and so they'll have the same fixed points. Upstairs on Y, the conical torus has maybe not just finitely many fixed points but a big fixed point set, but this torus action will just have finitely many fixed points. And then I'm going to define the attracting sets for this torus action, so Y plus. So this is the set of all points in Y — oh, sorry, one more piece of data, this is what I meant to say. Inside this Hamiltonian torus, I'm going to choose a generic C star, choose and fix the C star. And this choice of C star should actually be thought of as part of the data. We'll see later that there is some flexibility of course, just like the choice of resolution. So with this choice of C star, I can consider attracting sets for this C star inside the torus. So the C star is chosen generically, so its fixed point set is the same as for the full torus. And then the attracting set is just defined to be the set of points in Y such that the limit for this torus action, for the, sorry, for the C star action, exists. I mean, if the limit exists, it must definitely be in this fixed point set Y^T. So that's Y plus. And similarly, we have X plus with the same definition. So just as an example, recall our favorite example of a symplectic resolution was this cone, the nilpotent cone of sl_2, being resolved by the cotangent bundle of P1. And in this case, maybe I'll just draw — okay, well, inside here, maybe I'll draw the zero section, this P1, which is drawn here. Now, the way that the torus action works here, it's kind of in this direction, it's attracting you down. So this is this C star action. I mean, in this case, this torus, the torus that acts effectively on P1 — the Hamiltonian torus — is just C star. So it's sort of going down in this direction and up in this direction, so that the attracting set X plus is just this copy of A1 here. On the other hand, Y plus, the attracting set, is the union of this P1 and this — the preimage of this A1 — I mean, it's the preimage of this A1, but it's the P1 and this part. And then the two fixed points are here and here, of course, zero and infinity. So the blue stuff is the plus locus, the green stuff is the fixed points. In general, there's a morphism from Y plus to X. Okay. So what are we going to do with this Y plus? Well, we're going to introduce one thing.
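In symbols, the attracting sets just described are the following; this only records the definition and the T^*P^1 example from above.
\[
Y^+ = \{\, y \in Y \ :\ \lim_{t \to 0} t \cdot y \ \text{exists} \,\}, \qquad X^+ = \{\, x \in X \ :\ \lim_{t \to 0} t \cdot x \ \text{exists} \,\},
\]
and in the example Y = T^*\mathbb{P}^1 \to X = \mathcal{N}_{\mathfrak{sl}_2}: the fixed locus Y^T consists of the two points 0, \infty on the zero section, X^+ \cong \mathbb{A}^1 is the attracting line downstairs, and Y^+ is the union of the zero section \mathbb{P}^1 with the preimage of that \mathbb{A}^1.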
It's called the hyperbolic stalk, and despite the slightly scary-sounding name it actually makes our life much simpler and is not complicated. So what is it? It's a functor, call it capital Phi, from the derived category of constructible sheaves on X to the category of vector spaces, and it goes like this: you take a sheaf, and you don't just compute its cohomology; you first take the shriek pullback to X plus and then compute the cohomology along X plus. Here i is the inclusion of X plus into X. I don't think this has been fully internalized in the literature, but probably the best treatment of it comes from Nakajima's PCMI lecture notes from five years ago. It can be defined in a much more general context; in our situation the fixed-point set is just a single point, which is why I call it the hyperbolic stalk. If the fixed-point set were not a single point but some bigger locus, I might call it hyperbolic localization, but here it's just a single point.

The main fact is that if F is perverse, then Phi of F is concentrated in a single degree. Which degree? It works out to be 2d, where, let me introduce this, 2d is the complex dimension of Y. Y is of course even-dimensional, since it's symplectic, and has dimension 2d, and d is then the dimension of Y plus. Maybe I should have said this a second ago: Y plus and X plus are half-dimensional compared to Y and X. The reason is easiest to see for Y plus: it's a Lagrangian subvariety, because the torus action preserves the symplectic form, so the positive directions for the C* and the negative directions are paired under the symplectic form, and each takes up half the dimension. So Y plus is Lagrangian, in particular half-dimensional, and X plus is half-dimensional too. And then we have this fact: when you apply the hyperbolic stalk functor to a perverse sheaf, the result is concentrated in a single degree.

Maybe I should also say: not all constructible sheaves. We have a preferred stratification on X, namely the one we'll discuss in a second. On X, I mentioned this last time but let me reiterate it, we have finitely many symplectic leaves X_alpha, alpha in I, and they give a stratification. When I say constructible sheaves, I mean constructible with respect to this stratification.

So why this hyperbolic stalk? You'll see in a second. We're going to consider the decomposition theorem. We have this map pi from Y to X, so we can take the pushforward of the constant sheaf on Y, shifted by 2d to make it perverse. It's constructible with respect to the stratification by symplectic leaves; that's a result of Kaledin. It therefore decomposes as a direct sum over alpha of the IC sheaves of the leaf closures, tensored with the top homology of the fibres: here F_alpha is pi inverse of a point x_alpha in the stratum X_alpha. After this, one would often take this decomposition theorem and push forward to a point, to reach the usual decomposition of the cohomology of Y.
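Written out, the functor and the decomposition just described look as follows (an editorial reconstruction from the spoken description; i denotes the inclusion of X^+ into X and F_alpha = pi^{-1}(x_alpha) for a point x_alpha in the leaf X_alpha):

\Phi : D^{b}_{c}(X) \to \mathrm{Vect}, \qquad \Phi(F) = H^{*}\bigl(X^{+},\, i^{!} F\bigr),

\pi_{*}\, \mathbb{C}_{Y}[2d] \;\cong\; \bigoplus_{\alpha \in I} \mathrm{IC}\bigl(\overline{X_{\alpha}}\bigr) \otimes H_{\mathrm{top}}\bigl(F_{\alpha}\bigr).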
But we'll do something different: we'll take the decomposition theorem and then apply the hyperbolic stalk functor. It's a functor, and the decomposition is an equation in the category of constructible sheaves on X, so I'm free to apply my functor, which goes from constructible sheaves on X to vector spaces. After I do that, the left-hand side turns into the homology of the attracting set, and in fact the homology in a single degree, namely the top degree 2d. So we get the top homology of Y plus. On the right-hand side we had these IC sheaves, and the good thing about the hyperbolic stalk is that it gets rid of the IC sheaves: rather than turning them into the intersection homology of X_alpha, it turns them into the top homology of the attracting set inside the closure, X_alpha bar plus. Here X_alpha bar plus just means X plus intersected with X_alpha bar. And the vector space H_top of F_alpha just comes along for the ride. So we get this version of the decomposition. And since everything in sight involves only top homology, it's only concerned with the irreducible components of these varieties. I'm just going to abbreviate H of something to mean H_top. Then we can write the equation more simply: H of Y plus equals the direct sum over alpha of H of X_alpha bar plus, tensor H of F_alpha.

Let's look at an example, since this maybe seems a little strange or abstract. Take the cotangent bundle of the Grassmannian Gr(2,4); this is an example rich enough that we see a bit of the structure. It's a resolution of the 4-by-4 square-zero matrices. That's my X, that's my Y. What are my strata? Actually, first, what's X plus? X plus is pretty straightforward: it's the matrices A in X that are upper triangular, because the torus acts by conjugation, so for a generic C* the attracting set is just the upper-triangular matrices. The strata are as follows: X_0, which is just the zero matrix; X_1, the matrices with A squared zero and rank one; and X_2, the matrices with A squared zero and rank two. So picture the space X, with a locus where the rank of A equals one, and somewhere inside it the point 0.

According to the equation above, we should examine the fibres and the attracting sets in each stratum. Let's start with the attracting sets and then do the fibres. X_0 plus is still just zero. X_1 bar plus is the set of upper-triangular A with A squared zero and rank A at most one, and this variety has three irreducible components: we have 4-by-4 strictly upper-triangular matrices, with free entries in the upper part, but we want the rank to be at most one and the square to be zero, and if you work that out, there are three possibilities, so three irreducible components. And X_2 bar plus, which is the same as X plus, has two irreducible components; maybe I'll leave it as an exercise to figure out what those two components are.
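For reference, the decomposition being applied in this example and from now on is (editorial transcription, with H denoting top Borel-Moore homology):

H\bigl(Y^{+}\bigr) \;\cong\; \bigoplus_{\alpha \in I} H\bigl(\overline{X_{\alpha}}^{\,+}\bigr) \otimes H\bigl(F_{\alpha}\bigr),
\qquad \overline{X_{\alpha}}^{\,+} = \overline{X_{\alpha}} \cap X^{+}, \quad F_{\alpha} = \pi^{-1}(x_{\alpha}).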
And actually in all three cases the fibres are irreducible: here the fibre is Gr(2,4) itself, here the fibre is a P^1, and here the fibre is a point. So if we look at our equation, it says: the homology of Y plus is H of X_0 bar plus tensor H of F_0, plus H of X_1 bar plus tensor H of F_1, plus H of X_2 bar plus tensor H of F_2. In this case the dimensions are one times one, three times one and two times one, so it works out to six.

Okay, great. So the main object of study will be this homology of Y plus and this decomposition of it. One reason to focus on the homology of Y plus is that it's going to play a big role in symplectic duality. But another reason is that it admits a categorification.

Yeah, there is a question in the Q&A: is it the top homology of Y plus or the total homology? Sorry, that's what I got confused by a second ago; let me just stop for a second and straighten this out. First of all, whenever I say homology I mean Borel-Moore homology. And I believe the fact above, that the hyperbolic stalk of a perverse sheaf is concentrated in a single degree, actually shows that the top Borel-Moore homology of Y plus coincides with its total Borel-Moore homology. If somebody thinks I'm wrong they can point it out, but I'm now fairly convinced it's right. For example, for T*P^1 there is no degree-zero class: a point doesn't contribute anything to Borel-Moore homology of the attracting set, because it can go off to infinity. Okay, no one says I'm wrong, so I'll assume I'm right. So it can be read as both the top homology and the total homology, because they agree; that's the answer to the question.

Okay. So what do I mean when I say this admits a categorification? I mentioned last time that we have these algebras A quantizing X, or sheaves of algebras quantizing Y; let's just stick with a single algebra A, a quantization of X. Actually A depends on a parameter which lives in H^2, so let me fix one and call it A_theta. The action of the torus on X gives an action of the torus on A_theta, and the chosen C* inside the torus then gives a C* action on the algebra A_theta. An action of C* on a vector space is the same thing as a Z-grading on that vector space, so we get a Z-grading on A_theta.
Just a quick example: when Y is the cotangent bundle of the flag variety, this torus action is the usual torus action on the universal enveloping algebra. A_theta is the universal enveloping algebra of sl_n modulo a central character, and the grading, well, it depends on the choice of C*, but if we make the natural choice, namely rho, then the grading just gives the E_i's, the usual Chevalley generators, degree one, and the F_i's degree minus one. That kind of grading.

Inside A_theta we have the positive-degree part, and we make the following definition: category O for A_theta is the category of finitely generated A_theta-modules on which A_theta plus acts locally nilpotently. This is supposed to be a generalization of the Bernstein-Gelfand-Gelfand category O for the universal enveloping algebra of a semisimple Lie algebra, and it was the insight of Braden, Licata, Proudfoot and Webster that you should try to generalize this definition to any quantization of a symplectic resolution. The definition is pretty straightforward: we just look at those finitely generated modules on which the positive part acts locally nilpotently. And the theorem of Braden, Licata, Proudfoot and Webster is that there's a characteristic cycle map from the Grothendieck group of category O of A_theta to this Borel-Moore homology of Y plus.

Joel? Yeah, we have a question in the chat. Great: "Do you require Z-graded modules in category O?" So the question is whether the modules should also be graded. Usually there's a subspace, a kind of Cartan subalgebra of A_theta, which is responsible for this torus action; it's what you might call a quantum moment map for the torus action. In that way you could recover the Z-grading by looking at the generalized eigenspaces for that C* inside your algebra. So the short answer is: no, it's not part of the definition, but usually it more or less comes for free.

Okay. Let me explain the definition of this characteristic cycle map. It works like this. You have M, an A_theta-module in category O. You sheafify it, to a module over script A_theta. Let me remind you what this means: script A_theta is a sheaf of algebras on Y quantizing the coordinate ring of Y, and the global sections of this sheaf of algebras is just the original algebra A_theta. So there's a localization functor; sometimes this localization functor is an equivalence of categories, sometimes not, but whether it's an equivalence or not, there's always a functor, so you can produce this sheaf version of M. I mentioned last time that quantizations come in two flavours, the formal quantization where you have an h-bar parameter and the filtered quantization where you don't; here I'm assuming we're in the filtered setting. So A_theta is a filtered algebra whose associated graded is our coordinate ring, and similarly script A_theta is a sheaf of filtered algebras whose associated graded is the structure sheaf of Y. At this step we need to choose a good filtration on the sheaf module M, and then take its associated graded; this associated graded, with respect to a good filtration, will now be a quasi-coherent sheaf of modules on Y.
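To keep the statement in view while the construction is being described, here are the definition and the map in symbols (editorial transcription; the construction of CC continues in the next paragraph):

\mathcal{O}(A_{\theta}) = \bigl\{\, M \in A_{\theta}\text{-}\mathrm{mod}_{\mathrm{f.g.}} : A_{\theta}^{+} \text{ acts locally nilpotently on } M \,\bigr\},
\qquad
\mathrm{CC} : K_{0}\bigl(\mathcal{O}(A_{\theta})\bigr) \longrightarrow H\bigl(Y^{+}\bigr).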
In fact it will be coherent, because M was finitely generated. So this is a coherent sheaf on Y, and it will be set-theoretically supported on Y plus, because we assumed that A plus acts locally nilpotently: that condition translates geometrically into the coherent sheaf being supported on Y plus. Then I take the support, with multiplicities, and that gives the class. So it's a multi-stage procedure. And they prove that under some circumstances this map is an isomorphism. In this way we can categorify this homology. Moreover, in their paper, which is a beautiful paper that appeared on the arXiv almost ten years ago and was published maybe four or five years ago, they explain essentially that the decomposition of the homology of Y plus I mentioned above can also be seen algebraically using category O.

Question: what kind of assumptions do you need for it to be an isomorphism? For example, do you still need, say, simple connectivity of the strata? No, I don't think that's necessary. I don't know the precise statement of when you get an isomorphism, but the conditions on Y and X are basically the ones I've mentioned above, being a conical symplectic resolution; I don't think you need simply connected strata. What's more delicate is for which theta this map is an isomorphism. In general it won't be an isomorphism for all theta, only for certain theta, maybe a set you don't know very explicitly; you just know there exist such theta, or that generic theta works. Recall that theta is the quantization parameter; in the classical case of the universal enveloping algebra of a semisimple Lie algebra, theta is the central character.

Okay, so now we're ready to move on and talk about symplectic duality. Maybe I should say first of all that there is this phrase "symplectic duality" and another phrase, "3D mirror symmetry", and some people use them differently; I'll use them as exact synonyms. For me, symplectic duality and 3D mirror symmetry are just synonyms for each other. So what does it mean? It means that there's one symplectic resolution and there's another symplectic resolution, and these two symplectic resolutions are very different: for example, they have different dimensions, and maybe they're constructed in very different ways, but they have matching properties. Two symplectic resolutions, and that's why we call them dual. Another aspect of the duality is that if you take the dual twice, it usually comes back to the thing you started with, so it really is a duality in that sense. This symplectic duality, or 3D mirror symmetry, has been observed both in mathematics and in physics, and in physics it goes under the name of S-duality for 3D N=4 supersymmetric field theories. I won't say much more about the physics motivation, well, maybe slightly more later, but if you have questions about that side you should definitely ask them on Thursday, when you'll hear from someone who knows it much better.

Somebody asked when the localization functor I mentioned here is an equivalence. It's a good question, and there are some theorems about when it's an equivalence, but I can't...
First of all, I don't recall the theorems, and secondly, I don't think there are any really general, very explicit results, the kind of theorem that tells you exactly when; usually it's an equivalence outside of some collection of affine hyperplanes, something like that.

But back to symplectic duality. Two dual symplectic resolutions have things that match: not the same things on each side, but different things that match. And I should also say, there's a long list of things that match, not just two things; I'm going to tell you a lot of them now, and I know many more that are not among the ones I'll tell you about. Many, many things can be matched on both sides. The last thing I should tell you is that the torus action is going to play a big role. It's not just two symplectic resolutions: they should also have Hamiltonian torus actions, so we assume we have chosen a torus action and also a C* inside that torus, with finitely many fixed points. It turns out that choosing this torus action with finitely many fixed points is actually equivalent, equivalent in the sense of symplectic duality, to choosing the resolution on the other side. I mean: if we fix X shriek, there are many possible resolutions, as I discussed a little in the question session last time, and picking out which resolution we're interested in is equivalent to picking which C* action inside the torus on the original side. And the finiteness of the fixed-point set is equivalent to the existence of a resolution: if there's no torus action with finitely many fixed points, there won't be a resolution on the other side, and vice versa. In fact, even if you don't have a resolution, and you don't have a torus action with finitely many fixed points, there's still stuff you can do, but I'm going to stick with the simplest case, where we have torus actions with finitely many fixed points and resolutions on both sides.

Okay, so what are some matching structures? The first is that the Lie algebra of the torus that acts on Y is isomorphic to H^2 of Y shriek. It looks a little weird, but it's just a matching of these two vector spaces. The reason I write t sub C is to emphasize that this Lie algebra of our torus comes with a natural integral structure, namely the cocharacter lattice of the torus; the H^2 of course also has an integral structure, and this is not just an isomorphism of complex vector spaces but is compatible with these integral structures. Moreover, inside each of these vector spaces there's more data, namely a chamber structure. In particular, we're going to pick out a cone on each side: inside the H^2 we have the ample cone, and this ample cone is going to match the set of all those choices of C* inside our torus on the other side which give the same attracting set. Let me explain that for a second. Backing up: we have a torus action on Y and a choice of C* inside of it, and this led to Y plus.
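Before spelling out that equivalence on cocharacters, here is the first matching structure in symbols (an editorial transcription; the superscript ! denotes the symplectic dual, and the isomorphism is compatible with the integral structures):

\mathfrak{t}_{\mathbb{C}} = \mathrm{Lie}(T) \;\cong\; H^{2}\bigl(Y^{!}; \mathbb{C}\bigr), \qquad
\text{cocharacter lattice of } T \;\leftrightarrow\; H^{2}\bigl(Y^{!}; \mathbb{Z}\bigr),

with a distinguished chamber of cocharacters (those producing the fixed attracting set) corresponding to an ample cone on the dual side.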
Write rho for the choice of C* inside the torus; then the attracting set depends on rho, so write Y plus rho. We say rho_1 is equivalent to rho_2 if Y plus rho_1 equals Y plus rho_2: two embeddings of C* into the torus are equivalent if they produce exactly the same attracting set. There are not very many possibilities for what these attracting sets can be, and we get a fan structure on the real Lie algebra of our torus. For example, in the case of the cotangent bundle of the flag variety, this reproduces the Weyl fan. And part of this matching of data is that the ample cone, and the ample cone means the cone of those line bundles which are ample (in this case H^2 is just isomorphic to the Picard group), so the ample line bundles on one side will match those C*'s on the other side which are equivalent to our fixed C*. That's another instance of this matching. Okay, so that's the matching coming from this vector space data.

Second piece of matching. I mentioned we have the symplectic leaves on both sides, stratifications of our singular varieties X and X shriek, and we want there to be an order-reversing bijection between them; here I denotes the strata of X and I shriek the strata of X shriek. Moreover, this should exchange the fibres with the attracting sets: we exchange the top homology of the fibre over a point of X_alpha with the top homology of the attracting set inside the corresponding dual stratum closure, and vice versa, the homology of the attracting set in X_alpha bar is the homology of the fibre on the dual side. And these are not just isomorphisms of vector spaces; even better, they come from bijections between irreducible components. So the fundamental decompositions I mentioned get exchanged by this duality: the direct sums are over the exact same index set, so it makes sense to match up the pieces of the decomposition, and these pieces match here and those pieces match there. Okay, so fibres get exchanged with attracting sets.

Let's see an example right now. The simplest example is to take the cotangent bundle of projective space as my Y; the corresponding X is just the square-zero rank-one n-by-n matrices. And my dual guy is the resolution of C^2 mod Z/n. In these cases there are just two strata. Let's look at how the strata work on the left-hand side: X_0 is just zero and X_1 is the matrices of rank one. If we look at the decomposition of the homology of the attracting set in the cotangent bundle of projective space, we get the homology of X_0, which is the point, so one-dimensional, tensor the homology of the fibre over it, which is the projective space, plus the homology of X_1 bar plus, the attracting part, tensor the homology of the fibre over that stratum, which is a point. The dimensions, or rather the numbers of irreducible components, are one, one, something, and one, and the only interesting guy is this X_1 bar plus, which has n minus one irreducible components. We already saw a version of that; sorry to scroll so much, but right here, this was the same kind of X_1 bar plus, the 4-by-4 matrices with square zero and rank at most one, and it had three irreducible components. So there's an obvious generalization.
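The exchange of fibres and attracting sets can be recorded as follows (editorial transcription; alpha maps to alpha shriek under the order-reversing bijection of strata, a notation supplied here):

H\bigl(F_{\alpha}\bigr) \;\cong\; H\bigl(\overline{X^{!}_{\alpha^{!}}}^{\,+}\bigr),
\qquad
H\bigl(\overline{X_{\alpha}}^{\,+}\bigr) \;\cong\; H\bigl(F^{!}_{\alpha^{!}}\bigr),

and in the example just discussed Y = T^{*}\mathbb{P}^{n-1}, with Y^{!} the resolution of \mathbb{C}^{2}/(\mathbb{Z}/n).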
By the way, I didn't say it before, but these varieties, nilpotent orbit closures intersected with the upper-triangular matrices, are called orbital varieties. Their irreducible components have been studied since the 70s and 80s, and they're in bijection with standard Young tableaux, I guess by Spaltenstein. In this case there are three components, and in the P^{n-1} case there are n minus one components of the same flavour. Notice that if you multiply and add these numbers, it adds up to n.

On the other hand, let's work with C^2 mod Z/n. There are again two strata: zero, and everything else. Here we have the fibre over zero, and I mentioned before that the fibre over zero in this resolution is very pretty: it's a chain of n minus one P^1's. So here we have that fibre, with its n minus one components. And then here we have a point, and here the attracting set of X_1, which is just an affine line; so one and one. Okay, so here we see the exchange: this guy is matched with this guy, because of the order-reversing bijection. Maybe the notation isn't great, because I wrote zeros and ones on both sides, but the bijection takes 0 to 1: the bijection between the strata must reverse the order, so it's not the bijection taking 0 to 0 and 1 to 1, it's the one switching 0 and 1. And in the decompositions it also switches the roles of fibres and attracting sets. Here you see this n minus one matching that n minus one; all the rest of the numbers are one, so it's hard to see that they're matching, but they are. This is a very fun game to play: pick your favourite symplectic resolution and try to find the guy that looks dual in this sense.

So let me state the third point. Back here I said I was going to make a list of matching structures on both sides, and as I mentioned, I could make a very long list, but I'll stop after one more point. I started with this funny thing about the torus matching H^2; then I said this thing about the order-reversing bijection of strata leading to isomorphisms of top homologies. The last thing I'm going to say is a categorification of number two: there's an equivalence of categories between the derived category of category O for the quantization of the original guy and the derived category of category O for the quantization of the dual guy. So this categorifies number two, and the equivalence is a little complicated, in that it takes the form of a Koszul duality between graded lifts of these categories. This would be the subject of a whole lot more lectures, so I won't go into much detail about how it's supposed to work, but just to give you a little bit of honesty: it's supposed to be a Koszul duality between the graded lifts of these category O's, and it should categorify the isomorphism between the homologies that we saw in number two. Again, this idea of searching for such things is due to Braden, Licata, Proudfoot and Webster, and it's inspired by the results on Koszul duality for the classical category O due to Beilinson, Ginzburg and Soergel.
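Schematically, the third matching structure reads as follows (an editorial sketch; the precise statement involves graded lifts of the two categories, which are not spelled out here):

D^{b}\bigl(\mathcal{O}(A_{\theta})\bigr) \;\simeq\; D^{b}\bigl(\mathcal{O}(A^{!}_{\theta^{!}})\bigr)
\quad \text{(a Koszul duality between graded lifts),}

categorifying the isomorphism of homologies coming from matching structure number two.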
Okay, so it's probably a good idea to give some examples, and we just saw the fundamental example. Of course, this fundamental example generalizes in many ways. One way it generalizes is to hypertoric varieties. The data defining a hypertoric variety is an embedding of a torus, say of rank k, so (C*)^k embedded inside (C*)^n. If we have that data, we use it to produce a hypertoric variety, which ends up having dimension 2(n-k): we take the cotangent bundle of C^n and take the Hamiltonian reduction by this (C*)^k. And dual to this embedding of (C*)^k in (C*)^n there's an embedding of (C*)^{n-k} inside (C*)^n, where this (C*)^n is like the dual torus of the first one, and this leads in the same way to a hypertoric variety, which of course does not have the same dimension. The fundamental example of this duality is where we have C* embedded diagonally in (C*)^n; that's dual to a rank n-1 torus embedded in (C*)^n, and that's the example which reproduces the pair we just saw. There's some beautiful combinatorics of hyperplane arrangements related to this, both to defining these hypertoric varieties and to defining these embeddings of tori; Michael will talk about that in the question-and-answer session tomorrow, with more details on this duality. In terms of hyperplane arrangements this is called Gale duality.

The next example: the cotangent bundle of the full flag variety is actually always dual to itself, or maybe slightly more precisely to the cotangent bundle of the Langlands dual flag variety. And last, we have quiver varieties. I explained last time the definition of a Nakajima quiver variety associated to a choice of quiver and two dimension vectors; I don't remember what notation I used last time, so let me call them lambda and mu. Dual to this quiver variety will be an affine Grassmannian slice. I'd like to explain this, but I'm a little short on time for today; it will definitely take more than five minutes, and I'm a little unsure whether to try to sneak it in today or discuss it next time. Let's do it next time. Maybe I'll just pause to see if there are any questions; if there are, we can take a few now, and if not I'll go on.

I had a question: in point three above, is the theta the same for both sides? Okay, so this depends on a choice of theta and theta shriek, but usually we just take theta and theta shriek to be generic and integral. Any other questions? Could you comment on what you said about symplectic duality being 3D mirror symmetry? Well, probably not much, but, I don't know, you mean from the perspective of the physics, or from the perspective of mirror symmetry in the sense of SYZ or otherwise? I think it's not related to that. That's a good question, but it's not related to that. I mean, from a mathematician's viewpoint it's hard to see what makes this three-dimensional and the usual mirror symmetry two-dimensional.
From the physics viewpoint, the usual mirror symmetry has to do with dualities of two-dimensional quantum field theories, and this symplectic duality has to do with dualities of three-dimensional quantum field theories. But from a mathematician's viewpoint it's not clear what's three-dimensional about this one and two-dimensional about the usual mirror symmetry. So that's not a great answer. Okay, well, maybe I'll just say a tiny bit more about the physics, since I don't have enough time today to really start on the affine Grassmannian slices; I'll start with those next time, and I'll just say a couple of words about the physics now.

In physics, people are interested in these 3D N=4 supersymmetric field theories. I don't know too much about these things, but one thing I've heard is that if you have such a theory, it produces two algebraic varieties: one space is called the Higgs branch and one space is called the Coulomb branch. One way to say what the symplectic duality we observe in math is, is that it's the relationship between the Higgs branch and the Coulomb branch of a single theory: those two form the symplectic dual pair. But there's another, closely related way to say what it is in physics: you can take a three-dimensional N=4 supersymmetric field theory and do something called S-duality to produce another three-dimensional N=4 supersymmetric field theory; call the first one T and the second one T shriek, and write M_H(T) and M_C(T) for the Higgs and Coulomb branches. The new field theory will also have a Higgs branch, but the Higgs branch of T shriek is the Coulomb branch of the original theory, and vice versa. So another way of thinking about what symplectic duality is about is that it's a duality of these field theories: the Higgs branch of the first theory and the Higgs branch of the second theory will be symplectic dual in the mathematical sense. And I think it's called 3D mirror symmetry because it has to do with this duality of 3D field theories; from a mathematician's viewpoint there's nothing three-dimensional about it at all.

I think the history of the subject is very interesting. This duality was observed, basically originally, by Braden, Licata, Proudfoot and Webster. And one day, when Ben Webster was giving a talk at IAS, when he was starting his postdoc there, and all the postdocs have to give an introductory talk about their work, he talked about this duality, and he had no idea that it had anything to do with physics at all. And somebody in the audience, maybe Gukov or Witten, said: oh, this is the same as the duality which physicists have observed for twenty years or so. And that, I think, was the birth of a very fruitful interaction.

Okay, so I guess we'll stop now. Next time, I have yet to define these affine Grassmannian slices, which is a bit surprising because they're my favourite guys. So I'll define them next time, and we'll see in what way they are dual to quiver varieties.
That's the first thing for next time, and then we'll continue: we'll discuss the work of Braverman, Finkelberg and Nakajima about constructing these symplectic duals in some generality. Okay, I'll stop there. Any further questions or remarks? Don't be shy.

I have a question about theta and theta dual for the quantizations. They live in different spaces, right, one in H^2 of Y and the other in H^2 of Y dual? Yeah, so you mean what the relationship is supposed to be? It's a good question, and I don't have a very good answer at the moment; I haven't thought about it recently. H^2 of Y dual is the choice of a one-parameter subgroup... In any case, as long as the thetas are generic and integral, the category actually doesn't depend on the choice. I'm pretty sure the category is independent of it, as long as it's generic and integral, just like a block of the usual category O for a semisimple Lie algebra is independent of the central character, as long as the block is regular and integral. Usually we think of the full category O for a semisimple Lie algebra and then of its blocks; this thing here is like a block of category O, and as long as it's regular and integral it doesn't depend on the central character, so the precise choice is not important.

I thought there was some relation: that H^2 of Y dual is like the choice of the one-parameter subgroup for Y, and the one-parameter subgroup for the dual is the choice of the H^2, so the category O sees both of them. Yeah, no, that's true too. Again, the choice of the one-parameter subgroup is only important up to the cone it lies in, so category O is only sensitive to the choice of that cone. I'm assuming that you're already in this sort of ample cone; if you stay in the same cone, the grading will be the same, so category O doesn't see more than that. I suspect you're not satisfied with that answer. Anyone else have any questions?

Yeah, thank you. Sorry, one more question at the end: I'm a bit confused, but in the case of classical category O, it usually only depends on the choice of central character, right? So it's actually t quotiented by W. Oh, you mean rather than t? Yeah, rather than t. Okay, so there's a small subtlety, which is that the parameter space of quantizations of X, just as an algebra, is H^2 modulo the Namikawa Weyl group in general. If you want to speak about a quantization of the variety Y, so a sheaf of algebras on Y, then that's parameterized by a choice in H^2 itself. So it's almost the same thing, and for what I wrote here it's probably better to think of it as H^2 of Y. I guess there's also something going on in the localization procedure: when you're in the ample cone you usually get an abelian equivalence, and when you go to the other chambers you only have derived equivalences. Yeah, yeah. Okay, thanks. Yeah, I see it.

How does this relate to the resolution of the E6 singularity? It's a good question; I don't usually think about that case, and in fact it's an example of, okay, I didn't mention it because I always mess it up whenever I state it, another fundamental example of symplectic duality, which I should have mentioned, and it will come back to answer Alex's question: you can take the cotangent bundle of G mod P for any parabolic P.
Well, that's a resolution of some nilpotent orbit closure; that's my Y, and I won't bother writing what X is. That's dual to a resolution of a Slodowy slice in the Lie algebra of G, or maybe in the dual Lie algebra; I always mess this up, because I don't know the combinatorics of these nilpotent orbit closures very well outside of type A. In type A this is very easy to see; outside of type A it's a lot harder. Alex's question is of this form, because C^2 mod Gamma is a Slodowy slice, the Slodowy slice to the subregular orbit, but it's a Slodowy slice in the Lie algebra corresponding to Gamma, so in this case it would be a Slodowy slice in E6. That would then be dual to a cotangent bundle of G mod P, and E6 is simply laced, so the Langlands duality doesn't matter very much, so a cotangent bundle of a partial flag variety in E6. Which partial flag variety? I guess it should be a small one; I don't know which, but some cotangent bundle of some partial flag variety in E6.

Oh, except now that I think about it more carefully, I don't think that's quite right; maybe I should have said it more slowly. This dictionary works very well in type A. Outside of type A, I think the dictionary is that we have Slodowy slices dual to nilpotent orbit closures, but not every nilpotent orbit closure has a symplectic resolution, and I think this one will actually be dual to one without a resolution. So maybe that answer wasn't very right, and a better answer is: some nilpotent orbit closure. And I guess we can probably figure out exactly which nilpotent orbit closure it is: since this is the Slodowy slice to the subregular (submaximal) nilpotent orbit, under the duality it should correspond to the minimal nilpotent orbit closure. Okay, now I'm happy: the Slodowy slice you mentioned is dual to the minimal nilpotent orbit closure in E6, whatever that minimal nilpotent orbit closure in E6 is like, and I don't think that one admits a symplectic resolution. My reasoning is what I mentioned before: when we don't have a Hamiltonian torus action with isolated fixed points on one side, we don't get a symplectic resolution on the other side. If we take C^2 mod Gamma and Gamma is not Z/n, we don't have such a Hamiltonian torus action, so on the dual side we should expect not to have a resolution. I don't know what this minimal nilpotent orbit closure in E6 is like, but I think it's the dual, and I think it doesn't have a resolution. Does that answer your question, Alex?

Alex was also asking about D4, but maybe you can reply to him in the chat or tomorrow. Well, the answer will be the same: it'll be the minimal nilpotent orbit closure of that type, the minimal nilpotent orbit closure in D4, and again I believe it doesn't have a resolution. So I think we can stop; thanks again.
|
In the 21st century, there has been a great interest in the study of symplectic resolutions, such as cotangent bundles of flag varieties, hypertoric varieties, quiver varieties, and affine Grassmannian slices. Mathematicians, especially Braden-Licata-Proudfoot-Webster, and physicists observed that these spaces come in dual pairs: this phenomenon is known as 3d mirror symmetry or symplectic duality. In physics, these dual pairs come from Higgs and Coulomb branches of 3d supersymmetric field theories. In a remarkable 2016 paper, Braverman-Finkelberg-Nakajima gave a mathematical definition of the Coulomb branch associated to a 3d gauge theory. We will discuss all these developments, as well as recent progress building on the work of BFN. We will particularly study the Coulomb branches associated to quiver gauge theories: these are known as generalized affine Grassmannian slices.
|
10.5446/54805 (DOI)
|
So let me just remind you where we left off last time. We were considering symplectic dual pairs: we had two symplectic resolutions, and we saw that symplectic duality was some list of relationships between these two symplectic resolutions. The one I discussed the most was the relationship about the homology, so let me remind you about that. We had torus actions on these two symplectic resolutions, and we considered the attracting sets for these torus actions. And recall that H of some variety Z means the top Borel-Moore homology; last time I said, and it was correct, that for the attracting set in Y, the top Borel-Moore homology actually coincides with the total Borel-Moore homology, so for this Y plus we can think of either one. This top homology decomposes, by the decomposition theorem: after using the decomposition theorem and the hyperbolic stalk, we decompose it according to the strata downstairs, the symplectic leaves downstairs, as the top homology of the attracting set in the closure of each stratum, tensor the top homology of the fibre over a point of the stratum. Recall here that F_alpha is pi inverse of some point little x_alpha lying in a stratum capital X_alpha, and the map is called pi. Under symplectic duality we have an order-reversing bijection between the strata of the dual varieties, and what's also reversed is the roles of the fibres and the attracting sets. The shriek here is just my notation for the symplectic dual. Then we have equalities going like this, and these equalities are not just equalities of vector spaces; they're actually given by bijections between irreducible components of those varieties. Okay, I can move this guy back up.

That's the quick recollection. By the way, I listed a number of structures that match on both sides of this symplectic duality, and I want to point out that there are some more structures I didn't talk about. Let me just mention two more, continuing the numbering. Number four, which I may come back to later if I have time, at the end of the lecture, is something called the Hikita conjecture; maybe if we have time we'll come to that. And number five is a matching of stable envelopes, in particular elliptic stable envelopes; this will actually be the topic of Richárd Rimányi's lecture next week, so you should tune in for that, I think towards the end of next week. And I should say: we have these different structures that match, and you might ask what the relationship is between the different matchings, whether certain ones imply others. Mostly I would say no, we don't know that matching of certain structures implies the other structures, but usually there's a kind of compatibility between them, though not a perfect implication.
I think the situation is very analogous, very similar, to the usual 2D mirror symmetry, where you have different structures, like matching or mirroring of Hodge diamonds, or homological mirror symmetry, and many other things I'm not an expert on, and I believe it's not that one implies all the rest of them, but that there are interrelations between the different matchings. It's a similar story here, and I would expect there are probably more things that people just haven't thought about, or maybe they have thought about them and I just don't know.

So today what I'd like to do is talk about a particular example of symplectic dual pairs, which in some sense is, I don't know, the main example, at least for me it's the main example, and it concerns quiver varieties and affine Grassmannian slices. After doing that, if we have time today, and if not tomorrow, we'll get into the Braverman-Finkelberg-Nakajima construction.

To set the stage, I'm going to start by reminding you a little more about quiver varieties. Let me fix a Lie algebra: g is semisimple, and I'm going to require it to be simply laced, in other words of ADE type. Let me associate some data to this semisimple Lie algebra: we take two dominant weights, and from these dominant weights we extract two vectors of numbers. We write the first dominant weight, lambda, as a linear combination of the fundamental weights, and we write the difference between the two dominant weights, lambda minus mu, as a linear combination of the simple roots. And I'd like to demand that my lambda and mu are not just arbitrarily chosen, but chosen so that these v_i are integers; so the w_i and the v_i are not just integers but natural numbers. In both of these sums, i ranges over the set capital I of vertices of the Dynkin diagram. The fact that the v_i are natural numbers is equivalent to lambda being greater than or equal to mu in the usual partial order on dominant weights. One more piece of data I'm going to fix: I'm going to write lambda as an ordered sum of fundamental weights, lambda equals lambda_1 plus ... plus lambda_N, where each lambda_j is fundamental. So there are w_1 copies of omega_1 among this list, w_2 copies of omega_2, and so on, but I'm fixing an ordering on them, so there's a tiny bit more choice fixed here. And the last thing I'm going to require is that all these lambda_j are not just fundamental but actually minuscule. In type A this is no requirement at all, but in type D there are only three minuscule fundamental weights, so it's a very strong requirement, and in type E8 there are no minuscule fundamental weights, which means I can't deal with E8. This assumption is not strictly necessary, but we'll see later why I make it, both on the quiver variety side and on the other side. And if you're familiar with quiver varieties, you might ask why I bothered saying that mu is dominant; well, we'll also see the answer to that question. So, associated to this data we're going to consider representations: we have these fundamental representations, and I'm going to tensor them together; this tensor product of fundamental representations of our Lie algebra I'll write as V_lambda-underline.
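To fix notation, the data just described is (editorial transcription of the spoken description):

\lambda = \sum_{i \in I} w_{i}\, \omega_{i}, \qquad
\lambda - \mu = \sum_{i \in I} v_{i}\, \alpha_{i}, \qquad w_{i}, v_{i} \in \mathbb{Z}_{\ge 0},

\lambda = \lambda_{1} + \cdots + \lambda_{N}, \quad \text{each } \lambda_{j} \text{ a (minuscule) fundamental weight.}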
So lambda-underline denotes this list, and V_lambda-underline denotes this tensor product. Then I'm really going to be interested in studying the mu-weight space of this tensor product. What about this weight space? We can take the tensor product and decompose it into irreps, the way we decompose any representation, so that we get multiplicity spaces, which are Hom spaces, tensored with irreps, summed over dominant weights nu, and then in particular we can look at the mu-weight space there. So it's the decomposition of a weight space of a tensor product into multiplicity spaces tensor weight spaces of irreps.

So let's go to quiver varieties. What do we do? We choose an orientation, which won't really matter later, of the Dynkin diagram. That oriented Dynkin diagram is going to be called the quiver, which is a directed graph. And then we have dimension vectors coming from our v's and w's. Let's try an example, say for sl_4: the Dynkin diagram is of type A3, so I have three vertices, my arrows go to the right, and then I have three framings. (Last time somebody pointed out I wasn't very consistent about which direction the arrows to the framings go; this time I'll be more consistent and send them towards the framings.) So the w's determine the framings and the v's determine the gauge vertices. Last time I explained that from this choice we get a group G, which is the product of the GL(v_i)'s, and a representation of G, which I'll call N, which is the direct sum of the Hom(C^{v_i}, C^{v_j}) over the edges (i,j) of the quiver and the Hom(C^{v_i}, C^{w_i}) over the vertices.

Then we form the Nakajima quiver variety, and I'll use new notation for it: I'll call it M_{lambda,mu}. Definition: we take the cotangent bundle of the vector space N and take the symplectic (Hamiltonian) reduction by the action of G, as a projective GIT quotient, and I put in, like I did last time, two parameters: the first parameter refers to the level of the moment map we take, and the second refers to the GIT parameter for the GIT quotient. Chi here is a character of our group G, and we just take it to be the product of the determinants. So that's the definition of the smooth Nakajima quiver variety, and the affine one is just the same thing but at the zero level and with trivial character.

Now we can state theorems of Nakajima. One: M_{lambda,mu} is smooth and symplectic, M_{lambda,mu} is a resolution of M_{0,lambda,mu}, and T acts on M_{lambda,mu} with finitely many fixed points. Oops, I didn't describe T; back up one second. I have a torus acting on my quiver variety; where does it come from? Well, I have these framing vertices, drawn as squares. By the way, Allen Knutson told me why they're called framings, or why they're drawn as squares: because it looks like the frame of a painting. The general linear groups of the vector spaces at those framing vertices act, and in particular we take the torus T to be the product of the diagonal tori inside those framing groups. That's our choice, and it acts with finitely many fixed points. And by the way, right here we use that mu is dominant, and right here we use that the lambda_i are minuscule; those assumptions are used in this theorem.
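Writing out the objects just introduced (editorial transcription; the subscripts on the reduction, indicating the moment-map level and the GIT character, are a notation I am supplying):

V_{\underline{\lambda}} = V_{\lambda_{1}} \otimes \cdots \otimes V_{\lambda_{N}}, \qquad
\bigl(V_{\underline{\lambda}}\bigr)_{\mu} = \bigoplus_{\nu} \mathrm{Hom}\bigl(V_{\nu}, V_{\underline{\lambda}}\bigr) \otimes \bigl(V_{\nu}\bigr)_{\mu},

G = \prod_{i \in I} GL(v_{i}), \qquad
N = \bigoplus_{i \to j} \mathrm{Hom}\bigl(\mathbb{C}^{v_{i}}, \mathbb{C}^{v_{j}}\bigr) \ \oplus\ \bigoplus_{i \in I} \mathrm{Hom}\bigl(\mathbb{C}^{v_{i}}, \mathbb{C}^{w_{i}}\bigr),

\mathcal{M}_{\lambda,\mu} = T^{*}N \,/\!/\!/_{\,0,\chi}\, G, \qquad
\mathcal{M}_{0,\lambda,\mu} = T^{*}N \,/\!/\!/_{\,0,0}\, G, \qquad \chi = \prod_{i} \det\nolimits_{i}.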
Two: the symplectic leaves of the affine quiver variety, the singular guy, are just given by the regular loci of smaller such singular guys. Joel? Yeah, we have a question: "Is it possible to realize tensor products of Verma modules similarly?" Yes, similarly; not exactly in the same way, but similarly. It's a good question, because in some sense it will come up a little later, when we talk about affine Grassmannian slices and Coulomb branches. There are maybe two approaches you could use. One is to let the framings go to infinity; of course, then the representation gets bigger and bigger and becomes more like a Verma module. The other approach is to get rid of the framings altogether and realize the representation in a slightly different way; if you do that, people sometimes call the resulting space Lusztig's nilpotent variety, and it can be used to realize a Verma module. But it doesn't quite fit into the framework I'm talking about, so I won't talk about it.

Okay. So the symplectic leaves of these singular guys are given by the regular loci of such varieties, I'm just spelling that out in the notation, and here nu ranges over dominant weights trapped between mu and lambda. Three: there's an action of our Lie algebra g on our friend from before, so take the smooth quiver variety, take its attracting set, take its top Borel-Moore homology, and there's an action of g on that, such that the following holds. We have our fundamental decomposition, which I talked about last time and at the beginning of today: the homology of the attracting sets of the strata downstairs, tensor the homology of the fibres; here F_nu denotes the fibre over a point of the nu-stratum inside M_{0,lambda,mu}. And this matches the tensor product decomposition that I mentioned just a few minutes ago: the homology of the attracting set in the total space is isomorphic to the mu-weight space of the tensor product, and this matches the decompositions. We should be careful about which way around it matches: the homology of the attracting sets becomes the Hom spaces into the tensor product, the multiplicity spaces, and the homology of the fibres becomes the weight spaces of the irreps. Equality here, equality here, equality here, and so on.

We have another question in the Q&A; go ahead. "What's the map?" Well, there's always a map from a projective GIT quotient to the corresponding affine GIT quotient: this one is Proj of some graded ring and that one is Spec of, I have to think for a second why this gives a map, the degree-zero part of that ring. That's why there's a map. I didn't define the projective GIT quotient, but I can if you like; it would just take us a little off track.

Okay, let's look at an example; this will be my running example in this section. It's very simple: g is sl_2, lambda is n copies of the first fundamental weight, and there's only one fundamental weight for sl_2, and mu is lambda minus alpha, the simple root. So the representation in question is: we have the fundamental representation C^2, so we're interested in C^2 tensored with itself n times, and then in looking at the (n-2)-weight space of (C^2) tensor n.
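Before working through the example, here is part three in symbols (an editorial sketch; the direction of the piecewise matching, attracting sets with multiplicity spaces and fibres with weight spaces, is how I read the spoken description):

H\bigl(\mathcal{M}_{\lambda,\mu}^{+}\bigr) \;\cong\; \bigl(V_{\underline{\lambda}}\bigr)_{\mu},
\qquad
H\bigl(\overline{\mathcal{M}_{0,\lambda,\nu}}^{\,+}\bigr) \;\cong\; \mathrm{Hom}\bigl(V_{\nu}, V_{\underline{\lambda}}\bigr),
\qquad
H\bigl(F_{\nu}\bigr) \;\cong\; \bigl(V_{\nu}\bigr)_{\mu}.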
Well, (C^2) tensor n decomposes into many irreducible representations, but only two of those representations have an (n-2)-weight space, so there are two summands in this decomposition: the irreducible with highest weight n, whose (n-2)-weight space occurs, and it occurs once, and the irreducible with highest weight n-2. Consistent with my Hom notation: Hom of V(n) into (C^2) tensor n, tensor its (n-2)-weight space, plus Hom of V(n-2) into (C^2) tensor n, tensor its (n-2)-weight space. And everything in sight is one-dimensional, except for this second Hom factor, which is (n-1)-dimensional.

So what's our quiver in this example? We just have a single vertex with a one on it, because the coefficient of alpha is one, so w equals n and v equals one, and the quiver variety is the cotangent bundle of P^{n-1}, again resolving the n-by-n matrices of rank at most one and square zero. We saw this before: there are two strata, and we saw before what the decomposition of the attracting locus looks like. The only really interesting part is the piece coming from the orbital varieties: X plus is the set of upper-triangular, square-zero, rank at most one matrices, and that's responsible for the n-1. Here we have the fibre F_1, which is a point; here we have X_0 plus, which is a point; and here the fibre F_0, which is the P^{n-1}; and there are n-1 components. So we see the same decomposition there. By the way, I don't know if I really said this before, but any cotangent bundle of a partial flag variety in type A can be realized as a Nakajima quiver variety, as well as any resolution of a Slodowy slice in type A, and any resolution of an intersection of a Slodowy slice with a nilpotent orbit closure. So in type A, anything you can think of can be realized as a Nakajima quiver variety. Well, not really anything you can think of, because in a few minutes we'll think of some things that can't.

Okay, great. So that was quiver varieties; now I switch to these affine Grassmannian slices. Let's take G to be the Langlands dual group, so its Lie algebra is the Langlands dual of g. That looks a little weird, but it's not such a problem, because g is isomorphic to its Langlands dual Lie algebra, since it's of ADE type, so you don't really notice the Langlands duality and we can kind of ignore it; I mention it for the sake of more general situations. Then we're going to be interested in the affine Grassmannian of G, which means I take G over the Laurent series and quotient by G over the power series. The affine Grassmannian will play an essential role in the remaining talks in the series, actually in two different roles; this is the first one, and later we'll see a different appearance of the affine Grassmannian. Because I took the Langlands dual, this lambda I'm now going to think of as a coweight of G: a map from C* into the maximal torus of this group G. Therefore we can define a point t^lambda in the affine Grassmannian of G coming from this lambda. For example, for SL_n, lambda would be some integers adding up to zero.
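For reference, the affine Grassmannian and the point just introduced are (editorial transcription):

\mathrm{Gr}_{G} = G\bigl(\mathbb{C}((t))\bigr) \,/\, G\bigl(\mathbb{C}[[t]]\bigr),
\qquad
\lambda : \mathbb{C}^{\times} \to T \subset G \quad \rightsquigarrow \quad t^{\lambda} \in \mathrm{Gr}_{G}.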
And t^lambda would be the diagonal matrix with entries t^{lambda_1} up to t^{lambda_n}, which gives a point in this affine Grassmannian. And we're going to be interested in orbits in this affine Grassmannian. So, two kinds of orbits. The first I call Gr^lambda: I take the power series group G[[t]] and its orbit through t^lambda. This is something called a spherical Schubert variety, or spherical Schubert cell. It's an analog of Schubert cells in a finite dimensional flag variety. One way to think about the affine Grassmannian is as a kind of G mod P, and these are the orbits of P on G mod P. And these guys are finite dimensional. The whole affine Grassmannian is infinite dimensional, but these orbits are finite dimensional; in fact, the dimension is given by the pairing of lambda with 2 rho, though that's of minor importance for our purposes. And then the second thing I'm interested in is called W_mu; these are transverse to these first orbits. So to define this orbit, I take a group transverse to my first group, which I denote like this, and take its orbit, and then we have to define this group. So I take G now of polynomials in t inverse; I have an evaluation map to G sending t inverse to zero, and I take the kernel of that map, and that's G_1[t inverse]. So you can think of it as matrices whose coefficients are polynomials in t inverse and which, modulo t inverse, are the identity matrix. I take the orbit of that group through t^mu and I get W_mu. So these are transverse orbits; in particular, they're infinite dimensional. So these are like some kind of Schubert cells and these are like the opposite Schubert cells. And then the main object of study will be denoted W^lambda_mu, which is the intersection of Gr^lambda with W_mu, and maybe even more importantly W bar lambda mu, which is the intersection of Gr bar lambda with W_mu. So this guy here is an affine variety, finite dimensional; in fact, the dimension of this W bar lambda mu is 2 rho paired with lambda minus mu. Rho is half the sum of the positive roots, or maybe positive coroots. Oh, positive roots. Okay. So one more construction. This W bar lambda mu is going to be our affine Poisson, possibly singular guy, and now we're going to construct its symplectic resolution. To do that, we need one more construction. We form Gr^{lambda underline}. So I'm using this underline now; this is just notation. The notation is suggesting that it's a kind of product of these Gr^{lambda_i}'s, but not exactly: a sort of twisted product. So that's just notation; here's the definition. It's a sequence of points in the affine Grassmannian, with a condition. These g_i denote elements of G of the Laurent series, and the brackets [g_i] denote the corresponding point in the affine Grassmannian. And the condition is that [g_{i-1} inverse g_i] is in Gr^{lambda_i}. Okay, so I admit this is probably a little confusing if you haven't seen this before. The way that I like to think about this, one way I like to explain it at least, is that we're selecting points in the affine Grassmannian like this, and these are my g_1, g_2, g_3, and so on, and they have distances between them prescribed by these lambdas: this is distance lambda_1, this is distance lambda_2. So it's a variety of polylines in the affine Grassmannian, sequences of points with prescribed distances between them.
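For reference, a compressed LaTeX version of the definitions just given, with conventions as in the talk; I write Gr for the affine Grassmannian of the Langlands dual group, and taking g_0 to be the base point is my convention.

\begin{align*}
  \mathrm{Gr} &= G^\vee((t))\,/\,G^\vee[[t]], \qquad
  \mathrm{Gr}^{\lambda} = G^\vee[[t]]\cdot t^{\lambda}, \qquad
  \dim \mathrm{Gr}^{\lambda} = \langle 2\rho,\lambda\rangle, \\
  G_1[t^{-1}] &= \ker\bigl(G^\vee[t^{-1}]\to G^\vee,\ t^{-1}\mapsto 0\bigr), \qquad
  \mathcal{W}_{\mu} = G_1[t^{-1}]\cdot t^{\mu}, \\
  \mathcal{W}^{\lambda}_{\mu} &= \mathrm{Gr}^{\lambda}\cap\mathcal{W}_{\mu}, \qquad
  \overline{\mathcal{W}}{}^{\lambda}_{\mu} = \overline{\mathrm{Gr}}{}^{\lambda}\cap\mathcal{W}_{\mu}, \qquad
  \dim \overline{\mathcal{W}}{}^{\lambda}_{\mu} = \langle 2\rho,\,\lambda-\mu\rangle, \\
  \mathrm{Gr}^{\underline{\lambda}} &= \bigl\{([g_1],\dots,[g_n])\ :\ [g_{i-1}^{-1}g_i]\in\mathrm{Gr}^{\lambda_i}\ \text{for all } i\bigr\},
  \qquad g_0 = 1.
\end{align*}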
So this condition can be thought of as saying that the distance between g_{i-1} and g_i is lambda_i. And this distance is just some formal notion, but it actually has some kind of metric implications. For example, if we take G to be equal to PGL_2, then the affine Grassmannian of G is an infinite tree, or more precisely the vertices of an infinite tree with P^1 branching. So that means it looks something like this: you have the tree, an infinite tree, and at every vertex there are many, many edges, in fact a whole P^1 coming out, and then it continues. The affine Grassmannian is just the points in this tree, and this distance is just the distance in the tree, measured along the edges. Okay. So that's the definition of this Gr^{lambda underline}. And then finally, we define W tilde lambda underline mu. One piece of notation: this space Gr^{lambda underline} comes with a map, which we'll call m_{lambda underline}, to the affine Grassmannian, and it takes the sequence to the last point in the sequence, so it just remembers the last point of this polyline. And then I define W tilde lambda underline mu to be m_{lambda underline} inverse of W_mu. So I just want that last point to lie in this slice W_mu. Okay, so here's a theorem about these spaces. I guess this theorem is basically due to myself with Ben Webster and Alex Weekes. The first point is that W tilde lambda underline mu is a symplectic resolution of W bar lambda mu. So in particular, it is a resolution, it has a symplectic structure, and the W bar lambda mu has a Poisson structure. And then this torus (the torus here will just be the maximal torus of the group, acting by left multiplication) acts on W tilde lambda underline mu with finitely many fixed points. Oh, and I promised to tell you: this first fact uses that the lambda_i are minuscule. And here we see a manifestation of something I mentioned last time, which is that the lambda_i being minuscule on the quiver variety side was needed to ensure finitely many fixed points, and on the dual side it's needed to ensure that we have a resolution. So that's the interplay between the two sides: on one side the existence of a resolution matches, on the other side, finitely many fixed points for the torus. Okay, continuing in this vein. Another result is that the symplectic leaves of this W bar lambda mu are the W^nu_mu, or their regular loci, I suppose, and that's for nu the dominant weights trapped between mu and lambda. And here we see a promised feature of symplectic duality: a bijection between the leaves. In both cases the leaves are indexed by those dominant weights trapped between mu and lambda. And then part four, and this part four basically just uses their theorem, this is just part of the geometric Satake correspondence of Mirkovic and Vilonen, and says there's an isomorphism as follows: between the top homology, the Borel-Moore homology, of the attracting set in this W tilde lambda underline mu, and this weight space in the tensor product, compatibly. So not just an isomorphism of vector spaces, but compatible with these decompositions. So recall from the representation theory we have this decomposition into isotypic components, so Hom spaces tensor the weight spaces. And geometrically, we're going to have sitting here the homology of the fiber, which will be m_{lambda underline} inverse of this point, and then the homology of the attracting set in the leaf. And this guy is sort of a famous guy: the irreducible components of this guy are called the Mirkovic-Vilonen cycles. Yeah, a question.
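For comparison with the quiver-variety side earlier, here is a sketch of the isomorphism just stated, with the roles of fibers and attracting sets exchanged; again the shorthand is mine rather than taken from the slides.

\[
  H_{\mathrm{top}}\Bigl(\mathrm{Attr}\bigl(\widetilde{\mathcal{W}}{}^{\underline{\lambda}}_{\mu}\bigr)\Bigr)
  \;\cong\;
  \bigl(V(\lambda_1)\otimes\cdots\otimes V(\lambda_n)\bigr)_{\mu}
  \;\cong\;
  \bigoplus_{\nu}
  H_{\mathrm{top}}\bigl(m_{\underline{\lambda}}^{-1}(t^{\nu})\bigr)
  \otimes
  H_{\mathrm{top}}\bigl(\mathrm{Attr}(\mathcal{W}^{\nu}_{\mu})\bigr),
\]
% here the fibers of the convolution map carry the Hom (multiplicity) spaces and the
% attracting sets in the leaves carry the weight spaces, the opposite assignment to
% the quiver-variety picture, as expected from symplectic duality.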
Yes. Is this situation then the symplectic dual of the quiver variety picture you presented first? Yeah, that's the whole point. Yeah, that's exactly what I'm about to say. But before I say that, I'll just do a quick example. If we take lambda, mu as above, so lambda is N times the first fundamental weight and mu is N times that fundamental weight minus alpha, then W bar lambda mu is just this C^2 mod Z_N, and this guy is its resolution. And I already discussed previously how this decomposition of homologies works in this case, so I won't bother saying it again. In fact, you might say, how do you see this so easily? Well, you can actually write down this isomorphism explicitly in matrices. Another fact, true in general, is that the dimension formula I gave predicts: we have lambda minus mu, which is alpha, so the dimension of this, not quiver variety, affine Grassmannian slice, will just be given like this, which is just equal to 2. So we definitely get something two dimensional here. And in fact, you always get C^2 mod Z_N: all two dimensional affine Grassmannian slices are of the form C^2 mod Z_N. Okay, so now I come to what Jessica said. The claim is that these guys are symplectic dual. And well, what does it mean to say this? I'm just saying I had some list of things which are generally supposed to match, and I should check them off one at a time. Well, I did explain quite a few of them, but let me actually go back to almost the first one I said, which is the matching of the Lie algebra of the torus with the H^2. So what torus do we have? Recall that I have this C star to the sum of the w_i acting on the quiver variety. If you're careful, you'll notice that this is actually maybe not effective: some portion of this C star to the sum of the w's actually acts trivially. But ignore that for a second, because actually it will match on the other side. And so the Lie algebra of our torus here is C to the sum of the w's. Now, that's supposed to match H^2 on the other side. Well, this W tilde lambda underline mu is by definition a subvariety of the affine Grassmannian to the n, just because it is by definition n points in the affine Grassmannian. So I end up with a map backwards from H^2 of the affine Grassmannian to the n, which of course is just H^2 of the affine Grassmannian direct sum with itself n times: if I take the cohomology, by Kunneth I get terms which are H^0 tensor many, many H^0's with one H^2 factor, so H^2 of the affine Grassmannian direct sum with itself n times, and a map backwards like this. And this n is actually equal to the sum of the w_i, because remember what n was: it was used to make this list. The n here appears because it's the length of the list of fundamental weights adding up to lambda, and the w_i is how many times each fundamental weight occurs. So of course n is the sum of the w_i. So in this way, we see that this torus's Lie algebra is C to the n, and also this H^2 here is also C to the n. I hesitate to write that they're isomorphic, because they're not canonically isomorphic, but somehow they actually have exactly the same sort of kernel. So they match. Oh, I didn't really say this, but the H^2 of the affine Grassmannian is one dimensional.
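A schematic of the matching just described, assuming (as in the talk) that H^2 of the affine Grassmannian is one dimensional; the arrow is just my way of recording the pullback map mentioned above.

\[
  \operatorname{Lie}\Bigl(\textstyle\prod_i(\mathbb{C}^\times)^{w_i}\Bigr)
  \;=\;\mathbb{C}^{\,\sum_i w_i}\;=\;\mathbb{C}^{\,n}
  \qquad\text{matches}\qquad
  H^2(\mathrm{Gr}^{\times n})\;=\;H^2(\mathrm{Gr})^{\oplus n}\;=\;\mathbb{C}^{\,n}
  \;\longrightarrow\;
  H^2\bigl(\widetilde{\mathcal{W}}{}^{\underline{\lambda}}_{\mu}\bigr),
\]
% with n the length of the list of fundamental weights adding up to lambda,
% so n equals the sum of the w_i.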
And its Picard group is just Z; it has a canonical line bundle. Okay, so that's the matching of the Lie algebra of the torus with this H^2. So we get this matching. And let's look in the opposite direction, because it's also quite instructive. So in the opposite direction, let's start, I guess, with the quiver variety again. How do we get line bundles on the quiver variety, how do we get cohomology of the quiver variety? There's this Kirwan map. Because it's a quotient, we get these tautological line bundles coming along, and so we end up seeing that H^2 is just C^I, where I here is the set of vertices of the Dynkin diagram. So these line bundles correspond to determinants of tautological vector bundles. Anybody who's worked with quiver varieties, this will be familiar, and if you haven't, it's a more general phenomenon, this Kirwan map. So that's the H^2 of the quiver variety. And on the other hand, if we take our affine Grassmannian slice, this is living in the affine Grassmannian of this group G, and so the torus of the group G, as I mentioned before, will act here on the resolution. And this group G has Dynkin diagram with vertices I, so this torus is just C star to the I, and its Lie algebra is C^I. And so that's the matching there. And what's beautiful about this is, if you look at some examples, you might say: well, sometimes maybe this tautological line bundle is trivial, because of the nature of the quiver variety, maybe it doesn't really use that vertex, and so the tautological line bundle is trivial. And then if you go look on the dual side, you'll see, oh, that component of the C star will also act trivially. So everything always matches. Okay. So that's the matching of the tori. And then of course, the cohomological matching is what I mentioned. So we saw, sorry to scroll a lot, but we saw these two isomorphisms. So this Mirkovic-Vilonen isomorphism here: we're seeing this representation-theoretic decomposition realized geometrically here, using the Mirkovic-Vilonen result, and then going further back up, the same thing, but now with the roles flipped. So here I have attracting set tensor fiber, and down below I had fiber tensor attracting set. So I flipped the roles, and that's exactly what we expect with symplectic duality. And I already mentioned that we have this bijection with leaves, an order-reversing bijection with leaves. In fact, you can even go a step further and produce a bijection between the irreducible components of, say, one of these fibers (this is in the quiver variety, that's the fiber side of the quiver variety) and the irreducible components of this attracting set in the affine Grassmannian slice; I mentioned before that these guys are called Mirkovic-Vilonen cycles. One way I know to produce such a bijection is using the theory of Mirkovic-Vilonen polytopes; there are many papers about this topic. But anyway, I'm not going to bother explaining that; there are such exact bijections. I see a question. Where do the Poisson structures and symplectic structures on affine Grassmannian slices come from? Thanks. That's a great question. Of course, I've not really been discussing these Poisson and symplectic structures too much in general in this talk, and I apologize.
So the answer to the question is: we have a Manin triple. I'll just write it down. The Poisson structures on these affine Grassmannian slices come from the Manin triple. So that's just an aside, and this is explained in our paper. Okay, great. So now I have a few minutes; I'll maybe give a little preview of what's coming. So what's the idea? Well, we have this notion of symplectic dual pairs; we've seen many examples now. But now you might ask: is there some systematic way of constructing the dual? If you have one symplectic resolution, is there some systematic way of producing the dual symplectic resolution? Well, in general there's still no really good answer for that, but here's a sort of partial answer. Let's look at our list of some of these symplectic resolutions we saw. We had hypertoric varieties, we had quiver varieties, we had things like T star of G mod P, we had these affine Grassmannian slices. For these classes of examples, they're of the following form: you take the cotangent bundle of some representation of G and then take a Hamiltonian reduction by G, for some group G. They're always of this form. And whenever our symplectic resolution is of this form, that's when we can use the BFN construction. So it's this class of examples. So, to use slightly physical language, or not yet, but let's start with the following data: G a reductive group. Usually this group G will be a product of some GL_n's, or a torus, which is just a product of C stars, and GL_n's. So G will be a reductive group and N will be a representation. And the physicists would say that this G and N define one of these theories, a 3D gauge theory, which is one of these N equals 4 supersymmetric theories. So they define its 3D gauge theory. And I mentioned before that from any of these N equals 4 supersymmetric field theories, the physicists associate two spaces, one called the Higgs branch and one called the Coulomb branch. And this Higgs branch, for mathematicians, is just this Hamiltonian reduction: take the cotangent bundle and then take the Hamiltonian reduction by G. So that's the Higgs branch. And the Coulomb branch, I guess, was more mysterious, both to the physicists and to mathematicians. And so there's the work of some physicists (I don't know the physics literature very well) and then the work of Braverman and Finkelberg and Nakajima. So let me just give a rough description, and then we'll give a more precise description next time. For this rough description, let's introduce the following weird scheme, sometimes called a raviolo curve, sometimes called a bubble. I'll call it P, the raviolo curve, maybe, more precisely. You take two copies of the formal disk and glue them together along the punctured disk. So it's a non-separated curve. The usual way I like to think about this thing is: you probably know that if you want to construct P^1, you should take two copies of A^1 and glue them along C star; it's the usual discussion of how you build P^1. And usually when you do that, you don't glue them directly, but you glue them using the inverse map, so somewhere in this gluing there's a t goes to t inverse. If you forget to do t goes to t inverse, well, you can still glue them, but then you get a non-separated curve, something like this, right? You end up with A^1 with doubled origin. So it's a kind of bad version of P^1.
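In symbols, the Higgs branch of the pair (G, N) is the algebraic symplectic (Hamiltonian) reduction just described; a minimal sketch, with mu the moment map:

\[
  \mathcal{M}_H(G,N) \;=\; T^*N /\!/\!/ G \;=\; \mu^{-1}(0)\,/\!/\,G,
  \qquad
  \mu : T^*N \cong N\oplus N^* \longrightarrow \mathfrak{g}^*.
\]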
So this bubble curve is like the bad version of P^1, and then you just look in a formal neighborhood of this doubled origin. That's the bubble curve. It's pretty close to P^1; if you don't like it too much, you can just think about P^1. In fact, in Nakajima's first paper about the Coulomb branches, he explained basically that the natural thing to use would be P^1, but for some purpose we'll see soon, it actually only works if you use this bubble curve. But you can think of it as just P^1. So what are we going to do with this funny bubble curve? We're going to consider the following moduli stack of maps from this non-separated curve into the stack N mod G. So N here is a representation of our reductive group G, we consider the stack quotient N by G, and we consider maps. I emphasize that you have to use the stack quotient here, whereas in constructing the Higgs branch we've always used the GIT quotient, and in doing so we've thrown away the unstable locus. But here we need to use the full thing, the full stack quotient. We take this space of maps, then we take the homology of this mapping space, and this homology carries an algebra structure, a convolution algebra structure. Where does this algebra structure come from? You'll see: I'm going to redefine this thing in different language next time, and then I'll explain the algebra structure in a different way. But intuitively, it comes from some kind of gluing of these bubble things: it comes from considering a disk with tripled origin. Considering maps from such a disk with tripled origin into the stack, that's what's used to define this convolution algebra structure. And that's somehow related to why we don't use P^1, because here we have this possibility of a tripled origin. And this algebra structure: if you've studied the Steinberg variety inside the square of the cotangent bundle of the flag variety, you know that its homology has a convolution algebra structure, like in the book of Chriss and Ginzburg. What's a little different in this case is that the convolution algebra structure is commutative. So this is a commutative algebra, and since it's commutative, we can take Spec and we get a scheme. So that's the definition of the Coulomb branch, at least the singular one. And this is going to be the dual of this Hamiltonian reduction: these guys are going to be symplectic dual, and this guy is going to be the Coulomb branch. Okay. At least, this is how to produce the singular guy, and then you can also produce the smooth ones. So next time we'll define all this stuff a bit more precisely (well, this is pretty precise, but I'll define it in a way that makes it easier to work with), and we'll see some examples. Okay, let's stop there. Any questions? So, I have two questions, and you have two questions now in the Q&A. Okay. You mentioned that stable envelopes play a role in symplectic duality; what is known about stable envelopes for slices in the affine Grassmannian? So this is the subject, I guess, of Ivan Danilenko's thesis, which hasn't yet been published. It has been studied by Ivan Danilenko, but, sorry, by not being published I mean it hasn't even appeared on the arXiv.
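A compressed way to record the rough definition just given; this is only schematic (the precise equivariant Borel-Moore formulation is deferred to the next talk), and P_rav is my name for the raviolo curve.

\[
  \mathcal{M}_C(G,N) \;:=\; \operatorname{Spec}\,
  H_*\Bigl(\operatorname{Maps}\bigl(P_{\mathrm{rav}},\, N/G\bigr)\Bigr),
\]
% P_rav = two formal disks glued along the punctured disk, N/G the stack quotient,
% and the (commutative) ring structure is the convolution product coming from
% the disk with tripled origin.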
So the work does exist, but it's not public yet. So, I have a question from an anonymous person: can we define W_mu for non-integral mu? No. I don't know how to define it if it's not integral. The related question is how to define it when mu is not dominant. In the definition, at some point at the beginning today, I said we assume mu is dominant. And if you're paying close attention, you would see that actually on the quiver variety side, much of what I said goes through even if mu is not dominant: it's still smooth, it's still a resolution, but not exactly of this singular guy, not of the affine quiver variety, but a resolution of something. So there's still a smooth quiver variety even when mu is not dominant. And going to the affine Grassmannian side, well, actually it looks like much of what I said also goes through if mu is not dominant, but we'll see soon how to give a really good definition of W_mu when mu is not dominant. But for non-integral mu, I don't know. What is G of the Laurent polynomials? Okay, let's back up. Ah, I answered that question; you looked it up, I guess. Right. Yes. So don't worry. Great. Any other questions? Thank you. So I have a question. Before you say that: we've already said that for finite Dynkin diagrams this is understood well, where quiver varieties correspond in particular to slices. So if I have any quiver, I'm not going to write it, this corresponds to some slices, and I mean, what would the symplectic dual be if you have a quiver variety outside of finite type? Is that the question? Yes. Yeah, so, well, we'll get to that later. That's when we're in the realm of generalized affine Grassmannian slices, and those are defined in this Coulomb branch way, so we'll get to that later in the talk. But maybe I should say, though, in affine type A, which might be the case that you're most interested in: in affine type A, the symplectic dual of a Nakajima quiver variety is again a Nakajima quiver variety. Well, sorry, at least when mu is dominant. And the reason for that, and it also happens in finite type A, is that there's this funny rank-level duality, which happens in finite and affine type A, which ends up manifesting itself as an isomorphism between affine Grassmannian slices and quiver varieties, something called the Mirkovic-Vybornov isomorphism. So I'll make sure to address that question a little more precisely a little later. Any other questions or remarks? Let's see. So, I guess we can thank Joel again.
|
In the 21st century, there has been a great interest in the study of symplectic resolutions, such as cotangent bundles of flag varieties, hypertoric varieties, quiver varieties, and affine Grassmannian slices. Mathematicians, especially Braden-Licata-Proudfoot-Webster, and physicists observed that these spaces come in dual pairs: this phenomenon is known as 3d mirror symmetry or symplectic duality. In physics, these dual pairs come from Higgs and Coulomb branches of 3d supersymmetric field theories. In a remarkable 2016 paper, Braverman-Finkelberg-Nakajima gave a mathematical definition of the Coulomb branch associated to a 3d gauge theory. We will discuss all these developments, as well as recent progress building on the work of BFN. We will particularly study the Coulomb branches associated to quiver gauge theories: these are known as generalized affine Grassmannian slices.
|
10.5446/54808 (DOI)
|
So, we had two lectures essentially building up to understanding sheaves, their deformation theory and virtual cycles. If I lost you, which I'm sure I did, just take that now as a black box: there's this virtual cycle which plays the role of the fundamental class of the moduli space, it has good properties (if you perturb things, it stays the same, it's deformation invariant), and it's the correct fundamental class for this moduli space. Okay, now we're going to apply it in Vafa-Witten theory, but we had Lothar's lecture, right? So maybe I'll just skip this and go on to the next lecture. No, okay, we'll go through it. So, there's this paper from long ago by these guys, and Donaldson even gave it to me during my PhD and told me I ought to think about making mathematical sense of it. It was around the time of Donaldson theory; well, it was around the time of Seiberg-Witten theory rewriting Donaldson theory. So, at the time, there were all these gauge theories in physics which were leading to really exciting, rigorous mathematical invariants like Donaldson invariants. And the idea was: could you do something with this one? And for 25 years the answer was no, fundamentally because there's no natural way to compactify the moduli space here and therefore get invariants. Okay, so very briefly, you're not meant to follow this, don't take notes. This is a theory given for a real four-manifold, like a Riemannian four-manifold. You pick auxiliary data, a bundle over it, and then you're supposed to count solutions of these gauge theory equations. And you're not meant to really understand what they are, but just roughly: you've got something here which is the self-dual part of the curvature, which is what's relevant to Donaldson theory, and what's relevant on a Kähler surface via the Hitchin-Kobayashi correspondence, or Donaldson-Uhlenbeck-Yau theorem; it's what's relevant to stable bundles. And then you've got some additional fields, which we'll incorporate into one field later on and call the Higgs field. Okay, and then you're supposed to count solutions of these equations, and that's the tricky bit, how to do that, but let's assume you could do that. Then there's this sort of voodoo in physics which says that when you form a generating function, a Fourier series or something, of these invariants, these counts, then you should get modular forms. And so in particular, this infinite collection of numbers should be determined by only finitely many numbers. So in the Kähler case... well, we still can't define Vafa-Witten invariants in general, I should say, but there's been some very recent progress by some physicists defining them in other ways using TQFTs, and now more progress on really using the equations and almost complex structures. So there's some hope now that maybe one day we'll be able to define these invariants, but what these lectures are about is defining them in the algebraic case, or the Kähler case. And the problem is that the moduli space is inherently non-compact, these fields can grow, and you'll see that better in the algebraic case in a minute. So from now on, from the next slide onwards, I'm going to work on a Kähler surface or an algebraic surface. I'll call it S, so that's my four-manifold. And then you can package these fields in a certain nice way using the splitting of two-forms on a Kähler surface, and you end up with these equations.
So this is the integrability condition, which says that your connection defines a holomorphic structure on your bundle; then this is a moment map equation, which tells you how to fix the metric on that bundle; and then this says that your Higgs field is holomorphic. So you end up with this data. Okay, so again, you're not really supposed to be paying too much attention yet. And then there's a Hitchin-Kobayashi correspondence. People have proved, at least in the algebraic case, that the solutions correspond to the following data. So this rather linearizes the problem and simplifies it. So this is where you have to start paying attention. This is what the Vafa-Witten equations are for us. They're the data of a vector bundle over our complex surface and a Higgs field; there's this twisting by the canonical bundle of the surface. The Higgs field should be trace free, and the determinant of the bundle should be fixed. And then there's a stability condition that ensures that you can solve this equation here. So you get rid of this nonlinear equation, the annoying one, by seeing it as a moment map: you see this middle equation here as a moment map equation, and then you can solve it so long as the holomorphic data satisfies a stability condition. Okay, and the stability condition is the same slope stability that we used in the first lecture, except this Higgs field here modifies it: instead of testing via all sub- and quotient bundles of E and testing their slopes, as we did in the first lecture, you only take those invariant under phi. So you take phi-invariant sub-bundles of E, and you check that the slope is less than the slope of the quotient. And that's the stability condition. Okay, and yeah? Yeah. What would omega (1,1) mean? What would that mean? So you decompose this; originally you would decompose this into its (1,1) part. Omega here is the Kähler form, so it's already a (1,1)-form, so this wedge here is a (2,2)-form. Okay, so yeah, ignore everything I said. Here's what we're interested in in algebraic geometry. The analysts have done the hard job for us, and now we can forget about the equations and reduce it to studying a stable Higgs pair. So there's the bundle, the field (an endomorphism, a twisted endomorphism), satisfying a stability condition. And one of the insights that this work has produced is that, bizarrely, you shouldn't take slope stability. You should take Gieseker stability, because otherwise it's all a mess and you get the wrong answer. So the first thing we'll do is replace slope stability by Gieseker stability or semistability. And the second thing is that this means we can partially compactify things by allowing E to be a torsion-free coherent sheaf instead of a vector bundle. That will partially compactify the moduli space, which gives us more of a chance of defining an invariant. But you'll still have non-compactness, because you can scale this phi: if you just scale phi, if it's non-zero, then you get another solution. Okay, so exercise: when the degree of the canonical bundle is negative, that stability condition I told you forces the Higgs field to vanish. I mean, very roughly, phi is a map from a vector bundle to a more negative vector bundle, and so by taking kernels and cokernels and so on, you can prove this rather easily.
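To fix notation for the Higgs pairs being described (standard conventions; the slope form of stability written here is the usual equivalent phrasing of "slope of the sub less than slope of the quotient"):

\[
  (E,\phi), \qquad \phi\in H^0\bigl(\operatorname{End}(E)\otimes K_S\bigr),
  \qquad \operatorname{tr}\phi = 0, \qquad \det E \ \text{fixed},
\]
\[
  \mu(F) := \frac{\deg F}{\operatorname{rk}F} \;<\; \mu(E)
  \quad\text{for all $\phi$-invariant subsheaves } 0\neq F\subsetneq E.
\]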
So, Vafa-Witten have a vanishing theorem: essentially, when the curvature of the manifold is positive in some sense, all the Higgs fields (those Bs and gammas and whatever) vanish, and you get reduced to the anti-self-dual equation considered by Donaldson. And this is the complex analog of that. Okay, so in this case the moduli space is compact and you can do something. But otherwise, it's tricky. Yeah. What do you mean by partially compactify? Partially compactify: a slightly bigger space, which is a bit closer to being compact, but isn't compact. I don't mean anything precise; ignore it. What's the meaning of the zero? Yeah, trace free: the trace of phi should vanish. But the trace is a section of K_S. Yeah, that's right. Okay, so what we're going to consider actually is the moduli space. So when this vanishing holds, what we end up with is a moduli space just of stable sheaves. This is what you get in Donaldson theory on a projective surface, with fixed determinant, maybe trivial determinant. And then what you find, as we'll see in the next lecture and as Lothar mentioned, is that the Vafa-Witten invariant is some kind of Euler characteristic, or virtual Euler characteristic, of this moduli space. And there was a lot of work, starting back in 1994 and going on, about computing these numbers and seeing modular forms and so on. But really what this lecture is about is the other components, when phi is nonzero. But we'll get to that. Is it true that the trivial determinant condition follows from the fact that we consider an SU(r) bundle? Yeah, but you don't have to do it. Later we'll fix the determinant to be just a line bundle. But this is to do with the Lie algebra of SU(r) and this is to do with the group SU(r): so here you fix the determinant and here you take trace free. Right, this is where I really want to start the lecture. So this is the cool thing. Those of you who know about Higgs bundles on curves or Hitchin systems or something will know this. The rest of you won't know this, and you're really missing out; it's just a beautiful piece of mathematics. It explains what matrices are. So pay attention. Okay, so we have this data, and here's what we're going to do: we're going to turn it into data on a bigger space, the total space of the canonical bundle of S, which we call X. Okay, so this is a Calabi-Yau threefold. This data is equivalent to a certain torsion sheaf, a two dimensional sheaf supported on a surface called the spectral surface, which is some kind of multiple cover of the surface itself. Okay, as I've drawn here in this picture, the vertical lines are the fibers of the canonical bundle of S. And roughly speaking, this torsion sheaf is a line bundle on this spectral surface. Okay, so for now this is just the rough idea, and then we'll do it rigorously. So what is this? Roughly speaking, the red points here: let's work on one fiber, okay, so one point of S. So what do you have? You have this endomorphism phi. What you do is you plot its eigenvalues. There are three (this is the rank three case); you plot its three eigenvalues in K_S. There's always this twist by K_S. And what do you put over those eigenvalues? You put the eigenspace in E, okay? So there's an eigenspace in E, so that's some line in E at this point, and there's one over there and there's one over there. And then you put it all together in a family and this gives you a line bundle over this spectral surface.
All right, that's somehow the generic situation. In reality, this surface might be some thickened version of S, or it might just be S itself with a higher rank bundle on it instead of a line bundle. So it's more complicated, but that's the rough idea. So any questions about that? Okay, so next we're going to do it rigorously, but first we're going to do it over a point. So we're going to understand a vector space and an endomorphism. All right, so we're going to understand what matrices are. So we fix a vector space and an endomorphism, and I'm always working over the complex numbers. So V is a C-module. But now we've just made it into a C[x]-module, or a C[phi]-module: we just let x act through phi. Phi commutes with itself, right? So this is a commutative thing; it's really a C[x]-module. Okay, but we know what C[x]-modules are. They're sheaves on the affine line, on Spec C[x]. And the fact that V is finite dimensional means that the sheaf has finite dimensional sections. So it's really a torsion sheaf; it's only supported on a finite number of points. Otherwise it would be an infinite dimensional module. Okay, so we get a torsion sheaf supported (maybe exercise, check you're happy with this) on the eigenspaces of phi. Okay, that's why Spec is called Spec: it's spectrum, it's to do with eigenvalues. The points of Spec C[x] are the eigenvalues of the operator x. Okay, so exercise. This is really worth doing; this is really an exercise about matrices. So here's a silly matrix, a multiple of the identity; check you're happy that under the Spec correspondence it really corresponds to two copies of the structure sheaf of the point lambda, the eigenvalue. Okay. Whereas if you take this Jordan normal form, what does this correspond to? So really check what module this gives you over C[x], and what sheaf that corresponds to. What you should find is that it's very similar, but it's not the same: it's the structure sheaf of the thickened point 2 times lambda, the double point at lambda. It should actually be supported on eigenvalues of phi, right? Yes. Oh, sorry. Eigenvalues. Thank you. Yeah, I'll correct that. Thank you. Okay. And more generally, another exercise: this explains the difference between the minimal polynomial and the characteristic polynomial. The module is actually supported, scheme theoretically, on the zeros of the minimal polynomial, but its divisor class is given by the characteristic polynomial. If you don't know what the divisor class of a module is, then ask someone. This is also called the Fitting support or Fitting ideal, but I can only go through so many things, so I'm not going to go over that. It's a very pretty story; someone could explain it to you very quickly. Okay, any questions? Yeah. What do you mean by generalized eigenspaces? Oh, sorry, a generalized eigenvalue is a lambda such that phi minus lambda to some power is zero, not just phi minus lambda is zero. And then the eigenspaces are the things that are killed by this phi minus lambda. Did I say that right? No, I didn't say that right, I said that wrong. So lambda is a generalized eigenvalue and v is a generalized eigenvector if phi minus lambda times the identity, applied to v, but maybe you have to do that n times, is zero. That's the condition. Okay, so now we want to do this rigorously and globally.
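A sketch of the two matrix exercises just posed, in module language; this is what I believe the intended answers are.

\[
  \phi=\begin{pmatrix}\lambda&0\\0&\lambda\end{pmatrix}
  \ \rightsquigarrow\
  V\cong \mathbb{C}[x]/(x-\lambda)\,\oplus\,\mathbb{C}[x]/(x-\lambda)
  \;=\;\mathcal{O}_{\{\lambda\}}^{\oplus 2},
  \qquad
  \phi=\begin{pmatrix}\lambda&1\\0&\lambda\end{pmatrix}
  \ \rightsquigarrow\
  V\cong \mathbb{C}[x]/(x-\lambda)^2
  \;=\;\mathcal{O}_{2\{\lambda\}},
\]
% same characteristic polynomial (x-lambda)^2, hence the same divisor class,
% but different scheme-theoretic support: the reduced point versus the double point,
% i.e. minimal polynomial (x-lambda) versus (x-lambda)^2.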
So globally, what we do is we take our sheaf E on S with its Higgs field phi, and we make it into a pi_* O_X-module. So we push the functions down the fibers, and we get this; always think over a point: this is just a polynomial ring in x, relatively. And you make E into a module over this by using phi to the i; that's how you describe the action of this graded ring on E, by this here. And it's all commutative, obviously, because phi commutes with phi to the i. Okay, and so what you end up with is a pi_* O_X-module. And what is that? Again, by the relative version of Spec, that is in fact a sheaf over X. And it turns out the stability condition I told you about is equivalent to stability for this sheaf. Okay, I'll just say it, it's not so important: we're interested in Gieseker stability for (E, phi). That's where you look at the reduced Hilbert polynomials of phi-invariant subsheaves. That's equivalent to the Gieseker stability of this curly E_phi, the spectral sheaf, which is to do with the reduced Hilbert polynomials of subsheaves of E_phi on X. So stability matches up, and I won't go into the details. Alright, so exercise. I really advise doing this one; this is nice. Describe a Higgs pair on just the affine line (so this is really one dimension down, this is really sort of Hitchin theory) with spectral curve y squared equals x. So you really want to see this branching, this non-trivial branching. And it is non-trivial. You see, what does it mean, the spectral curve y squared equals x? Well, y is the eigenvalue and it's got to be the square root of x. But square root of x is not a well defined function: it has monodromy, it's multiply valued. So you can't just write a diagonal matrix with square root of x, minus square root of x. All right, you've got to work a bit harder than that. And you know you can't diagonalize it, because you can diagonalize it away from the branch point, but at the branch point you're getting this double point vertically, and we know that that's supposed to correspond to a Jordan normal form. So you can't. Yeah, we have a question online. Do you need some condition that the support is finite and flat over S? No, it all comes out in the wash. There are no conditions here at all; this is an equivalence of categories. I mean, I haven't set up one of the categories, but it's absolutely fine. Everything just comes out in the wash. And also a question: you assume that phi wedge phi is equal to zero? You don't have to, because you don't have a one-form here. Yeah, where was phi? Yeah, there: because phi is already a two-form, so there's no wedge condition. Yeah, that's right. In general, you would have to do that; you need some commutativity property. Okay, so you're going to get this branching. So the hint for this question is that a two-by-two matrix is determined, essentially, by its trace and determinant. Okay, so if you want eigenvalues square root of x and minus square root of x, you know what the trace and determinant are. And so now I think you can write down the right matrix, which is well defined and is not multiply valued. Okay, and then exercise: do the same for y squared equals x squared. So now there's no branching, but you can do two different things: you can describe two different spectral sheaves on that spectral surface.
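A sketch of the matrix the hint is pointing at for the spectral curve y^2 = x on the affine line (rank two); this is the standard companion-matrix answer, so take it as my guess at the intended solution rather than something read off the slides.

\[
  \phi(x)=\begin{pmatrix}0&1\\x&0\end{pmatrix},
  \qquad
  \operatorname{tr}\phi=0,\quad \det\phi=-x,
  \qquad
  \text{characteristic polynomial } y^2-x,
\]
% eigenvalues plus/minus sqrt(x): well defined as a pair even though neither branch
% is a single-valued function; at x=0 the matrix is a nilpotent Jordan block,
% matching the double point of the spectral curve over the branch point.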
One of them corresponds to a diagonalizable Higgs field and one corresponds to a non-diagonalizable Higgs field with a non-trivial Jordan normal form. They're really great exercises. Okay, conversely, if I have one of these torsion sheaves supported over the surface, it will be compactly supported. So that's relevant to the question online: this is an equivalence with compactly supported sheaves on X. Okay, so if you have a compactly supported sheaf on X, then you recover a sheaf on S by pushing down, by taking sections. And then you can recover the Higgs field from the action of the tautological endomorphism which you have on X. So, you know, every point of X gives you a section of this line bundle: there's a tautological section of this line bundle K_S, or rather its pullback to X, right? So it's zero here and it's one here and it's two here and so on. All right, if you multiply that by the identity, you get a canonical endomorphism, which is zero on the zero section and one up here and so on. Okay, that's the thing which gives you the Higgs field. So it turns out that Higgs pairs are entirely equivalent to compactly supported torsion sheaves on X. All right, so now we've reduced this Vafa-Witten thing: you can describe it either by Higgs pairs, or by torsion sheaves on a certain local, non-compact Calabi-Yau threefold X. And here you see the non-compactness, right? I mean, X is non-compact. You can scale the Higgs field, you can stretch everything in the K_S directions, and you can see it's inherently non-compact. You could try to compactify (projectively complete it or something), but you'd lose the Calabi-Yau condition. It's not clear whether that's a profitable thing to do. It probably is, and maybe you could study it. Yeah. Does the spectral construction work more generally with an arbitrary line bundle? Yes, it does. Yeah. There's nothing special about the canonical bundle here. Okay, and then what we're really interested in is two conditions. Remember, we're dealing with SU(r), not U(r). And what that means is we want to fix the determinant of E and take the trace of phi to be zero. And what that corresponds to is that the determinant of the pushdown of E should be trivial (so that's not really a translation at all), but the trace of phi being zero is the condition that this spectral sheaf has center of mass zero on each fiber. So you would have to make sense of the center of mass. In this case, it would be this point of the fiber plus this point of the fiber plus this point of the fiber. But more generally, the spectral surface, instead of being three to one, might just be one to one, and you have a rank three bundle on it instead of a line bundle. So that's the case where your matrix is a multiple of the identity, and in that case you have to weight those points by three. So you have to make sense of the center of mass. The easiest way is to just say trace phi equals zero; that's the best way of making sense of the center of mass. Okay, so you could forget everything up until now and just say Vafa-Witten theory is about these spectral sheaves, these sheaves on X, which satisfy this strange condition: they have center of mass zero on each canonical bundle fiber, and there's a condition on the determinant of the push forward. Yeah, yeah, probably.
Yeah, if you take the divisor class, yeah, you could do that. There's probably a way of formalizing that. Okay, so we want a virtual cycle on this thing, and then we've got to deal with non-compactness. Okay, so I already said this about stability: we're going to use Gieseker stability. And the first thing we're going to do, like in the last lecture, is assume for simplicity that there are no strictly semistables. So we're going to assume that anything that's semistable is actually stable. That's just to start with. And then what you find is that the deformation theory (we've already done this) of compactly supported coherent sheaves on a Calabi-Yau threefold is perfect. It's also got a certain symmetry. So it's the same as before: there are deformations, obstructions, and the obstructions happen to be dual to the deformations. This grey piece is sort of for later; that's when you keep track of the C star action. If you know what this is: I'm just tensoring with the one dimensional standard character of C star there, the standard representation, the weight one representation of C star. So that comes in. The point is that this Calabi-Yau threefold has a holomorphic 3-form on it, and the C star action of scaling the fibers does not preserve that holomorphic 3-form: it scales it by a factor, weight one or minus one. That's important. If the determinant is trivial, isn't the volume form kind of trivial? No, that was the determinant of the bundle, whereas this is the holomorphic 3-form on X. Okay. And there are no higher obstructions, at least once you take trace free. I'm making Dennis nervous now, but it can all be handled. Okay. And therefore it inherits a virtual cycle. And that's essentially what the last two lectures were about. And you know, there's my picture, remember all this. Okay. So there's a way of describing the moduli space by Kuranishi models. They give you these graphs and you make them vertical, you get a cone in a vector bundle, and you intersect that with the zero section. Okay. And so you get a zero dimensional virtual cycle. So this is essentially some kind of DT theory. Okay. But M is non-compact, just as X is non-compact. So really the virtual cycle vanishes; it doesn't make much sense. But we're going to use this C star action where we scale the fibers, the canonical bundle fibers, or equivalently we scale the Higgs field phi. You can work on X or you can work on S with Higgs pairs; they're equivalent. And at various points in these papers, we switch from one to the other. There are various things which are very convenient in one picture, and various things that are impossible in one picture and very convenient in the other. And now the C star fixed locus is compact. And that's essentially because the fixed locus of C star on X is compact: it's the original surface. Yeah. I think that's not quite true actually. Yeah. We could talk about that after. I mean, if it would be zero, then the localized one would be all right. Yeah. I think you're right. Anyway, it's not a great thing to work with, let's just say that. Okay. And in fact, in the K-theoretic setup in my last lecture, there's really no difference between the non-compact and the compact. But you have to use the C star action. Yeah. So let's say it's uninteresting, and one should use the equivariant version, and then it becomes interesting. Anyway. Okay.
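The definition by localization being described can be summarized roughly as follows; this is schematic, and the fixed-determinant, trace-free refinement comes later in the lecture.

\[
  \mathrm{VW}(S)\;:=\;
  \int_{[M^{\mathbb{C}^\times}]^{\mathrm{vir}}}\frac{1}{e\bigl(N^{\mathrm{vir}}\bigr)}
  \;\in\;\mathbb{Q},
\]
% virtual localization applied to the non-compact moduli space M of spectral sheaves:
% the C*-fixed locus is compact, N^vir is its virtual normal bundle, and since the
% virtual dimension is zero the expression is a constant in Q(t), hence a rational number.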
So the C star fixed locus is compact because the fixed points are those sheaves which are really supported on S, but only set theoretically on S: they can be supported on some scheme theoretic thickening of S. Okay, we'll come to that soon. So we're getting sheaves supported on S, and S is compact, so you end up with a compact moduli space of sheaves. And because it's compact, what we can do is just apply the virtual localization formula and integrate. So again, this is something I thought long and hard about, whether I was going to discuss localization. Eventually I decided I didn't have time, and Lothar already did it. So, you know, when you're dealing with the cohomology of a variety with a C star action, a circle action, most of the information is contained in the fixed locus: you can localize integrals to the fixed locus. And so we do that here. Even though the integral might not make sense on the non-compact thing, we just pretend it's compact, apply the localization formula, and we get an expression on the fixed locus. And because this does make sense, it's some kind of regularization of the integral on the non-compact thing. We just take this as our definition. All right. And because we've localized, that involves inverting certain things like this. So we end up inverting integers, and it turns out we end up with a rational number. Even though we're assuming there are no strictly semistables, we end up with a rational number. Don't you have to take some residue? Yeah, that's what... yeah, this is sometimes called a residue. Do you mean like a residue of a function of t, like the coefficient of t inverse? No, it turns out that because the virtual dimension is zero, you're taking the constant coefficient. And actually, because the virtual dimension is zero, this integral really is constant inside Q(t). Exercise. Yeah. No, this is a bit like the question yesterday: can you define this via Behrend functions and weighted Euler characteristics? So I refer... which Dima? Ah, I refer Dima to the original papers, where we define two invariants, one via Behrend functions and weighted Euler characteristics and one in this way using virtual cycles. Because of non-compactness, they're not the same. And for a long time we worked with both. And then we did computations which showed that to get the numbers that physicists get, you have to take this definition, not the other one. And the cosection that he's referring to is much more closely related to the Behrend function, I think. So anyway, for now I say no. All right. So this is really just what's called a local DT invariant. But this is really the U(r) Vafa-Witten invariant, because I haven't fixed determinants and centre of mass zero and trace zero and so on on this particular slide. And what you find is that this is kind of uninteresting the moment the cohomology of S is not very simple. This is fine if the cohomology of S is very simple: for instance, when you have the vanishing theorem, so if the canonical bundle of S is negative, then this is a fine definition and it really is the Vafa-Witten invariant. But in general, for general S, general type S for instance, this invariant is always zero. And that's to do with the action of the Jacobian of S and this extra piece in the obstruction theory coming from S. So I leave that as an exercise. That's a trivial exercise for people who know a bit about obstruction theories and duality and so on, and it's an impossible exercise for most people.
So ignore it, or ask someone and they can explain it to you. It's one of those things that's completely trivial when you see how to do it, but you would never see how to do it. Okay. So what we really want to do is define an SU(r) Vafa-Witten invariant that's going to have more chance of being nonzero. It's much more interesting, and it's not just a DT invariant; it's something new. Okay. So what we do is we take this moduli space, which I call M-perp, of those E with centre of mass zero and this fixed determinant condition. Okay. Equivalently, instead of all Higgs pairs we take those stable Higgs pairs with trace zero and fixed determinant. And you can relax this O_S and fix just any old line bundle; that's fine. Okay. So I won't go through too much of this, but it turns out you can change the deformation theory. You see, the deformation theory of these sheaves on X, these torsion sheaves, was governed by this Ext group here. And what you can do is prove there's a splitting: you can take out a piece. So this first piece, the cohomology of O_S, governs the deformation theory of the determinant of the pushdown of E, so the determinant of straight E on S. The deformation theory of that is the deformation theory of a line bundle, so you get Ext from a line bundle to itself, but that's just the cohomology of the structure sheaf. Right: this structure sheaf here is the endomorphism sheaf of the line bundle det E. So this first piece governs the deformation theory of that line bundle that I want to fix, so I'm going to remove it. The second piece governs the deformation theory of the trace of the Higgs field, or the center of mass of the spectral sheaf. And then the third piece is what's left over, and you can prove there's a canonical direct sum decomposition. And that is what governs the deformation theory of my SU(r) Vafa-Witten Higgs pairs, or sheaves with center of mass zero. Okay. So if you wish (again, this is more for experts, I think), as an exercise you can deduce this, at least pointwise, in the following way. There's a certain resolution of our spectral sheaf. Okay. Remember, this is an eigenspace and this is the vector space, so this is just the map from the vector space projecting to the eigenspace, by projecting out all the other eigenspaces. But this is just the obvious adjunction: it takes the sections of a sheaf to the sheaf, evaluation of sections. Okay. And what's the kernel? Well again, intuitively this is kind of clear. There is no kernel generically: generically E_phi is zero, it's a torsion sheaf. But there's a kernel at points where phi minus tau (remember tau is the tautological eigenvalue function) has some kernel, where its rank drops, so it has some kernel and cokernel, and that kernel or cokernel is the generalized eigenspace. And so that gives you this resolution. There should be an identity times that tau, really. So at the points where tau (that is, how far you are up the canonical bundle) is equal to an eigenvalue of phi, this operator here suddenly drops rank and it has some cokernel, and that is the eigenspace. And that's exactly what this resolution says. All right. So it's an exercise to prove it. It's a pretty tricky exercise, but it's in our paper, the original paper with Tanaka. Okay.
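For the record, here is my reading of the resolution and the splitting being described; pi : X -> S is the projection, tau the tautological section of the pullback of K_S, and the displayed splitting is only of the Ext^1, as a sketch of the general pattern rather than a verbatim copy of the slide.

\[
  0 \longrightarrow \pi^*\bigl(E\otimes K_S^{-1}\bigr)
  \xrightarrow{\ \pi^*\phi\,-\,\tau\cdot\mathrm{id}\ }
  \pi^*E \longrightarrow \mathcal{E}_\phi \longrightarrow 0,
\]
\[
  \operatorname{Ext}^1_X(\mathcal{E}_\phi,\mathcal{E}_\phi)
  \;\cong\;
  H^1(\mathcal{O}_S)\;\oplus\;H^0(K_S)\;\oplus\;\operatorname{Ext}^1_{\perp},
\]
% first summand: deformations of det E; second: deformations of tr(phi), i.e. the
% centre of mass of the spectral sheaf; third: the piece governing the SU(r)
% (fixed determinant, trace-free) theory.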
And then maybe a more manageable exercise, much easier, is just to take Exts from this sequence to the sheaf E_phi, curly E_phi. Okay. Then you get this adjunction, and you get this long exact sequence here. And this is what tells you how to relate the deformation theory of the Higgs pair to the deformation theory of the sheaf. So in the middle here is the deformation theory of the sheaf on X, and the sub is the deformation theory of the Higgs field. So as I deform phi, as I change phi, I deform E_phi. And as I deform E_phi, by pushing down I deform E on S. And not every deformation of E on S gives me an E_phi, because I might deform my E on S and not be able to deform the Higgs field with it. There may be an obstruction to deforming the Higgs field. And that's this last arrow here, and this is the obstruction space for the Higgs fields. Okay. And now, using this, you can see these pieces. So the cohomology of O_S comes from this third term here: it comes from taking trace or identity. It sits as a summand in this Ext^1 of straight E with straight E on S; it sits as a summand in there via the trace and identity maps. And instead of talking about deformations of E, it's talking about deformations of det E. And its Serre dual (everything has this duality), which is the cohomology of K_S, is sat in the first piece, again as a summand via the trace and identity maps. It's sat there in terms of the Higgs field: it's the trace part of the Higgs field. And then the perp is what's left over. Yeah. For Ext^1, could you view that second summand as coming from the trace and identity maps here? The blue lines? Here? Yeah, the second summand. This one. For Ext^1 it's going to be H... yeah. For Ext^2, for Ext^2, of the structure sheaf. Yeah, I think so. It certainly has that flavour and I bet you could do it. Yeah. It's not how we do it, but you could do it. Yeah, I think that's right. Yeah, that's a very good point; you could do it that way. Okay, so I think I said this already. All right, so what you end up proving is that this M-perp, this SU(r) Vafa-Witten space, also has a nice perfect obstruction theory, of virtual dimension zero. It has this symmetry, and you can make the same definition as before and you get a Vafa-Witten invariant. And okay, here's a six month exercise, if anyone wants to do it. I'm not sure how valuable it would be, but an alternative way of defining this perfect obstruction theory in the rank two case would be to take the fixed locus of dualizing and applying minus one on the fibers. So you can dualize your E_phi on its support (you can write that actually on X in terms of Ext^1 instead of Hom, but anyway), you can dualize that line bundle, that spectral line bundle, and apply minus one to the spectral surface, and then the fixed points of that should give you this M-perp. And Graber and Pandharipande told us how to pass from a perfect obstruction theory to a perfect obstruction theory for a fixed locus under a group action. So that would give you a more immediate way of constructing this perfect obstruction theory. Yeah, but you can surely adapt what they did. The only reason I say this is because this slide is hiding really 30 pages of hideous deformation theory. I mean, we should have found an easier way of doing it, but we never did, and this is one easier way of doing it in the rank two case.
It turns out to be really, really hard to define this M-perp perfect obstruction theory. Can we take some sort of derived fibre? Yeah, that's another thing we also discussed in the paper. We couldn't do that, honestly, because we didn't have the expertise; but yeah, if you're happy to say infinity, derived algebraic geometry, homotopy something, then you could do that. Hopefully it goes away in a second. All right, so just treat it as a black box. There's this virtual cycle; we localize to the fixed locus of the C* action, and we define an invariant. All right. And what these lectures are about is essentially how you compute this and what the two contributions are. There are two different types of contribution to this, as Lothar mentioned; there are two types of fixed locus. Okay. One is where phi is zero. That is obviously fixed under the C* action. So that corresponds to sheaves on X which are scheme-theoretically supported on S; they're pushed forward from S. So they're not line bundles on S, they're really rank r sheaves on S pushed forward to X. That's the most degenerate case, where the Higgs field is zero. And then the exercise is to show that you can also get fixed pairs from nilpotent phi. That's essentially down to the fact that when you scale a Jordan block, with zeros and just a one in the corner, when you scale that one to a lambda, it's, what's the word I'm looking for, similar to itself, right? This matrix can be conjugated to this matrix by an endomorphism. So what you need is an endomorphism of your E, which is possible because E isn't necessarily stable on its own; E is only stable when you pair it with the Higgs field. So you can have an endomorphism of E which takes this to this. So when you scale your Higgs field by an element of C*, you end up with a similar Higgs field, so you end up with an isomorphic Higgs pair. It's easier to see the C*-fixed loci in the sheaf-on-X language rather than the Higgs pair language. So on X, what you're doing is taking sheaves which are supported on a thickening of the zero section. You can see those could be C*-invariant. They're not all C*-invariant; the structure sheaf of a thickening of S, for instance, is C*-invariant. So we get a decomposition of the C*-fixed locus into two pieces. The so-called instanton branch: this is sheaves with fixed determinant on S, and this corresponds to solutions of the anti-self-dual equations by the Hitchin-Kobayashi correspondence. Yeah? Actually, a question: why do you need this kind of C* action here? Just to make things compact. Everything's non-compact; you can't define invariants because of that. But by localizing to the fixed locus of the C* action, you get a compact situation where you can define things. And then there is what we call the monopole branch, on which phi is nilpotent rather than zero. So the first are the sheaves supported scheme-theoretically on S, and the second are the sheaves supported set-theoretically on S but not scheme-theoretically on S. Oh yeah, I said that. And I said that. So the Vafa-Witten invariant is the sum of these two contributions, and then they seem to be swapped, in a certain sense, by the S-duality voodoo predictions of physics. This is some non-abelian version of electromagnetic duality; those are just words, some kind of jargon I heard. And it...
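As a hedged aside in my own notation: the invariant being treated as a black box here is defined by virtual localization to the compact C*-fixed locus,

$$\mathrm{VW}(S)\;:=\;\int_{\bigl[(M^{\perp})^{\mathbb{C}^*}\bigr]^{\mathrm{vir}}}\frac{1}{e\bigl(N^{\mathrm{vir}}\bigr)}\;\in\;\mathbb{Q},$$

and the conjugation trick for nilpotent Higgs fields is, in rank two, just the elementary identity

$$\begin{pmatrix}1&0\\0&\lambda\end{pmatrix}\begin{pmatrix}0&0\\1&0\end{pmatrix}\begin{pmatrix}1&0\\0&\lambda\end{pmatrix}^{-1}=\begin{pmatrix}0&0\\\lambda&0\end{pmatrix},\qquad\lambda\in\mathbb{C}^*,$$

so scaling a nilpotent Higgs field by lambda gives a conjugate, hence isomorphic, Higgs pair.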
If phi is equal to zero and the moduli space is on S, is it easier to describe the virtual cycle? Yeah, we're going to do that next, and Lothar did it yesterday. Okay, there's my answer: we're going to do that next. Okay, so I'm going to focus on the monopole locus, because the other locus, the instanton branch, is where people have made lots of progress over 20, 25 years, the latest progress being by Lothar and Martijn Kool, and a lot is known about that. So mainly I'm going to focus on the monopole locus. But to start with I will talk about this instanton locus very briefly, and I refer you to Lothar's lectures for more information. So let's look at moduli of sheaves on S. So it's just sheaves, straight E, with zero Higgs field. That has its own virtual cycle, based on the deformation-obstruction theory of sheaves on S. Okay. If you hate all this deformation theory, just consider the case where it's already smooth of the correct dimension, where this Ext^2 here vanishes. So you can already form this moduli space of sheaves on S; we know how to handle it in many different ways. But this is the most general case. So it has its own virtual cycle. But considered in this threefold theory instead of a surface theory, it has extra deformations and extra obstructions. Okay, so you can start to deform the Higgs field, and that lives in here. So that's an extra deformation, and it's dual to the obstruction space. And it has extra obstructions here, the obstructions to deforming the Higgs field, and they're dual to the original deformations. And, you know, as an exercise you can really work that out. This is easier than the previous exercise, because now my sheaf is supported on S, so it's much easier to resolve it, and this is how you do it. And you really just see that the deformations (this is in the sheaf language, so here's your curly E, it's just the pushforward of straight E), when you take its deformation theory, you get the original deformation theory on S, but then you get this other piece, which is essentially the dual of that, up to a shift and a character of C*. Okay. So in fancier language, this says that when considered on X you're taking a (-1)-shifted cotangent bundle of the derived scheme of sheaves on S. That doesn't help anyone, I appreciate. But what it's saying is that the virtual tangent bundle is the old virtual tangent bundle, just of the surface theory, plus its dual, which gets shifted and picks up a character, and that becomes the virtual normal bundle. Those are the directions in which you deform the Higgs field; that's the normal direction to this fixed locus. Okay. And from that, using the localization formula, you can compute the Euler class of this guy. And you can see it's very close to being just the Euler class of the virtual tangent bundle; in fact it's dual. And so what you find is that when you take this definition by localization of the Vafa-Witten invariant, or at least the contribution to the Vafa-Witten invariant from the instanton locus, what you get is this signed virtual Euler characteristic. And I use a different sign from Lothar. And this has been heavily studied over 25 years, and really the state of the art in recent years is Göttsche and Kool, which you heard about yesterday, and physicists, in particular Pioline and Manschot and other people.
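A sketch of the structure just described, in my own notation and up to sign conventions (the speaker notes his sign differs from Lothar's): writing M_S for the moduli space of sheaves on S with phi = 0 and t for the weight-one character of C*, the virtual tangent bundle restricted to the instanton branch should look like

$$T^{\mathrm{vir}}\big|_{M_S}\;\simeq\;T^{\mathrm{vir}}_{M_S}\;\oplus\;\bigl(T^{\mathrm{vir}}_{M_S}\bigr)^{\!\vee}\otimes\mathfrak{t},$$

with the second summand the virtual normal bundle (the Higgs field directions), giving a localization contribution of the shape

$$\mathrm{VW}^{\mathrm{inst}}(S)\;=\;\int_{[M_S]^{\mathrm{vir}}}\frac{1}{e\bigl((T^{\mathrm{vir}}_{M_S})^{\vee}\otimes\mathfrak{t}\bigr)}\;=\;\pm\,e^{\mathrm{vir}}(M_S),$$

the signed virtual Euler characteristic of M_S mentioned in the lecture.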
And there's this vanishing theorem: when the degree of the canonical bundle is negative, that's everything. So that kind of justifies studying this locus intensively. But I'm interested in the case where this vanishing theorem doesn't hold and where you get monopole contributions. And then we'll come back later to the interaction between these two contributions, and how, incredibly, this S-duality appears to swap the two. And so that means you can make predictions on one side or the other: sometimes you can do calculations more easily on one side, sometimes on the other, and then the S-duality switches them. And it's a very powerful tool, as Lothar demonstrated yesterday. Okay, so I'm really interested in this monopole locus. So for instance, here's what happens in rank two. An exercise: take a sheaf supported on two times the zero section, on this thickening of the zero section, and tensor it by this obvious exact sequence to describe it in terms of rank one sheaves on S. So this piece will give you a rank one sheaf on S, this piece will give you a rank one sheaf on S, and there'll be a map between them, up to tensoring with K_S. And both rank one sheaves will (let's assume everything's torsion-free) give you ideal sheaves, and the map between them will give you a nesting of the two ideal sheaves. I will do that in more detail, and in more generality, later. What that corresponds to in Higgs language is that you can decompose the Higgs pair using the C* action. So you can decompose your vector bundle E into a weight zero piece, which is fixed by the C* action, and a weight minus one piece, which is not. And what you find is that the Higgs field (remember it was off-diagonal, it was this Jordan block) maps E_0 to E_1. Okay. And stability forces the E_i's to be torsion-free, so up to tensoring with a line bundle they're ideal sheaves. And what it turns out is that this phi ends up being a map between two ideal sheaves of subschemes of S. All right. So the green bit was doing it in the sheaf language on X; the black bit is doing it in the Higgs language on S. And so these monopole contributions have something to do with nested Hilbert schemes, and essentially computing with these things is the subject of the rest of the course; and also eliminating other components. You know, in higher rank there are other components where these aren't both rank one: this could be rank two and this could be rank one. And then the rest of the course is about showing why they don't contribute, and so on. Okay, now just some history. So we did some computations a few years ago, just in very low degrees, before we had the general methods for computation that I'm going to talk about in this lecture. And those computations were very hard work. And Martijn Kool observed that our answers, which we thought were rather disappointing, turn out to give the first few terms of modular forms predicted by Vafa and Witten hundreds of years ago. So here's what it was. Göttsche and Kool were already seeing these terms (this is cut out of the Vafa-Witten paper from 1994); they were already seeing these terms, but they weren't seeing this first line. And what we were seeing, with this term: the first four terms that we managed to calculate, Martijn observed, were exactly this term, the first four terms of this. This is some generating series.
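A minimal sketch of the rank two monopole decomposition just described, with my own labels L_i and Z_i for the twists and subschemes, which are not fixed by anything in the lecture:

$$E \;=\; E_0\oplus E_1,\qquad \varphi\;=\;\begin{pmatrix}0&0\\ \varphi_{10}&0\end{pmatrix},\qquad \varphi_{10}\colon E_0\longrightarrow E_1\otimes K_S,$$

with each E_i torsion-free of rank one, so E_i is isomorphic to L_i tensor I_{Z_i} for line bundles L_i and ideal sheaves I_{Z_i} of points in S. A nonzero phi_{10} is then, up to twist, a map of ideal sheaves, and this is what produces the nesting behind the nested Hilbert schemes.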
And then this term: so we were calculating, I think, with fixed determinant equal to the canonical bundle, for convenience. If you calculate with fixed determinant equal to zero, you get this term instead, and Laarakker has done that computation since. So that was what told us that this is the right definition, the one I've been giving to you. The other possible definitions that have been discussed are incorrect. Okay, so this is saying that the Behrend-weighted definition, which some of you will be aware of, gives the wrong answers. Okay, so I should, let me just do this slide. Okay, so more generally, something in the monopole locus is fixed by C*. You can lift that being fixed by C*: using stability, you can show that what it means is that you can actually make C* act on the sheaf, you can make the sheaf C*-equivariant. And then you can decompose it, using that equivariance, into C* weight spaces, and you end up with a picture like this. So you split your bundle into pieces, and these just correspond to the Jordan blocks. Okay, so you should think of this in the sheaf-on-X language. This is the bit supported scheme-theoretically on S, this is the next bit supported on the first-order thickening of S, and so on, up to the (k-1)-st thickening. And phi, which has weight one because it's being scaled by the C* action, always maps from E_i to E_{i+1}. And so you get this sort of chain of maps. And so this defines components of the fixed locus labelled by integers; later we'll show that those integers have to be decreasing, otherwise the component contributes zero. So you can think of this as a Young diagram, or a partition. Okay, so you get these components of the moduli space, where these are the ranks of these pieces. So these are the ranks of the sheaf: you've got the sheaf on this thickening of S, and on its support it has ranks on the different thickenings of S; you can look at what its rank is, and those are these r_i's. And the most important components, as Lothar explained yesterday, are the two extremes. One is where the whole thing is supported on S, so everything's in E_0; that's the instanton locus that he discussed and that I'm not going to talk about much. And then there's the other extreme, where it's spread out as much as possible and they're all rank one, and that's these nested Hilbert schemes. So that's where the profile looks like this; he called it one to the r. And what we'll see later is that when your surface has a holomorphic two-form, by something called cosection localization, which I'll discuss, the only nonzero contributions come from constant profiles, where all these ranks are the same, and where these phi's are generically isomorphisms over the surface. We'll discuss this later. But in particular, what it means is (and Lothar mentioned it) that when the rank is prime, there's of course no way of splitting it up into constant pieces like this except for the two extremes. So when the rank is prime (I won't tell you what a prime number is here), only the nested Hilbert schemes and the instanton locus contribute. Okay. So all I'm doing here is setting up for next time, explaining why nested Hilbert schemes are important. They're really almost everything in Vafa-Witten theory, or at least they're everything we know until now. And then this question was asked twice since the session yesterday.
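In symbols (my indices, not the slide's), the weight decomposition on a component of the monopole locus is

$$E\;=\;\bigoplus_{i=0}^{k-1}E_i,\qquad \varphi\colon E_i\longrightarrow E_{i+1}\otimes K_S,\qquad r_i:=\operatorname{rk}E_i,\qquad \sum_{i=0}^{k-1} r_i \;=\; r,$$

so components are labelled by the profile (r_0, ..., r_{k-1}). The two extremes are (r), everything in E_0, which is the instanton branch, and (1, 1, ..., 1) = (1^r), the nested Hilbert scheme branch.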
What about the first case not covered by this? So that's rank four, and the component (2,2). Okay. And I believe Sheshmani and Yau have been thinking about this. And the issue is that these individual E_i's needn't be stable. When they are stable, Andrei has done a great deal of work studying, you know, correspondences between moduli spaces of, sort of, E_0 and E_1, set up by maps between them, and so on. And the problem is that in general they needn't be stable, because the stability condition is more complicated. So I think the idea of Sheshmani and Yau is that you should try to wall-cross, through some subspace of stability conditions, to the situation where you have stable sheaves, where the two E_i's are both stable. And if you could do that, then you could start to study this problem. But at the moment, no one knows how to do that. Okay. Okay. So next time I'll start here, with the semistable case. So far I've always assumed that my sheaves are stable, and that any semistable sheaves are actually stable. Next time I'll show you how to deal with the semistable case. Any questions, comments? Yeah. Is the monopole equation like the Bogomolny equation? Sorry, is there a monopole equation related to this? I mean, the Hitchin equation is essentially this equation one dimension down, and there again you have a C* action and you can localize to compact fixed loci, and, you know, Hitchin does that. Yeah, that is not the question: the monopole, why do you say monopole? Oh, yeah, honestly, I have no idea. That's probably bad. So, in the original Vafa-Witten paper, they very briefly, in one section, mention that there do exist solutions which aren't just pushed forward, which aren't just instantons. And they're interested in the rank two case. So what they end up with is two rank one sheaves, essentially what we call the nested Hilbert scheme. But in their case they don't allow for singularities: they have two line bundles. And then they have this Higgs pair, which is therefore a section of a line bundle and a hom from one line bundle to the other. So they end up with an equation for a section of a line bundle, and this equation looks very much like the Seiberg-Witten monopole equation. And they call it monopole, and then it's become known as monopole. I'm sure that's a dreadful name, but I'm sure Higgs is a dreadful name as well. So, yeah. Any more questions? How badly does the Bialynicki-Birula decomposition work here? Yeah, I mean, what's really relevant is the virtual or derived version of that. And, yeah, my understanding is that Davesh has announced this localization formula, right? I think that's basically a derived version of this BB theorem. And he hasn't written it up, right? And never will. Okay, exercise. Yeah, I'm not an expert on this. I'm a bit nervous about saying anything, because the versions I know are in the language, essentially, of the Behrend function, and so they're not entirely relevant to what I'm doing here. But there are different versions. There are different refinements of DT theory where you replace the Behrend function by the perverse sheaf of vanishing cycles, and then there are other versions using K-theory, which I am going to talk about; but they all have localization formulae. And Davesh has proved a localization formula, which unfortunately he hasn't written down, but it's very much like a derived version of this BB formula or decomposition.
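For concreteness (my phrasing of the rank four example raised at the start of this exchange): the constant profiles in rank r = 4 are

$$(4),\qquad (2,2),\qquad (1,1,1,1),$$

so (2,2) is the first constant profile beyond the instanton branch and the nested Hilbert schemes, and it is exactly the case where the stability of the individual summands, here both of rank two, becomes the issue.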
But let me not say more than that, because, you know, I'm starting to say things that I don't really know what I'm talking about. If you were to work with a compact Calabi-Yau threefold, is there any way to put the conditions here, like trace zero and fixed determinant, in the intrinsic language of that? Yeah, that's a great question. Sometimes, but not usually; but usually it's done for you already. Let me come and talk to you about that later. Yeah. And often it's done for you: the very fact that you're deforming inside this Calabi-Yau kind of stops you seeing it, it already looks like an SU(r) theory rather than a U(r) theory. And essentially because the Calabi-Yau, you probably assume, has no Jacobian, right? And no H^{2,0}. Yeah, so physicists have since extended these S-duality conjectures from Vafa-Witten theory to the Calabi-Yau case. So they predict that when you calculate with two-dimensional torsion sheaves in a Calabi-Yau threefold, you should get modular forms, without putting extra conditions. Yeah, there's no problem; that's just the same as for the U(r) Vafa-Witten equations. That's really just the local DT theory. Again, there should be S-duality for that. It's just often a duality between zero and zero. So these are not the nicest modular forms: they're mock modular forms in general, and they're vector-valued modular forms. So they're hard things to deal with, but nonetheless. So, you know, according to what I said yesterday, Gromov-Witten theory might be determined by some modular forms, vector-valued modular forms, but I'm not convinced, actually, that that's a useful statement at all, because the length of the vector gets bigger and bigger: each time you try to pin something down and make a statement, you find that you have to make the vector longer. And then, you know, you want to use modularity to say, well, there's only a finite-dimensional space of modular forms of this type, therefore once I know ten coefficients I know everything, or something like that. But you often find that once you compute ten, you find you actually need more, because the length of the vector is longer than you thought. And then, I don't know, you compute some more, and you try to use our theorem about how to go between DT theory and Gromov-Witten theory, and you find some parameter has to get bigger in order to follow our theorem through. And when that parameter gets bigger, suddenly the length of the vector gets longer, and, yeah, it's not entirely clear yet that it's a useful theorem. So is there some hope for making this S-duality mathematical? I think in Vafa-Witten theory, absolutely. I mean, it's kind of what Lothar was saying. Oh, I see. Well, actually, geometric rather than mathematical. Yeah, I mean, at the moment it's really an artefact of the formulation; it's not something geometric. I mean, yeah, I've often thought about it. I don't know. I mean, it'd be nice, wouldn't it? Yeah. At the moment, no. I mean, you know, I'm going to explain how you're supposed to go between instantons, so sheaves on S, and these nested Hilbert schemes. The easiest way to see any kind of link is this: nested Hilbert schemes, as I will describe via degeneracy loci, you can express in terms of just Hilbert schemes. So that's one way; that reduces the theory to the theory of Hilbert schemes, which gives you modular forms, vertex operators and so on. Instantons, you can do something similar.
You can do this Mochizuki wall-crossing, where you take instantons plus another field, and then you start wall-crossing in the space of stability conditions for them. And at one end you get instantons; at the other end you get Hilbert schemes again, you manage to decompose your bundle into sums of rank one bundles and so on. So there's another route to Hilbert schemes. And then maybe S-duality is just relating these two statements about Hilbert schemes, but it's awfully indirect. Are the integrands on the Hilbert schemes of points, do they even look similar to each other? Not on the face of it, no. And you have to use lots of results of, you know, Nekrasov and people, integrable systems people and so on, lots of combinatorics, to eventually see that the generating functions are related at all. Yeah, I think this is really a mystery. I mean, people should really... Are you reducing this S-duality to something at the level of instantons on the one hand and some other sort of monopoles? But it's more general than that, is it not? Here it seems that S-duality can kind of relate instantons on the one hand and monopoles on the other. It's not; well, that's just one element of the S-duality group, there are others. So even if you just form the instanton generating series, or just the monopole generating series, those also have modular behaviour. So it's more general, is it? Yeah. I mean, it's not something we understand at all. We have a question: how does this S-duality relate to the S-duality for dimension one sheaves on the compact threefold? Dimension one sheaves? Codimension one sheaves on it. Yeah, it's the same thing. Yeah, there's a generalization of S-duality to that setting, which, when you restrict to these non-compact Calabi-Yaus, becomes the S-duality I'm talking about. I have a question. You said that using the Behrend function gives the wrong answer. Does that mean that there is no physics counterpart to that number? Yeah, that's right. It gives very uninteresting answers. It also gives modular forms, but they're all just the eta function. They're not the numbers that the physicists predicted. If you take an arbitrary line bundle instead of K_S, do you expect some sort of partial modularity property? Oh, yeah, I don't know. You'd need something like that line bundle to have degree lower than the canonical bundle, otherwise you'll get Ext^3's and stuff; you won't get a perfect obstruction theory. But then you could ask that. That's a great question. I have no idea. Yeah, I don't think anyone's done a single computation in that setting. If you're a graduate student, that's one for you. I think we can thank Richard. Thank you.
|
1. Sheaves, moduli and virtual cycles, 2. Vafa-Witten invariants: stable and semistable cases, 3. Techniques for calculation --- virtual degeneracy loci, cosection localisation and a vanishing theorem, 4. Refined Vafa-Witten invariants
|